Dec 03 12:55:39 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 03 12:55:39 crc restorecon[4698]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 03 12:55:39 crc restorecon[4698]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Dec 03 12:55:39 crc restorecon[4698]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc 
restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 12:55:39 crc restorecon[4698]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 12:55:39 crc restorecon[4698]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 12:55:39 crc restorecon[4698]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 12:55:39 crc 
restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 03 
12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 03 12:55:39 crc restorecon[4698]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 12:55:39 crc 
restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 12:55:39 crc restorecon[4698]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 03 12:55:39 crc restorecon[4698]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 03 12:55:39 crc restorecon[4698]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 03 12:55:39 crc 
restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 03 12:55:39 crc restorecon[4698]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:39 crc restorecon[4698]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 
12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 12:55:39 crc restorecon[4698]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 12:55:39 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 03 12:55:40 crc 
restorecon[4698]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc 
restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 12:55:40 crc restorecon[4698]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc 
restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 12:55:40 crc restorecon[4698]:
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 12:55:40 crc 
restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 12:55:40 crc restorecon[4698]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 03 12:55:40 crc restorecon[4698]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 03 12:55:40 crc restorecon[4698]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Dec 03 12:55:40 crc kubenswrapper[4986]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 03 12:55:40 crc kubenswrapper[4986]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Dec 03 12:55:40 crc kubenswrapper[4986]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 03 12:55:40 crc kubenswrapper[4986]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 03 12:55:40 crc kubenswrapper[4986]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 03 12:55:40 crc kubenswrapper[4986]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.726356 4986 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.730731 4986 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.730773 4986 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.730779 4986 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.730785 4986 feature_gate.go:330] unrecognized feature gate: Example Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.730790 4986 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.730797 4986 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.730801 4986 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.730807 4986 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.730813 4986 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 03 12:55:40 crc 
kubenswrapper[4986]: W1203 12:55:40.730821 4986 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.730826 4986 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.730831 4986 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.730836 4986 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.730841 4986 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.730846 4986 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.730851 4986 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.730856 4986 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.730861 4986 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.730865 4986 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.730870 4986 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.730874 4986 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.730879 4986 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.730883 4986 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.730888 4986 feature_gate.go:330] unrecognized feature gate: 
AdminNetworkPolicy Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.730892 4986 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.730898 4986 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.730904 4986 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.730909 4986 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.730913 4986 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.730917 4986 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.730921 4986 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.730925 4986 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.730930 4986 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.730935 4986 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.730940 4986 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.730945 4986 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.730949 4986 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.730953 4986 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.730957 4986 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.730963 4986 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.730969 4986 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.730974 4986 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.730978 4986 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.730982 4986 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.730986 4986 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.730990 4986 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.730994 4986 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.730998 4986 
feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.731002 4986 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.731007 4986 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.731011 4986 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.731015 4986 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.731019 4986 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.731023 4986 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.731027 4986 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.731032 4986 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.731037 4986 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.731041 4986 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.731045 4986 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.731049 4986 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.731053 4986 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.731056 4986 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.731060 4986 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.731064 4986 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.731068 4986 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.731072 4986 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.731078 4986 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.731085 4986 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.731089 4986 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.731094 4986 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.731098 4986 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731206 4986 flags.go:64] FLAG: --address="0.0.0.0" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731218 4986 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731232 4986 flags.go:64] FLAG: --anonymous-auth="true" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731240 4986 flags.go:64] FLAG: --application-metrics-count-limit="100" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731251 4986 flags.go:64] FLAG: --authentication-token-webhook="false" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731257 4986 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731320 4986 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731329 4986 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731335 4986 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731340 4986 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731346 4986 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 
12:55:40.731352 4986 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731357 4986 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731363 4986 flags.go:64] FLAG: --cgroup-root="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731367 4986 flags.go:64] FLAG: --cgroups-per-qos="true" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731372 4986 flags.go:64] FLAG: --client-ca-file="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731377 4986 flags.go:64] FLAG: --cloud-config="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731382 4986 flags.go:64] FLAG: --cloud-provider="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731387 4986 flags.go:64] FLAG: --cluster-dns="[]" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731393 4986 flags.go:64] FLAG: --cluster-domain="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731397 4986 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731402 4986 flags.go:64] FLAG: --config-dir="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731407 4986 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731413 4986 flags.go:64] FLAG: --container-log-max-files="5" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731420 4986 flags.go:64] FLAG: --container-log-max-size="10Mi" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731425 4986 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731430 4986 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731435 4986 flags.go:64] FLAG: --containerd-namespace="k8s.io" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 
12:55:40.731441 4986 flags.go:64] FLAG: --contention-profiling="false" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731446 4986 flags.go:64] FLAG: --cpu-cfs-quota="true" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731451 4986 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731457 4986 flags.go:64] FLAG: --cpu-manager-policy="none" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731463 4986 flags.go:64] FLAG: --cpu-manager-policy-options="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731469 4986 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731476 4986 flags.go:64] FLAG: --enable-controller-attach-detach="true" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731482 4986 flags.go:64] FLAG: --enable-debugging-handlers="true" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731487 4986 flags.go:64] FLAG: --enable-load-reader="false" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731492 4986 flags.go:64] FLAG: --enable-server="true" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731497 4986 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731504 4986 flags.go:64] FLAG: --event-burst="100" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731509 4986 flags.go:64] FLAG: --event-qps="50" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731514 4986 flags.go:64] FLAG: --event-storage-age-limit="default=0" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731519 4986 flags.go:64] FLAG: --event-storage-event-limit="default=0" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731524 4986 flags.go:64] FLAG: --eviction-hard="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731531 4986 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 
12:55:40.731535 4986 flags.go:64] FLAG: --eviction-minimum-reclaim="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731541 4986 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731546 4986 flags.go:64] FLAG: --eviction-soft="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731551 4986 flags.go:64] FLAG: --eviction-soft-grace-period="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731555 4986 flags.go:64] FLAG: --exit-on-lock-contention="false" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731560 4986 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731565 4986 flags.go:64] FLAG: --experimental-mounter-path="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731570 4986 flags.go:64] FLAG: --fail-cgroupv1="false" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731575 4986 flags.go:64] FLAG: --fail-swap-on="true" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731580 4986 flags.go:64] FLAG: --feature-gates="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731587 4986 flags.go:64] FLAG: --file-check-frequency="20s" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731592 4986 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731598 4986 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731603 4986 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731609 4986 flags.go:64] FLAG: --healthz-port="10248" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731614 4986 flags.go:64] FLAG: --help="false" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731619 4986 flags.go:64] FLAG: --hostname-override="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 
12:55:40.731624 4986 flags.go:64] FLAG: --housekeeping-interval="10s" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731630 4986 flags.go:64] FLAG: --http-check-frequency="20s" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731635 4986 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731640 4986 flags.go:64] FLAG: --image-credential-provider-config="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731647 4986 flags.go:64] FLAG: --image-gc-high-threshold="85" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731653 4986 flags.go:64] FLAG: --image-gc-low-threshold="80" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731660 4986 flags.go:64] FLAG: --image-service-endpoint="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731666 4986 flags.go:64] FLAG: --kernel-memcg-notification="false" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731671 4986 flags.go:64] FLAG: --kube-api-burst="100" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731676 4986 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731682 4986 flags.go:64] FLAG: --kube-api-qps="50" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731688 4986 flags.go:64] FLAG: --kube-reserved="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731693 4986 flags.go:64] FLAG: --kube-reserved-cgroup="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731698 4986 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731703 4986 flags.go:64] FLAG: --kubelet-cgroups="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731709 4986 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731714 4986 flags.go:64] FLAG: --lock-file="" Dec 03 12:55:40 crc 
kubenswrapper[4986]: I1203 12:55:40.731719 4986 flags.go:64] FLAG: --log-cadvisor-usage="false" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731724 4986 flags.go:64] FLAG: --log-flush-frequency="5s" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731730 4986 flags.go:64] FLAG: --log-json-info-buffer-size="0" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731738 4986 flags.go:64] FLAG: --log-json-split-stream="false" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731743 4986 flags.go:64] FLAG: --log-text-info-buffer-size="0" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731749 4986 flags.go:64] FLAG: --log-text-split-stream="false" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731754 4986 flags.go:64] FLAG: --logging-format="text" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731759 4986 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731765 4986 flags.go:64] FLAG: --make-iptables-util-chains="true" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731770 4986 flags.go:64] FLAG: --manifest-url="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731775 4986 flags.go:64] FLAG: --manifest-url-header="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731782 4986 flags.go:64] FLAG: --max-housekeeping-interval="15s" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731788 4986 flags.go:64] FLAG: --max-open-files="1000000" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731794 4986 flags.go:64] FLAG: --max-pods="110" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731799 4986 flags.go:64] FLAG: --maximum-dead-containers="-1" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731804 4986 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731810 4986 flags.go:64] FLAG: --memory-manager-policy="None" Dec 03 
12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731815 4986 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731820 4986 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731826 4986 flags.go:64] FLAG: --node-ip="192.168.126.11" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731832 4986 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731843 4986 flags.go:64] FLAG: --node-status-max-images="50" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731849 4986 flags.go:64] FLAG: --node-status-update-frequency="10s" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731854 4986 flags.go:64] FLAG: --oom-score-adj="-999" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731859 4986 flags.go:64] FLAG: --pod-cidr="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731865 4986 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731874 4986 flags.go:64] FLAG: --pod-manifest-path="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731879 4986 flags.go:64] FLAG: --pod-max-pids="-1" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731885 4986 flags.go:64] FLAG: --pods-per-core="0" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731890 4986 flags.go:64] FLAG: --port="10250" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731895 4986 flags.go:64] FLAG: --protect-kernel-defaults="false" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731901 4986 flags.go:64] FLAG: --provider-id="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731905 4986 flags.go:64] FLAG: 
--qos-reserved="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731910 4986 flags.go:64] FLAG: --read-only-port="10255" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731916 4986 flags.go:64] FLAG: --register-node="true" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731921 4986 flags.go:64] FLAG: --register-schedulable="true" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731926 4986 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731935 4986 flags.go:64] FLAG: --registry-burst="10" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731941 4986 flags.go:64] FLAG: --registry-qps="5" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731947 4986 flags.go:64] FLAG: --reserved-cpus="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731952 4986 flags.go:64] FLAG: --reserved-memory="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731959 4986 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731964 4986 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731971 4986 flags.go:64] FLAG: --rotate-certificates="false" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731977 4986 flags.go:64] FLAG: --rotate-server-certificates="false" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731982 4986 flags.go:64] FLAG: --runonce="false" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731987 4986 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731993 4986 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.731998 4986 flags.go:64] FLAG: --seccomp-default="false" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.732004 4986 flags.go:64] FLAG: 
--serialize-image-pulls="true" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.732009 4986 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.732015 4986 flags.go:64] FLAG: --storage-driver-db="cadvisor" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.732029 4986 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.732034 4986 flags.go:64] FLAG: --storage-driver-password="root" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.732040 4986 flags.go:64] FLAG: --storage-driver-secure="false" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.732045 4986 flags.go:64] FLAG: --storage-driver-table="stats" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.732051 4986 flags.go:64] FLAG: --storage-driver-user="root" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.732061 4986 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.732066 4986 flags.go:64] FLAG: --sync-frequency="1m0s" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.732072 4986 flags.go:64] FLAG: --system-cgroups="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.732076 4986 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.732085 4986 flags.go:64] FLAG: --system-reserved-cgroup="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.732090 4986 flags.go:64] FLAG: --tls-cert-file="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.732095 4986 flags.go:64] FLAG: --tls-cipher-suites="[]" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.732102 4986 flags.go:64] FLAG: --tls-min-version="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.732107 4986 flags.go:64] FLAG: --tls-private-key-file="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.732113 4986 
flags.go:64] FLAG: --topology-manager-policy="none" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.732118 4986 flags.go:64] FLAG: --topology-manager-policy-options="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.732123 4986 flags.go:64] FLAG: --topology-manager-scope="container" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.732128 4986 flags.go:64] FLAG: --v="2" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.732135 4986 flags.go:64] FLAG: --version="false" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.732143 4986 flags.go:64] FLAG: --vmodule="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.732149 4986 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.732155 4986 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732339 4986 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732349 4986 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732354 4986 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732359 4986 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732364 4986 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732368 4986 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732372 4986 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732378 4986 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. 
It will be removed in a future release. Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732384 4986 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732389 4986 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732400 4986 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732406 4986 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732411 4986 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732416 4986 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732421 4986 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732425 4986 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732429 4986 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732434 4986 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732438 4986 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732442 4986 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732446 4986 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732450 4986 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 03 
12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732454 4986 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732459 4986 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732464 4986 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732468 4986 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732473 4986 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732477 4986 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732480 4986 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732484 4986 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732489 4986 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732493 4986 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732497 4986 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732501 4986 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732504 4986 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732509 4986 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 
12:55:40.732513 4986 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732517 4986 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732521 4986 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732526 4986 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732531 4986 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732535 4986 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732542 4986 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732546 4986 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732551 4986 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732555 4986 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732560 4986 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732566 4986 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732571 4986 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732575 4986 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732579 4986 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732583 4986 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732588 4986 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732592 4986 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732596 4986 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732600 4986 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732604 4986 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732609 4986 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732613 4986 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732618 4986 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732622 4986 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732626 4986 
feature_gate.go:330] unrecognized feature gate: Example Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732630 4986 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732634 4986 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732638 4986 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732642 4986 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732646 4986 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732650 4986 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732658 4986 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732662 4986 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.732666 4986 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.732680 4986 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.747143 4986 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Dec 03 
12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.747239 4986 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.747432 4986 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.747466 4986 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.747478 4986 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.747489 4986 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.747502 4986 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.747513 4986 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.747525 4986 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.747537 4986 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.747548 4986 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.747558 4986 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.747568 4986 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.747578 4986 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.747588 4986 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.747598 4986 feature_gate.go:330] 
unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.747608 4986 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.747619 4986 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.747634 4986 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.747653 4986 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.747665 4986 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.747676 4986 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.747689 4986 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.747703 4986 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.747715 4986 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.747725 4986 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.747735 4986 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.747745 4986 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.747755 4986 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.747765 4986 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.747777 4986 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.747792 4986 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.747807 4986 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.747820 4986 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.747831 4986 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.747841 4986 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.747855 4986 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.747865 4986 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.747874 4986 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.747883 4986 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.747891 4986 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.747899 4986 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.747907 4986 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.747915 4986 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.747923 4986 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.747931 4986 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.747939 4986 
feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.747947 4986 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.747955 4986 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.747963 4986 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.747973 4986 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.747983 4986 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.747992 4986 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748001 4986 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748048 4986 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748057 4986 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748066 4986 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748075 4986 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748084 4986 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748094 4986 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748102 4986 feature_gate.go:330] 
unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748111 4986 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748119 4986 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748127 4986 feature_gate.go:330] unrecognized feature gate: Example Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748135 4986 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748143 4986 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748151 4986 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748159 4986 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748167 4986 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748174 4986 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748182 4986 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748190 4986 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748204 4986 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.748219 4986 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false 
ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748544 4986 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748563 4986 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748573 4986 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748583 4986 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748592 4986 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748600 4986 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748608 4986 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748616 4986 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748627 4986 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748640 4986 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748649 4986 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748657 4986 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748666 4986 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748674 4986 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748682 4986 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748691 4986 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748699 4986 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748713 4986 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748725 4986 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748734 4986 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748743 4986 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748751 4986 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748759 4986 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748767 4986 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748775 4986 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748783 4986 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748791 4986 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748799 4986 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748808 4986 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748816 4986 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748824 4986 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748832 4986 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748840 4986 
feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748848 4986 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748857 4986 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748865 4986 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748873 4986 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748881 4986 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748889 4986 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748897 4986 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748905 4986 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748914 4986 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748922 4986 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748929 4986 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748937 4986 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748945 4986 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748953 4986 feature_gate.go:330] unrecognized feature gate: 
PlatformOperators Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748961 4986 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748969 4986 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748978 4986 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748987 4986 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.748994 4986 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.749002 4986 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.749010 4986 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.749021 4986 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.749032 4986 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.749043 4986 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.749053 4986 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.749062 4986 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.749070 4986 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.749078 4986 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.749087 4986 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.749094 4986 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.749103 4986 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.749110 4986 feature_gate.go:330] unrecognized feature gate: Example Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.749118 4986 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.749126 4986 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.749134 4986 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.749141 4986 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.749149 4986 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.749158 4986 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.749170 4986 feature_gate.go:386] feature gates: 
{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.750112 4986 server.go:940] "Client rotation is on, will bootstrap in background" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.755748 4986 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.756160 4986 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.757354 4986 server.go:997] "Starting client certificate rotation" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.757414 4986 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.758189 4986 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-21 10:16:54.535428701 +0000 UTC Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.758426 4986 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.769073 4986 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.773413 4986 dynamic_cafile_content.go:161] "Starting controller" 
name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 03 12:55:40 crc kubenswrapper[4986]: E1203 12:55:40.773590 4986 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.112:6443: connect: connection refused" logger="UnhandledError" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.783635 4986 log.go:25] "Validated CRI v1 runtime API" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.814206 4986 log.go:25] "Validated CRI v1 image API" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.817021 4986 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.821234 4986 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-12-03-12-50-42-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.821307 4986 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.840782 4986 manager.go:217] Machine: {Timestamp:2025-12-03 12:55:40.837871104 +0000 UTC m=+0.304302295 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 
CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:52e71ce5-258d-4951-aa26-5d4aac0725ad BootID:1b28ccd4-3e47-4e26-a3b8-f87de96f7586 Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:d1:74:6e Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:d1:74:6e Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:87:c5:f5 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:9e:49:ef Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:e9:13:da Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:67:43:41 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:fe:d2:20:26:8e:30 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:b2:19:f7:16:0b:3c Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data 
Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 
Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.841355 4986 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.841769 4986 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.842667 4986 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.843157 4986 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.843236 4986 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.843677 4986 topology_manager.go:138] "Creating topology manager with none policy"
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.843703 4986 container_manager_linux.go:303] "Creating device plugin manager"
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.844034 4986 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.844095 4986 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.844563 4986 state_mem.go:36] "Initialized new in-memory state store"
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.844910 4986 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.845839 4986 kubelet.go:418] "Attempting to sync node with API server"
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.845875 4986 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.845914 4986 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.845936 4986 kubelet.go:324] "Adding apiserver pod source"
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.845955 4986 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.848620 4986 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.849148 4986 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.849992 4986 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.849961 4986 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.112:6443: connect: connection refused
Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.850072 4986 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.112:6443: connect: connection refused
Dec 03 12:55:40 crc kubenswrapper[4986]: E1203 12:55:40.850167 4986 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.112:6443: connect: connection refused" logger="UnhandledError"
Dec 03 12:55:40 crc kubenswrapper[4986]: E1203 12:55:40.850109 4986 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.112:6443: connect: connection refused" logger="UnhandledError"
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.850731 4986 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.850784 4986 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.850793 4986 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.850801 4986 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.850814 4986 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.850824 4986 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.850833 4986 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.850846 4986 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.850857 4986 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.850868 4986 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.850882 4986 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.850890 4986 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.851459 4986 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.852042 4986 server.go:1280] "Started kubelet"
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.852624 4986 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.112:6443: connect: connection refused
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.853194 4986 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.853481 4986 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 03 12:55:40 crc systemd[1]: Started Kubernetes Kubelet.
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.854372 4986 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 03 12:55:40 crc kubenswrapper[4986]: E1203 12:55:40.855814 4986 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.112:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187db5d19afa97c2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-03 12:55:40.852008898 +0000 UTC m=+0.318440099,LastTimestamp:2025-12-03 12:55:40.852008898 +0000 UTC m=+0.318440099,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.856447 4986 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.856524 4986 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.856973 4986 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 06:10:51.395506717 +0000 UTC
Dec 03 12:55:40 crc kubenswrapper[4986]: E1203 12:55:40.857383 4986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.857472 4986 volume_manager.go:287] "The desired_state_of_world populator starts"
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.857541 4986 volume_manager.go:289] "Starting Kubelet Volume Manager"
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.857648 4986 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.858367 4986 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.112:6443: connect: connection refused
Dec 03 12:55:40 crc kubenswrapper[4986]: E1203 12:55:40.858582 4986 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.112:6443: connect: connection refused" interval="200ms"
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.858994 4986 server.go:460] "Adding debug handlers to kubelet server"
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.859496 4986 factory.go:55] Registering systemd factory
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.859532 4986 factory.go:221] Registration of the systemd container factory successfully
Dec 03 12:55:40 crc kubenswrapper[4986]: E1203 12:55:40.858439 4986 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.112:6443: connect: connection refused" logger="UnhandledError"
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.861830 4986 factory.go:153] Registering CRI-O factory
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.861875 4986 factory.go:221] Registration of the crio container factory successfully
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.861995 4986 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.862034 4986 factory.go:103] Registering Raw factory
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.862066 4986 manager.go:1196] Started watching for new ooms in manager
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.864049 4986 manager.go:319] Starting recovery of all containers
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.877070 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.877472 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.877487 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.877500 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.877513 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.877525 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.877538 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.877550 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.877564 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.877578 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.877590 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.877629 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.877642 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.877657 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.877671 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.877685 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.877698 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.877712 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.877725 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.877737 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.877749 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.877761 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.877773 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.877787 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.877799 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.877812 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.877826 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.877841 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.877885 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.877897 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.877912 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.877926 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.877939 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.877952 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.877965 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.877977 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.877989 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878001 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878013 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878025 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878037 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878051 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878064 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878076 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878089 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878101 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878115 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878129 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878142 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878155 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878168 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878181 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878205 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878218 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878232 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878245 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878258 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878272 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878309 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878344 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878379 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878391 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878404 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878418 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878430 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878444 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878456 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878469 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878481 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878495 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878507 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878519 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878531 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878545 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878557 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878570 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878583 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878595 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878608 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878622 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878634 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878646 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde"
volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878658 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878671 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878687 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878700 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878712 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878724 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878737 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878750 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878761 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878774 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878786 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878814 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" 
seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878826 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878840 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878852 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878865 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878877 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878889 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878901 4986 reconstruct.go:130] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878914 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878926 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878938 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878957 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.878972 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.884840 4986 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" 
volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.884927 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.884964 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.884991 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.885014 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.885038 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.885062 4986 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.885083 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.885105 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.885129 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.885150 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.885170 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.885190 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.885210 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.885229 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.885247 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.885265 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.885320 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.885343 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.885362 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.885382 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.885402 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.885423 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.885441 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.885460 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" 
volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.885537 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.885557 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.885578 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.885596 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.885615 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.885636 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" 
seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.885654 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.885674 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.885692 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.885710 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.885730 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.885751 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 
12:55:40.885772 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.885792 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.885811 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.885828 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.885848 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.885865 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.885883 4986 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.885901 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.885920 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.885938 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.885957 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.885975 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.885992 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.886013 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.886033 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.886054 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.886074 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.886095 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.886112 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.886131 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.886150 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.886168 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.886187 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.886207 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.886225 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Dec 03 
12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.886247 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.886273 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.886330 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.886356 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.886378 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.886398 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.886417 4986 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.886450 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.886471 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.886492 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.886512 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.886533 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.886552 4986 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.886572 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.886592 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.886614 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.886635 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.886655 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.886675 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.886922 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.886943 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.886962 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.886984 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.887005 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.887026 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.887045 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.887064 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.887085 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.887105 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.887124 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.887143 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.887161 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.887183 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.887204 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.887223 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.887242 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.887261 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.887333 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.887354 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.887379 4986 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.887397 4986 reconstruct.go:97] "Volume reconstruction finished" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.887410 4986 reconciler.go:26] "Reconciler: start to sync state" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.898228 4986 manager.go:324] Recovery completed Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.910015 4986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.911558 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.911600 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.911610 4986 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.912593 4986 cpu_manager.go:225] "Starting CPU manager" policy="none" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.912613 4986 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.912630 4986 state_mem.go:36] "Initialized new in-memory state store" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.939201 4986 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.942013 4986 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.942057 4986 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 03 12:55:40 crc kubenswrapper[4986]: I1203 12:55:40.942082 4986 kubelet.go:2335] "Starting kubelet main sync loop" Dec 03 12:55:40 crc kubenswrapper[4986]: E1203 12:55:40.942141 4986 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 03 12:55:40 crc kubenswrapper[4986]: W1203 12:55:40.943003 4986 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.112:6443: connect: connection refused Dec 03 12:55:40 crc kubenswrapper[4986]: E1203 12:55:40.943102 4986 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.112:6443: connect: connection refused" logger="UnhandledError" Dec 03 12:55:40 crc 
kubenswrapper[4986]: E1203 12:55:40.958257 4986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 03 12:55:41 crc kubenswrapper[4986]: E1203 12:55:41.042354 4986 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 03 12:55:41 crc kubenswrapper[4986]: E1203 12:55:41.059259 4986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 03 12:55:41 crc kubenswrapper[4986]: E1203 12:55:41.059617 4986 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.112:6443: connect: connection refused" interval="400ms" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.124203 4986 policy_none.go:49] "None policy: Start" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.125329 4986 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.125377 4986 state_mem.go:35] "Initializing new in-memory state store" Dec 03 12:55:41 crc kubenswrapper[4986]: E1203 12:55:41.160000 4986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 03 12:55:41 crc kubenswrapper[4986]: E1203 12:55:41.243062 4986 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 03 12:55:41 crc kubenswrapper[4986]: E1203 12:55:41.260229 4986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.280219 4986 manager.go:334] "Starting Device Plugin manager" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.280447 4986 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" 
err="checkpoint is not found" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.280470 4986 server.go:79] "Starting device plugin registration server" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.281337 4986 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.281362 4986 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.281867 4986 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.281963 4986 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.281972 4986 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 03 12:55:41 crc kubenswrapper[4986]: E1203 12:55:41.303573 4986 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.381829 4986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.383770 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.383842 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.383867 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.383912 4986 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 03 12:55:41 crc kubenswrapper[4986]: E1203 12:55:41.384865 4986 
kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.112:6443: connect: connection refused" node="crc" Dec 03 12:55:41 crc kubenswrapper[4986]: E1203 12:55:41.460799 4986 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.112:6443: connect: connection refused" interval="800ms" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.585086 4986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.586730 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.586792 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.586817 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.586856 4986 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 03 12:55:41 crc kubenswrapper[4986]: E1203 12:55:41.587448 4986 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.112:6443: connect: connection refused" node="crc" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.644400 4986 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc"] Dec 03 
12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.644675 4986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.647132 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.647196 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.647220 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.647479 4986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.648401 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.648865 4986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.648728 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.649203 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.649232 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.649471 4986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.649651 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.649723 4986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.650916 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.650949 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.650972 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.650981 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.650994 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.650997 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.651410 4986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.651597 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.651657 4986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.653043 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.653083 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.653101 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.653113 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.653154 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.653181 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.653202 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.653186 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.653228 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.653244 4986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 12:55:41 crc 
kubenswrapper[4986]: I1203 12:55:41.653550 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.653597 4986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.654626 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.654666 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.654686 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.654808 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.654927 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.654975 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.655421 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.655521 4986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.657523 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.657620 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.657647 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.696323 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.696447 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.696502 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.696550 4986 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.696600 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.696645 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.696688 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.696736 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.696782 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" 
(UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.696830 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.696876 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.696922 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.697066 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.697117 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.697209 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 12:55:41 crc kubenswrapper[4986]: W1203 12:55:41.709931 4986 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.112:6443: connect: connection refused Dec 03 12:55:41 crc kubenswrapper[4986]: E1203 12:55:41.710024 4986 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.112:6443: connect: connection refused" logger="UnhandledError" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.799123 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.799178 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: 
\"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.799199 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.799217 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.799235 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.799252 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.799266 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.799307 4986 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.799321 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.799334 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.799349 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.799366 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.799378 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.799393 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.799411 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.799404 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.799416 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.799477 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.799510 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.799509 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.799576 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.799577 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.799556 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.799550 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod 
\"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.799491 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.799623 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.799641 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.799535 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.799634 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.799707 
4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.853806 4986 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.112:6443: connect: connection refused Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.858067 4986 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 18:13:20.45618112 +0000 UTC Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.988400 4986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.989958 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.989997 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.990011 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.990036 4986 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 03 12:55:41 crc kubenswrapper[4986]: E1203 12:55:41.990683 4986 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.112:6443: connect: connection refused" node="crc" Dec 03 12:55:41 crc kubenswrapper[4986]: W1203 12:55:41.993185 4986 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.112:6443: connect: connection refused Dec 03 12:55:41 crc kubenswrapper[4986]: E1203 12:55:41.993310 4986 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.112:6443: connect: connection refused" logger="UnhandledError" Dec 03 12:55:41 crc kubenswrapper[4986]: I1203 12:55:41.996208 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 12:55:42 crc kubenswrapper[4986]: I1203 12:55:42.008511 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 12:55:42 crc kubenswrapper[4986]: I1203 12:55:42.034001 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 12:55:42 crc kubenswrapper[4986]: W1203 12:55:42.034447 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-28fb568d72afc3157008666f666e5932c91a2c3c28733e538f0b5502ce588bc0 WatchSource:0}: Error finding container 28fb568d72afc3157008666f666e5932c91a2c3c28733e538f0b5502ce588bc0: Status 404 returned error can't find the container with id 28fb568d72afc3157008666f666e5932c91a2c3c28733e538f0b5502ce588bc0 Dec 03 12:55:42 crc kubenswrapper[4986]: W1203 12:55:42.039079 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-2a209397d53263f23cc772d72693b492d459914100b46e94e6567100d53ccaf5 WatchSource:0}: Error finding container 2a209397d53263f23cc772d72693b492d459914100b46e94e6567100d53ccaf5: Status 404 returned error can't find the container with id 2a209397d53263f23cc772d72693b492d459914100b46e94e6567100d53ccaf5 Dec 03 12:55:42 crc kubenswrapper[4986]: I1203 12:55:42.045865 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 03 12:55:42 crc kubenswrapper[4986]: W1203 12:55:42.052450 4986 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.112:6443: connect: connection refused Dec 03 12:55:42 crc kubenswrapper[4986]: E1203 12:55:42.052556 4986 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.112:6443: connect: connection refused" logger="UnhandledError" Dec 03 12:55:42 crc kubenswrapper[4986]: I1203 12:55:42.052949 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 03 12:55:42 crc kubenswrapper[4986]: W1203 12:55:42.064848 4986 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.112:6443: connect: connection refused Dec 03 12:55:42 crc kubenswrapper[4986]: E1203 12:55:42.064962 4986 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.112:6443: connect: connection refused" logger="UnhandledError" Dec 03 12:55:42 crc kubenswrapper[4986]: W1203 12:55:42.067371 4986 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-2c7b0ada47a0badae18ffdc5e98ac45d6b543fe42a722ab9bf76b163e00f27db WatchSource:0}: Error finding container 2c7b0ada47a0badae18ffdc5e98ac45d6b543fe42a722ab9bf76b163e00f27db: Status 404 returned error can't find the container with id 2c7b0ada47a0badae18ffdc5e98ac45d6b543fe42a722ab9bf76b163e00f27db Dec 03 12:55:42 crc kubenswrapper[4986]: W1203 12:55:42.068750 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-f8848344b7d35cbcc75983d7803377ce92d5db724d88bf9410bb649664e19ee7 WatchSource:0}: Error finding container f8848344b7d35cbcc75983d7803377ce92d5db724d88bf9410bb649664e19ee7: Status 404 returned error can't find the container with id f8848344b7d35cbcc75983d7803377ce92d5db724d88bf9410bb649664e19ee7 Dec 03 12:55:42 crc kubenswrapper[4986]: W1203 12:55:42.084425 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-c9129cedfe0e209adc9d5a3712b227d714363392167de5a0057afc13197dcb0a WatchSource:0}: Error finding container c9129cedfe0e209adc9d5a3712b227d714363392167de5a0057afc13197dcb0a: Status 404 returned error can't find the container with id c9129cedfe0e209adc9d5a3712b227d714363392167de5a0057afc13197dcb0a Dec 03 12:55:42 crc kubenswrapper[4986]: E1203 12:55:42.261814 4986 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.112:6443: connect: connection refused" interval="1.6s" Dec 03 12:55:42 crc kubenswrapper[4986]: I1203 12:55:42.791513 4986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 12:55:42 crc 
kubenswrapper[4986]: I1203 12:55:42.793585 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:55:42 crc kubenswrapper[4986]: I1203 12:55:42.793644 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:55:42 crc kubenswrapper[4986]: I1203 12:55:42.793658 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:55:42 crc kubenswrapper[4986]: I1203 12:55:42.793693 4986 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 03 12:55:42 crc kubenswrapper[4986]: E1203 12:55:42.794309 4986 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.112:6443: connect: connection refused" node="crc" Dec 03 12:55:42 crc kubenswrapper[4986]: I1203 12:55:42.802360 4986 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Dec 03 12:55:42 crc kubenswrapper[4986]: E1203 12:55:42.804653 4986 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.112:6443: connect: connection refused" logger="UnhandledError" Dec 03 12:55:42 crc kubenswrapper[4986]: I1203 12:55:42.854270 4986 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.112:6443: connect: connection refused Dec 03 12:55:42 crc kubenswrapper[4986]: I1203 12:55:42.858487 4986 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 
+0000 UTC, rotation deadline is 2025-12-06 08:07:37.063786855 +0000 UTC Dec 03 12:55:42 crc kubenswrapper[4986]: I1203 12:55:42.858577 4986 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 67h11m54.20521392s for next certificate rotation Dec 03 12:55:42 crc kubenswrapper[4986]: I1203 12:55:42.951074 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2c7b0ada47a0badae18ffdc5e98ac45d6b543fe42a722ab9bf76b163e00f27db"} Dec 03 12:55:42 crc kubenswrapper[4986]: I1203 12:55:42.952950 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"2a209397d53263f23cc772d72693b492d459914100b46e94e6567100d53ccaf5"} Dec 03 12:55:42 crc kubenswrapper[4986]: I1203 12:55:42.954049 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"28fb568d72afc3157008666f666e5932c91a2c3c28733e538f0b5502ce588bc0"} Dec 03 12:55:42 crc kubenswrapper[4986]: I1203 12:55:42.955034 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c9129cedfe0e209adc9d5a3712b227d714363392167de5a0057afc13197dcb0a"} Dec 03 12:55:42 crc kubenswrapper[4986]: I1203 12:55:42.956008 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"f8848344b7d35cbcc75983d7803377ce92d5db724d88bf9410bb649664e19ee7"} Dec 03 12:55:43 crc kubenswrapper[4986]: W1203 12:55:43.687776 4986 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to 
list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.112:6443: connect: connection refused Dec 03 12:55:43 crc kubenswrapper[4986]: E1203 12:55:43.687900 4986 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.112:6443: connect: connection refused" logger="UnhandledError" Dec 03 12:55:43 crc kubenswrapper[4986]: I1203 12:55:43.854588 4986 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.112:6443: connect: connection refused Dec 03 12:55:43 crc kubenswrapper[4986]: E1203 12:55:43.863497 4986 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.112:6443: connect: connection refused" interval="3.2s" Dec 03 12:55:43 crc kubenswrapper[4986]: W1203 12:55:43.913543 4986 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.112:6443: connect: connection refused Dec 03 12:55:43 crc kubenswrapper[4986]: E1203 12:55:43.913708 4986 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.112:6443: connect: connection refused" logger="UnhandledError" Dec 03 12:55:44 crc 
kubenswrapper[4986]: W1203 12:55:44.133523 4986 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.112:6443: connect: connection refused Dec 03 12:55:44 crc kubenswrapper[4986]: E1203 12:55:44.133596 4986 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.112:6443: connect: connection refused" logger="UnhandledError" Dec 03 12:55:44 crc kubenswrapper[4986]: I1203 12:55:44.395182 4986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 12:55:44 crc kubenswrapper[4986]: I1203 12:55:44.397687 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:55:44 crc kubenswrapper[4986]: I1203 12:55:44.397740 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:55:44 crc kubenswrapper[4986]: I1203 12:55:44.397761 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:55:44 crc kubenswrapper[4986]: I1203 12:55:44.397848 4986 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 03 12:55:44 crc kubenswrapper[4986]: E1203 12:55:44.398391 4986 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.112:6443: connect: connection refused" node="crc" Dec 03 12:55:44 crc kubenswrapper[4986]: W1203 12:55:44.451056 4986 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.112:6443: connect: connection refused Dec 03 12:55:44 crc kubenswrapper[4986]: E1203 12:55:44.451142 4986 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.112:6443: connect: connection refused" logger="UnhandledError" Dec 03 12:55:44 crc kubenswrapper[4986]: I1203 12:55:44.853273 4986 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.112:6443: connect: connection refused Dec 03 12:55:44 crc kubenswrapper[4986]: I1203 12:55:44.961981 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d4fc247caf16eab425363e12219a76c40022f0daf0bc9c8b52f4eb8e3726f242"} Dec 03 12:55:44 crc kubenswrapper[4986]: I1203 12:55:44.962038 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b86d62d991d6516ed42f7cad655018e1ebc85c8c1b100d624c1c2e5e8f616cf4"} Dec 03 12:55:44 crc kubenswrapper[4986]: I1203 12:55:44.964367 4986 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2" exitCode=0 Dec 03 12:55:44 crc kubenswrapper[4986]: I1203 12:55:44.964432 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2"} Dec 03 12:55:44 crc kubenswrapper[4986]: I1203 12:55:44.964495 4986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 12:55:44 crc kubenswrapper[4986]: I1203 12:55:44.965484 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:55:44 crc kubenswrapper[4986]: I1203 12:55:44.965516 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:55:44 crc kubenswrapper[4986]: I1203 12:55:44.965528 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:55:44 crc kubenswrapper[4986]: I1203 12:55:44.966311 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"b459b0d69a8744da31d3019da0622a77ccfbb8218fbc14b25c9f80af144a3278"} Dec 03 12:55:44 crc kubenswrapper[4986]: I1203 12:55:44.966321 4986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 12:55:44 crc kubenswrapper[4986]: I1203 12:55:44.966248 4986 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="b459b0d69a8744da31d3019da0622a77ccfbb8218fbc14b25c9f80af144a3278" exitCode=0 Dec 03 12:55:44 crc kubenswrapper[4986]: I1203 12:55:44.967306 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:55:44 crc kubenswrapper[4986]: I1203 12:55:44.967373 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:55:44 crc kubenswrapper[4986]: I1203 12:55:44.967391 4986 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:55:44 crc kubenswrapper[4986]: I1203 12:55:44.969082 4986 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41" exitCode=0 Dec 03 12:55:44 crc kubenswrapper[4986]: I1203 12:55:44.969212 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41"} Dec 03 12:55:44 crc kubenswrapper[4986]: I1203 12:55:44.969731 4986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 12:55:44 crc kubenswrapper[4986]: I1203 12:55:44.973877 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:55:44 crc kubenswrapper[4986]: I1203 12:55:44.973922 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:55:44 crc kubenswrapper[4986]: I1203 12:55:44.973939 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:55:44 crc kubenswrapper[4986]: I1203 12:55:44.974587 4986 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="865442eb9652b14d8c0642927e9f59c60344dfcffb5bd2d0cc0ec572ede971c4" exitCode=0 Dec 03 12:55:44 crc kubenswrapper[4986]: I1203 12:55:44.974627 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"865442eb9652b14d8c0642927e9f59c60344dfcffb5bd2d0cc0ec572ede971c4"} Dec 03 12:55:44 crc kubenswrapper[4986]: I1203 12:55:44.974653 4986 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 12:55:44 crc kubenswrapper[4986]: I1203 12:55:44.975660 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:55:44 crc kubenswrapper[4986]: I1203 12:55:44.975680 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:55:44 crc kubenswrapper[4986]: I1203 12:55:44.975692 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:55:44 crc kubenswrapper[4986]: I1203 12:55:44.976055 4986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 12:55:44 crc kubenswrapper[4986]: I1203 12:55:44.977143 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:55:44 crc kubenswrapper[4986]: I1203 12:55:44.977173 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:55:44 crc kubenswrapper[4986]: I1203 12:55:44.977189 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:55:45 crc kubenswrapper[4986]: I1203 12:55:45.854167 4986 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.112:6443: connect: connection refused Dec 03 12:55:45 crc kubenswrapper[4986]: I1203 12:55:45.979361 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a"} Dec 03 12:55:45 crc kubenswrapper[4986]: I1203 12:55:45.979401 4986 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674"} Dec 03 12:55:45 crc kubenswrapper[4986]: I1203 12:55:45.979422 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95"} Dec 03 12:55:45 crc kubenswrapper[4986]: I1203 12:55:45.982393 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"16164ea50f4f29916a5878eee07e8debc4268fae086f9086aa4c73c4d03dd082"} Dec 03 12:55:45 crc kubenswrapper[4986]: I1203 12:55:45.982415 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e0d8158a257a7d5b17302a2d6f4c81326b8c24487c18132807f91366cdb3457b"} Dec 03 12:55:45 crc kubenswrapper[4986]: I1203 12:55:45.982425 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"77c695e6dd4e9c2fe64a41b90d59aebe2a6a54a1985702749b0a68eed6551f82"} Dec 03 12:55:45 crc kubenswrapper[4986]: I1203 12:55:45.982544 4986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 12:55:45 crc kubenswrapper[4986]: I1203 12:55:45.983920 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:55:45 crc kubenswrapper[4986]: I1203 12:55:45.983949 4986 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Dec 03 12:55:45 crc kubenswrapper[4986]: I1203 12:55:45.983962 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:55:45 crc kubenswrapper[4986]: I1203 12:55:45.985263 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d4ba9819f4f1746f03a558617fc6ee14f5627c37c362d1198a303b6f1854c7bf"} Dec 03 12:55:45 crc kubenswrapper[4986]: I1203 12:55:45.985289 4986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 12:55:45 crc kubenswrapper[4986]: I1203 12:55:45.985301 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"79cba0b7394bf25be648502fc84a916d8f2fa0949edce20b5a748046ce9b6f75"} Dec 03 12:55:45 crc kubenswrapper[4986]: I1203 12:55:45.986124 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:55:45 crc kubenswrapper[4986]: I1203 12:55:45.986149 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:55:45 crc kubenswrapper[4986]: I1203 12:55:45.986159 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:55:45 crc kubenswrapper[4986]: I1203 12:55:45.987559 4986 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb" exitCode=0 Dec 03 12:55:45 crc kubenswrapper[4986]: I1203 12:55:45.987604 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb"} Dec 03 12:55:45 crc kubenswrapper[4986]: I1203 12:55:45.987737 4986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 12:55:45 crc kubenswrapper[4986]: I1203 12:55:45.988915 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:55:45 crc kubenswrapper[4986]: I1203 12:55:45.988933 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:55:45 crc kubenswrapper[4986]: I1203 12:55:45.988940 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:55:45 crc kubenswrapper[4986]: I1203 12:55:45.989408 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"7998f6aae63a6935e19e2a1f2157aeb88cb8cb14e875d25a9dcc4f80f36f2fbc"} Dec 03 12:55:45 crc kubenswrapper[4986]: I1203 12:55:45.989527 4986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 12:55:45 crc kubenswrapper[4986]: I1203 12:55:45.990159 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:55:45 crc kubenswrapper[4986]: I1203 12:55:45.990179 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:55:45 crc kubenswrapper[4986]: I1203 12:55:45.990188 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:55:46 crc kubenswrapper[4986]: I1203 12:55:46.822784 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 12:55:46 crc kubenswrapper[4986]: I1203 12:55:46.831555 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 12:55:46 crc kubenswrapper[4986]: I1203 12:55:46.864519 4986 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Dec 03 12:55:46 crc kubenswrapper[4986]: I1203 12:55:46.993196 4986 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31" exitCode=0 Dec 03 12:55:46 crc kubenswrapper[4986]: I1203 12:55:46.993341 4986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 12:55:46 crc kubenswrapper[4986]: I1203 12:55:46.993325 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31"} Dec 03 12:55:46 crc kubenswrapper[4986]: I1203 12:55:46.995222 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:55:46 crc kubenswrapper[4986]: I1203 12:55:46.995257 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:55:46 crc kubenswrapper[4986]: I1203 12:55:46.995269 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:55:46 crc kubenswrapper[4986]: I1203 12:55:46.998255 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7"} Dec 03 12:55:46 crc 
kubenswrapper[4986]: I1203 12:55:46.998332 4986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 12:55:46 crc kubenswrapper[4986]: I1203 12:55:46.998376 4986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 12:55:46 crc kubenswrapper[4986]: I1203 12:55:46.998411 4986 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 12:55:46 crc kubenswrapper[4986]: I1203 12:55:46.998450 4986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 12:55:46 crc kubenswrapper[4986]: I1203 12:55:46.998415 4986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 12:55:46 crc kubenswrapper[4986]: I1203 12:55:46.998329 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1"} Dec 03 12:55:46 crc kubenswrapper[4986]: I1203 12:55:46.999436 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:55:46 crc kubenswrapper[4986]: I1203 12:55:46.999454 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:55:46 crc kubenswrapper[4986]: I1203 12:55:46.999469 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:55:46 crc kubenswrapper[4986]: I1203 12:55:46.999480 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:55:46 crc kubenswrapper[4986]: I1203 12:55:46.999497 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:55:46 crc kubenswrapper[4986]: 
I1203 12:55:46.999481 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:55:46 crc kubenswrapper[4986]: I1203 12:55:46.999610 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:55:46 crc kubenswrapper[4986]: I1203 12:55:46.999625 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:55:46 crc kubenswrapper[4986]: I1203 12:55:46.999675 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:55:46 crc kubenswrapper[4986]: I1203 12:55:46.999678 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:55:46 crc kubenswrapper[4986]: I1203 12:55:46.999691 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:55:46 crc kubenswrapper[4986]: I1203 12:55:46.999699 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:55:47 crc kubenswrapper[4986]: I1203 12:55:47.482575 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 12:55:47 crc kubenswrapper[4986]: I1203 12:55:47.598676 4986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 12:55:47 crc kubenswrapper[4986]: I1203 12:55:47.599691 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:55:47 crc kubenswrapper[4986]: I1203 12:55:47.599719 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:55:47 crc kubenswrapper[4986]: I1203 12:55:47.599730 4986 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:55:47 crc kubenswrapper[4986]: I1203 12:55:47.599752 4986 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 03 12:55:48 crc kubenswrapper[4986]: I1203 12:55:48.008373 4986 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 12:55:48 crc kubenswrapper[4986]: I1203 12:55:48.008399 4986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 12:55:48 crc kubenswrapper[4986]: I1203 12:55:48.008411 4986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 12:55:48 crc kubenswrapper[4986]: I1203 12:55:48.008816 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d9bb9d940d346f9895e24d98173668ea2d0e68fd3035e5b03475bbc936582958"} Dec 03 12:55:48 crc kubenswrapper[4986]: I1203 12:55:48.008973 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f6e7d24b5f1325cfb2d210cb9f311026ab370c91deecc5d0d951bcd4eda79816"} Dec 03 12:55:48 crc kubenswrapper[4986]: I1203 12:55:48.008987 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8fa98006a8736394300cdb9ef3331d2a23e1b3af2e2203f757c51ca9c700d26c"} Dec 03 12:55:48 crc kubenswrapper[4986]: I1203 12:55:48.009004 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 12:55:48 crc kubenswrapper[4986]: I1203 12:55:48.009016 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1d4007e9ed33227938a5a3656fd74958aef599418fec4cd2048dfa7dd88924dd"} Dec 03 12:55:48 crc kubenswrapper[4986]: I1203 12:55:48.009556 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:55:48 crc kubenswrapper[4986]: I1203 12:55:48.009579 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:55:48 crc kubenswrapper[4986]: I1203 12:55:48.009589 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:55:48 crc kubenswrapper[4986]: I1203 12:55:48.009686 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:55:48 crc kubenswrapper[4986]: I1203 12:55:48.009729 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:55:48 crc kubenswrapper[4986]: I1203 12:55:48.009740 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:55:48 crc kubenswrapper[4986]: I1203 12:55:48.368184 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 12:55:49 crc kubenswrapper[4986]: I1203 12:55:49.014626 4986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 12:55:49 crc kubenswrapper[4986]: I1203 12:55:49.014707 4986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 12:55:49 crc kubenswrapper[4986]: I1203 12:55:49.014606 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b7fa0d673f037c8edcc75ee30b1bbeea73da55d5459347dacc9251fffbb38be5"} Dec 03 12:55:49 crc kubenswrapper[4986]: I1203 12:55:49.014822 4986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 12:55:49 crc kubenswrapper[4986]: I1203 12:55:49.015498 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:55:49 crc kubenswrapper[4986]: I1203 12:55:49.015529 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:55:49 crc kubenswrapper[4986]: I1203 12:55:49.015538 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:55:49 crc kubenswrapper[4986]: I1203 12:55:49.015610 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:55:49 crc kubenswrapper[4986]: I1203 12:55:49.015687 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:55:49 crc kubenswrapper[4986]: I1203 12:55:49.015704 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:55:49 crc kubenswrapper[4986]: I1203 12:55:49.016037 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:55:49 crc kubenswrapper[4986]: I1203 12:55:49.016078 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:55:49 crc kubenswrapper[4986]: I1203 12:55:49.016094 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:55:49 crc kubenswrapper[4986]: I1203 12:55:49.397668 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 12:55:49 crc kubenswrapper[4986]: I1203 12:55:49.578619 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 12:55:50 crc kubenswrapper[4986]: I1203 12:55:50.017705 4986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 12:55:50 crc kubenswrapper[4986]: I1203 12:55:50.017741 4986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 12:55:50 crc kubenswrapper[4986]: I1203 12:55:50.017726 4986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 12:55:50 crc kubenswrapper[4986]: I1203 12:55:50.018914 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:55:50 crc kubenswrapper[4986]: I1203 12:55:50.018949 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:55:50 crc kubenswrapper[4986]: I1203 12:55:50.018963 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:55:50 crc kubenswrapper[4986]: I1203 12:55:50.018944 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:55:50 crc kubenswrapper[4986]: I1203 12:55:50.019038 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:55:50 crc kubenswrapper[4986]: I1203 12:55:50.019064 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:55:50 crc kubenswrapper[4986]: I1203 12:55:50.019251 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:55:50 crc kubenswrapper[4986]: I1203 
12:55:50.019313 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:55:50 crc kubenswrapper[4986]: I1203 12:55:50.019354 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:55:50 crc kubenswrapper[4986]: I1203 12:55:50.483385 4986 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 03 12:55:50 crc kubenswrapper[4986]: I1203 12:55:50.483475 4986 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 03 12:55:51 crc kubenswrapper[4986]: I1203 12:55:51.020036 4986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 12:55:51 crc kubenswrapper[4986]: I1203 12:55:51.021225 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:55:51 crc kubenswrapper[4986]: I1203 12:55:51.021313 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:55:51 crc kubenswrapper[4986]: I1203 12:55:51.021325 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:55:51 crc kubenswrapper[4986]: I1203 12:55:51.092019 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 12:55:51 crc 
kubenswrapper[4986]: I1203 12:55:51.092335 4986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 12:55:51 crc kubenswrapper[4986]: I1203 12:55:51.093991 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:55:51 crc kubenswrapper[4986]: I1203 12:55:51.094084 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:55:51 crc kubenswrapper[4986]: I1203 12:55:51.094128 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:55:51 crc kubenswrapper[4986]: E1203 12:55:51.304385 4986 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 03 12:55:52 crc kubenswrapper[4986]: I1203 12:55:52.080640 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 12:55:52 crc kubenswrapper[4986]: I1203 12:55:52.081022 4986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 12:55:52 crc kubenswrapper[4986]: I1203 12:55:52.082409 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:55:52 crc kubenswrapper[4986]: I1203 12:55:52.082499 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:55:52 crc kubenswrapper[4986]: I1203 12:55:52.082597 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:55:53 crc kubenswrapper[4986]: I1203 12:55:53.274926 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 03 12:55:53 crc kubenswrapper[4986]: I1203 12:55:53.275127 4986 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 12:55:53 crc kubenswrapper[4986]: I1203 12:55:53.276220 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:55:53 crc kubenswrapper[4986]: I1203 12:55:53.276247 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:55:53 crc kubenswrapper[4986]: I1203 12:55:53.276258 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:55:53 crc kubenswrapper[4986]: I1203 12:55:53.308532 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Dec 03 12:55:54 crc kubenswrapper[4986]: I1203 12:55:54.025781 4986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 12:55:54 crc kubenswrapper[4986]: I1203 12:55:54.026510 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:55:54 crc kubenswrapper[4986]: I1203 12:55:54.026538 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:55:54 crc kubenswrapper[4986]: I1203 12:55:54.026546 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:55:56 crc kubenswrapper[4986]: I1203 12:55:56.436248 4986 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 03 12:55:56 crc kubenswrapper[4986]: I1203 12:55:56.436352 4986 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 03 12:55:56 crc kubenswrapper[4986]: I1203 12:55:56.440318 4986 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Dec 03 12:55:56 crc kubenswrapper[4986]: I1203 12:55:56.440378 4986 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 03 12:55:58 crc kubenswrapper[4986]: I1203 12:55:58.372601 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 12:55:58 crc kubenswrapper[4986]: I1203 12:55:58.372754 4986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 12:55:58 crc kubenswrapper[4986]: I1203 12:55:58.373881 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:55:58 crc kubenswrapper[4986]: I1203 12:55:58.373945 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:55:58 crc kubenswrapper[4986]: I1203 12:55:58.373959 4986 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:55:59 crc kubenswrapper[4986]: I1203 12:55:59.584011 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 12:55:59 crc kubenswrapper[4986]: I1203 12:55:59.584314 4986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 12:55:59 crc kubenswrapper[4986]: I1203 12:55:59.584860 4986 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 03 12:55:59 crc kubenswrapper[4986]: I1203 12:55:59.584932 4986 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 03 12:55:59 crc kubenswrapper[4986]: I1203 12:55:59.585627 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:55:59 crc kubenswrapper[4986]: I1203 12:55:59.585674 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:55:59 crc kubenswrapper[4986]: I1203 12:55:59.585684 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:55:59 crc kubenswrapper[4986]: I1203 12:55:59.589871 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 12:56:00 crc kubenswrapper[4986]: I1203 12:56:00.040907 4986 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 12:56:00 crc kubenswrapper[4986]: I1203 12:56:00.041348 4986 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 03 12:56:00 crc kubenswrapper[4986]: I1203 12:56:00.041412 4986 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 03 12:56:00 crc kubenswrapper[4986]: I1203 12:56:00.042186 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:00 crc kubenswrapper[4986]: I1203 12:56:00.042221 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:00 crc kubenswrapper[4986]: I1203 12:56:00.042231 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:00 crc kubenswrapper[4986]: I1203 12:56:00.483362 4986 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 03 12:56:00 crc kubenswrapper[4986]: I1203 12:56:00.483444 4986 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 03 12:56:01 crc kubenswrapper[4986]: E1203 12:56:01.305391 4986 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 03 12:56:01 crc kubenswrapper[4986]: E1203 12:56:01.415674 4986 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.423725 4986 trace.go:236] Trace[689167298]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Dec-2025 12:55:49.820) (total time: 11602ms): Dec 03 12:56:01 crc kubenswrapper[4986]: Trace[689167298]: ---"Objects listed" error: 11602ms (12:56:01.423) Dec 03 12:56:01 crc kubenswrapper[4986]: Trace[689167298]: [11.602771764s] [11.602771764s] END Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.423751 4986 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.424026 4986 trace.go:236] Trace[1546429590]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Dec-2025 12:55:48.315) (total time: 13108ms): Dec 03 12:56:01 crc kubenswrapper[4986]: Trace[1546429590]: ---"Objects listed" error: 13108ms (12:56:01.423) Dec 03 12:56:01 crc kubenswrapper[4986]: Trace[1546429590]: [13.108667974s] [13.108667974s] END Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.424051 4986 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 03 12:56:01 crc kubenswrapper[4986]: 
I1203 12:56:01.424158 4986 trace.go:236] Trace[566345408]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Dec-2025 12:55:48.675) (total time: 12748ms): Dec 03 12:56:01 crc kubenswrapper[4986]: Trace[566345408]: ---"Objects listed" error: 12748ms (12:56:01.424) Dec 03 12:56:01 crc kubenswrapper[4986]: Trace[566345408]: [12.748682375s] [12.748682375s] END Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.424182 4986 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.425817 4986 trace.go:236] Trace[907096214]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Dec-2025 12:55:49.315) (total time: 12110ms): Dec 03 12:56:01 crc kubenswrapper[4986]: Trace[907096214]: ---"Objects listed" error: 12110ms (12:56:01.425) Dec 03 12:56:01 crc kubenswrapper[4986]: Trace[907096214]: [12.110559515s] [12.110559515s] END Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.425850 4986 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.428195 4986 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.436572 4986 kubelet_node_status.go:115] "Node was previously registered" node="crc" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.436888 4986 kubelet_node_status.go:79] "Successfully registered node" node="crc" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.437928 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.437955 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.437965 4986 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.437980 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.437991 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:01Z","lastTransitionTime":"2025-12-03T12:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.440882 4986 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Dec 03 12:56:01 crc kubenswrapper[4986]: E1203 12:56:01.450798 4986 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"kubelet 
has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800
f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\
":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc300
5909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b28ccd4-3e47-4e26-a3b8-f87de96f7586\\\",\\\"systemUUID\\\":\\\"52e71ce5-258d-4951-aa26-5d4aac0725ad\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.454739 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.454771 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.454781 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.454796 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.454807 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:01Z","lastTransitionTime":"2025-12-03T12:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.456606 4986 csr.go:261] certificate signing request csr-rg8lx is approved, waiting to be issued Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.461383 4986 csr.go:257] certificate signing request csr-rg8lx is issued Dec 03 12:56:01 crc kubenswrapper[4986]: E1203 12:56:01.464219 4986 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b28ccd4-3e47-4e26-a3b8-f87de96f7586\\\",\\\"systemUUID\\\":\\\"52e71ce5-258d-4951-aa26-5d4aac0725ad\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.467305 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.467336 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.467345 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.467358 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.467368 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:01Z","lastTransitionTime":"2025-12-03T12:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:01 crc kubenswrapper[4986]: E1203 12:56:01.476342 4986 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b28ccd4-3e47-4e26-a3b8-f87de96f7586\\\",\\\"systemUUID\\\":\\\"52e71ce5-258d-4951-aa26-5d4aac0725ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.479179 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.479207 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.479215 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.479229 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.479238 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:01Z","lastTransitionTime":"2025-12-03T12:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:01 crc kubenswrapper[4986]: E1203 12:56:01.487369 4986 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b28ccd4-3e47-4e26-a3b8-f87de96f7586\\\",\\\"systemUUID\\\":\\\"52e71ce5-258d-4951-aa26-5d4aac0725ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.490437 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.490470 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.490478 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.490490 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.490498 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:01Z","lastTransitionTime":"2025-12-03T12:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:01 crc kubenswrapper[4986]: E1203 12:56:01.497951 4986 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b28ccd4-3e47-4e26-a3b8-f87de96f7586\\\",\\\"systemUUID\\\":\\\"52e71ce5-258d-4951-aa26-5d4aac0725ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 12:56:01 crc kubenswrapper[4986]: E1203 12:56:01.498111 4986 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.499830 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.499874 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.499886 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.499903 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.499917 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:01Z","lastTransitionTime":"2025-12-03T12:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.601892 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.601962 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.601974 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.602018 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.602030 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:01Z","lastTransitionTime":"2025-12-03T12:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.704293 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.704335 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.704344 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.704357 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.704368 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:01Z","lastTransitionTime":"2025-12-03T12:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.806775 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.806802 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.806809 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.806823 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.806834 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:01Z","lastTransitionTime":"2025-12-03T12:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.857254 4986 apiserver.go:52] "Watching apiserver" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.860762 4986 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.861010 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-dns/node-resolver-fszqj","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"] Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.861512 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.861594 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.861690 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.861774 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.861793 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 12:56:01 crc kubenswrapper[4986]: E1203 12:56:01.861852 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.861905 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:56:01 crc kubenswrapper[4986]: E1203 12:56:01.861953 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.861960 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-fszqj" Dec 03 12:56:01 crc kubenswrapper[4986]: E1203 12:56:01.861982 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.863936 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.864506 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.864561 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.864741 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.865448 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.865587 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.865592 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.865611 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.865925 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.865998 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 
12:56:01.868328 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.868640 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.885738 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.898466 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.909114 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.909155 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.909167 4986 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.909185 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.909195 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:01Z","lastTransitionTime":"2025-12-03T12:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.910766 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fszqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76bcf5c-6a62-4360-826d-ecac337c88ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhtrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fszqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.922110 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.935474 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.947496 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.958920 4986 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.959531 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.970240 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 12:56:01 crc kubenswrapper[4986]: I1203 12:56:01.978546 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.011798 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.011853 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.011863 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.011876 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.011885 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:02Z","lastTransitionTime":"2025-12-03T12:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.031397 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.031460 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.031484 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.031504 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.031522 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.031546 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.031596 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.031622 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.031647 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.031670 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.031692 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: 
\"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.031714 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.031767 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.031820 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.031844 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.031865 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.031878 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.031919 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.031877 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.031949 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.031976 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.032018 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.032018 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.032050 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.032047 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.032076 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.032066 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.032097 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.032084 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.032140 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.032121 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.032195 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.032217 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.032236 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.032260 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.032302 4986 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.032327 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.032347 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.032370 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.032649 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.032670 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.032692 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.032718 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.032740 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.032763 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.032785 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 03 12:56:02 crc 
kubenswrapper[4986]: I1203 12:56:02.032807 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.032829 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.032853 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.032875 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.032897 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.032918 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: 
\"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.032946 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.032972 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.032994 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.033016 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.033037 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 03 12:56:02 crc kubenswrapper[4986]: 
I1203 12:56:02.033062 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.033087 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.033109 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.032233 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.032368 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.032654 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.032676 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.032786 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.032857 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.032924 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.033021 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.033041 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.033078 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.033080 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.033131 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.033161 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.033170 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.033386 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.033391 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.033423 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.033454 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.033514 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.033540 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.033564 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.033587 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.033607 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.033628 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.033651 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.033675 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.033697 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.033718 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: 
\"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.033741 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.033767 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.033791 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.033813 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.033895 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.033918 4986 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.033937 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.033962 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.033985 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.034007 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.034026 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod 
\"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.034047 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.034068 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.034094 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.034118 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.034141 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.034165 4986 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.034187 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.034208 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.034230 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.034255 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.034298 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.034325 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.034348 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.034371 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.033456 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.034393 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.033651 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.033670 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.034408 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.034415 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.034477 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.034512 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.034543 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.034568 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.034587 4986 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.034609 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.034631 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.034652 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.034672 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.034689 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 03 12:56:02 crc 
kubenswrapper[4986]: I1203 12:56:02.034709 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.034731 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.034750 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.034769 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.034788 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.034808 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.034829 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.034851 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.034873 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.034898 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.034920 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 
12:56:02.034941 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.034964 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.034982 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.035004 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.035027 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.035067 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: 
\"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.035093 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.035120 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.035144 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.035170 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.035192 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 
12:56:02.035215 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.035238 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.035260 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.035303 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.035325 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.035348 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod 
\"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.035369 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.035391 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.035422 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.035441 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.035465 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.035489 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.035514 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.035542 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.035564 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.035586 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.035609 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" 
(UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.035632 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.035656 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.035680 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.035703 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.035727 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.035753 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.035775 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.035798 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.035820 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.035843 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.035873 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.035897 
4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.035921 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.035945 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.035972 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.035995 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.036021 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod 
\"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.036044 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.036068 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.036092 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.036119 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.036143 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.036168 4986 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.036192 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.033686 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.036216 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.036241 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.036267 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.036310 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.036542 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.036569 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.036591 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.036614 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.036675 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.036701 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.036724 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 03 12:56:02 crc 
kubenswrapper[4986]: I1203 12:56:02.036750 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.036772 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.036796 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.036820 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.036852 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.036876 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: 
\"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.036902 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.036928 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.036953 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.036978 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.037002 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 03 
12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.037026 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.037050 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.037075 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.037099 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.037132 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.037157 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" 
(UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.037182 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.037231 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.037261 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.037308 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhtrn\" (UniqueName: \"kubernetes.io/projected/a76bcf5c-6a62-4360-826d-ecac337c88ae-kube-api-access-lhtrn\") pod \"node-resolver-fszqj\" (UID: \"a76bcf5c-6a62-4360-826d-ecac337c88ae\") " pod="openshift-dns/node-resolver-fszqj" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.037334 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.037361 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.037799 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.037827 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.037847 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.037864 4986 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.037881 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.037900 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.037919 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.037989 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.038017 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.038041 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a76bcf5c-6a62-4360-826d-ecac337c88ae-hosts-file\") pod \"node-resolver-fszqj\" (UID: \"a76bcf5c-6a62-4360-826d-ecac337c88ae\") " pod="openshift-dns/node-resolver-fszqj" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.038067 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.038146 4986 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.038164 4986 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.038176 4986 reconciler_common.go:293] "Volume detached for volume 
\"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.038190 4986 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.038204 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.038216 4986 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.038230 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.038244 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.038258 4986 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.038271 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.038303 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.038316 4986 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.038329 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.038343 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.038356 4986 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.038368 4986 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.038384 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on 
node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.038398 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.038409 4986 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.038421 4986 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.038442 4986 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.038456 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.038476 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.038490 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: 
\"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.038505 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.038522 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.038536 4986 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.033695 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.033848 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.033982 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.034049 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.034109 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.034147 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.034167 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.034224 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.034218 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.034238 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.034272 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.034553 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.034709 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.034742 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.034808 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.034842 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.035142 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.035156 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.035236 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.035819 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.035835 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.035862 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.037458 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.037911 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.038038 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.038235 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.038410 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.046770 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.038169 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.038494 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.038643 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.038789 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.038947 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.039147 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.039161 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: E1203 12:56:02.039653 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:56:02.539622677 +0000 UTC m=+22.006053868 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.039906 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.039935 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.040308 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.040326 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.040629 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.040678 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.040611 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.040743 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.040841 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.041081 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.041102 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.041398 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.041590 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.041665 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.041813 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.041831 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.041976 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.042376 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.042457 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.042656 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.042835 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.042845 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.043067 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.043086 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.043169 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.043178 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.043336 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.043551 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.043702 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.043801 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.043672 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.044003 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.044254 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.044643 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.045517 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.045766 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.045864 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.046223 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.046226 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.046517 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.046528 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.046916 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.046981 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.047019 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.047059 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.047547 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.047623 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: E1203 12:56:02.050027 4986 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 12:56:02 crc kubenswrapper[4986]: E1203 12:56:02.050106 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 12:56:02.550082822 +0000 UTC m=+22.016514093 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.050144 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 12:56:02 crc kubenswrapper[4986]: E1203 12:56:02.050156 4986 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 12:56:02 crc kubenswrapper[4986]: E1203 12:56:02.050242 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 12:56:02.550228115 +0000 UTC m=+22.016659396 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.050525 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.050713 4986 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.051081 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.051137 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.051464 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.051527 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.055169 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.055353 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.055964 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 03 12:56:02 crc kubenswrapper[4986]: E1203 12:56:02.056946 4986 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 12:56:02 crc kubenswrapper[4986]: E1203 12:56:02.056976 4986 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 12:56:02 crc kubenswrapper[4986]: E1203 12:56:02.057023 4986 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 12:56:02 crc kubenswrapper[4986]: E1203 12:56:02.057091 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-03 12:56:02.557070339 +0000 UTC m=+22.023501600 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.057965 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.058687 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.058930 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.059958 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.060095 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.060641 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.060762 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.061682 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.061778 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.062016 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.062182 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.062490 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.063632 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.065752 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.066055 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: E1203 12:56:02.066297 4986 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 12:56:02 crc kubenswrapper[4986]: E1203 12:56:02.066435 4986 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 12:56:02 crc kubenswrapper[4986]: E1203 12:56:02.066453 4986 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 12:56:02 crc kubenswrapper[4986]: E1203 12:56:02.066521 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-03 12:56:02.566501627 +0000 UTC m=+22.032932818 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.068508 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.069238 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.069924 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.071619 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.071793 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.072016 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.073151 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.073240 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.073267 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.073466 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.073542 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.073552 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.073689 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.073963 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.074002 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.074112 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.074991 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.075192 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.075639 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.075952 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.076260 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.076450 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.076524 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.076658 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.077633 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.077805 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.078460 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.078717 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.079012 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.079075 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.079271 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.079419 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.079433 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.079709 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.079885 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.080229 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.080532 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.081355 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.082043 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.082268 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.082336 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.082622 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.082650 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.082648 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.083250 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.089811 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.090155 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.091551 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.091772 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.092040 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.092167 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.092130 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.092327 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.092418 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.092778 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.092568 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.092807 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.093171 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.095475 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.099945 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.105713 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.108504 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.108635 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.108774 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.109661 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.112446 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.114364 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.114462 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.114518 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.114593 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.114651 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:02Z","lastTransitionTime":"2025-12-03T12:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.117456 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.124213 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.128184 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.139098 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a76bcf5c-6a62-4360-826d-ecac337c88ae-hosts-file\") pod \"node-resolver-fszqj\" (UID: \"a76bcf5c-6a62-4360-826d-ecac337c88ae\") " pod="openshift-dns/node-resolver-fszqj" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.139255 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhtrn\" (UniqueName: \"kubernetes.io/projected/a76bcf5c-6a62-4360-826d-ecac337c88ae-kube-api-access-lhtrn\") pod \"node-resolver-fszqj\" (UID: \"a76bcf5c-6a62-4360-826d-ecac337c88ae\") " pod="openshift-dns/node-resolver-fszqj" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.139369 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.139648 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.139776 4986 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.139861 4986 reconciler_common.go:293] "Volume detached for 
volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.139933 4986 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.140006 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.140074 4986 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.140137 4986 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.140202 4986 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.140266 4986 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.140349 4986 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.140417 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.140478 4986 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.140539 4986 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.140600 4986 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.140654 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.140712 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.140772 4986 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 
crc kubenswrapper[4986]: I1203 12:56:02.140872 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.140943 4986 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.141008 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.141064 4986 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.141120 4986 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.141174 4986 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.141224 4986 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.141298 4986 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.141362 4986 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.141419 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.141475 4986 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.141526 4986 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.141576 4986 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.141632 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.141685 4986 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" 
DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.141740 4986 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.141798 4986 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.141864 4986 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.141929 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.141991 4986 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.142043 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.142099 4986 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc 
kubenswrapper[4986]: I1203 12:56:02.142154 4986 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.142215 4986 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.142270 4986 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.142348 4986 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.142408 4986 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.142466 4986 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.142518 4986 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.142573 4986 reconciler_common.go:293] "Volume detached for volume 
\"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.142629 4986 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.142681 4986 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.142739 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.142794 4986 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.142848 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.142899 4986 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.142968 4986 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.143043 4986 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.143102 4986 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.143153 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.143211 4986 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.143269 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.143359 4986 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.143414 4986 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.143465 4986 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.143527 4986 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.143586 4986 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.143643 4986 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.143697 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.143753 4986 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.143808 4986 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" 
DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.143869 4986 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.143924 4986 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.143981 4986 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.144036 4986 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.144086 4986 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.144143 4986 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.144195 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.144250 4986 reconciler_common.go:293] "Volume detached for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.144321 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.144381 4986 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.144435 4986 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.144496 4986 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.144553 4986 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.144610 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.144665 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: 
\"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.144715 4986 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.144769 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.144819 4986 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.144895 4986 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.144951 4986 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.145002 4986 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.145055 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Dec 
03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.145123 4986 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.145185 4986 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.145244 4986 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.145311 4986 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.145372 4986 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.145428 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.145485 4986 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.145535 4986 reconciler_common.go:293] "Volume detached for 
volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.145593 4986 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.145647 4986 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.145707 4986 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.145757 4986 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.145818 4986 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.145869 4986 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.145924 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on 
node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.145978 4986 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.146028 4986 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.146085 4986 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.146137 4986 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.146204 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.146311 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.146390 4986 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 
crc kubenswrapper[4986]: I1203 12:56:02.146444 4986 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.146505 4986 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.146562 4986 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.146659 4986 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.146711 4986 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.146767 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.146829 4986 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.146884 4986 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.146943 4986 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.147012 4986 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.147080 4986 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.147134 4986 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.147184 4986 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.147232 4986 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.147316 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" 
Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.147384 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.147467 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.147531 4986 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.147584 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.147634 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.147689 4986 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.147744 4986 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.147800 4986 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.147854 4986 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.148057 4986 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.148114 4986 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.148172 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.148227 4986 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.148300 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.148354 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: 
\"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.148414 4986 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.148469 4986 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.148537 4986 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.148607 4986 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.148663 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.148724 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.148798 4986 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on 
node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.148970 4986 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.149039 4986 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.149083 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a76bcf5c-6a62-4360-826d-ecac337c88ae-hosts-file\") pod \"node-resolver-fszqj\" (UID: \"a76bcf5c-6a62-4360-826d-ecac337c88ae\") " pod="openshift-dns/node-resolver-fszqj" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.149095 4986 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.149143 4986 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.149160 4986 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.149174 4986 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.149186 4986 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.148954 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.149197 4986 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.149315 4986 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.149331 4986 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.149344 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.149354 4986 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" 
(UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.149363 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.149372 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.149381 4986 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.149392 4986 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.149402 4986 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.148931 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.149413 
4986 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.149466 4986 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.169635 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhtrn\" (UniqueName: \"kubernetes.io/projected/a76bcf5c-6a62-4360-826d-ecac337c88ae-kube-api-access-lhtrn\") pod \"node-resolver-fszqj\" (UID: \"a76bcf5c-6a62-4360-826d-ecac337c88ae\") " pod="openshift-dns/node-resolver-fszqj" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.177211 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.183424 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-fszqj" Dec 03 12:56:02 crc kubenswrapper[4986]: W1203 12:56:02.192590 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-6679e40b7b4621ea42943f6ff6be591db373475df5229c5ef3da7226cee87cb7 WatchSource:0}: Error finding container 6679e40b7b4621ea42943f6ff6be591db373475df5229c5ef3da7226cee87cb7: Status 404 returned error can't find the container with id 6679e40b7b4621ea42943f6ff6be591db373475df5229c5ef3da7226cee87cb7 Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.200340 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 12:56:02 crc kubenswrapper[4986]: W1203 12:56:02.202030 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda76bcf5c_6a62_4360_826d_ecac337c88ae.slice/crio-0467fb7c465a26207672582948aef65964c8aa949c974c6445f9547c240d0b4a WatchSource:0}: Error finding container 0467fb7c465a26207672582948aef65964c8aa949c974c6445f9547c240d0b4a: Status 404 returned error can't find the container with id 0467fb7c465a26207672582948aef65964c8aa949c974c6445f9547c240d0b4a Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.206495 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.217417 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.217506 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.217515 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.217530 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.217540 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:02Z","lastTransitionTime":"2025-12-03T12:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.319241 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.319274 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.319297 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.319310 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.319322 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:02Z","lastTransitionTime":"2025-12-03T12:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.421605 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.421701 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.421715 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.421732 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.421744 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:02Z","lastTransitionTime":"2025-12-03T12:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.463224 4986 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-12-03 12:51:01 +0000 UTC, rotation deadline is 2026-08-20 13:57:53.438089345 +0000 UTC Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.463315 4986 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6241h1m50.974783105s for next certificate rotation Dec 03 12:56:02 crc kubenswrapper[4986]: W1203 12:56:02.474213 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-7e04ead1cf6287cfa2065d101873bd6b27e22a874f13cd584fce81d9d7bb9c68 WatchSource:0}: Error finding container 7e04ead1cf6287cfa2065d101873bd6b27e22a874f13cd584fce81d9d7bb9c68: Status 404 returned error can't find the container with id 7e04ead1cf6287cfa2065d101873bd6b27e22a874f13cd584fce81d9d7bb9c68 Dec 03 12:56:02 crc kubenswrapper[4986]: W1203 12:56:02.479771 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-2f9f78d67ef79a76c386be87418106e319aa6604490e448f3a5f4d3274e53584 WatchSource:0}: Error finding container 2f9f78d67ef79a76c386be87418106e319aa6604490e448f3a5f4d3274e53584: Status 404 returned error can't find the container with id 2f9f78d67ef79a76c386be87418106e319aa6604490e448f3a5f4d3274e53584 Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.527031 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.527065 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.527075 4986 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.527090 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.527101 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:02Z","lastTransitionTime":"2025-12-03T12:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.552846 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.552941 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.552971 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:56:02 crc kubenswrapper[4986]: E1203 12:56:02.553130 4986 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 12:56:02 crc kubenswrapper[4986]: E1203 12:56:02.553188 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 12:56:03.553171164 +0000 UTC m=+23.019602355 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 12:56:02 crc kubenswrapper[4986]: E1203 12:56:02.553251 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:56:03.553241996 +0000 UTC m=+23.019673187 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:56:02 crc kubenswrapper[4986]: E1203 12:56:02.553331 4986 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 12:56:02 crc kubenswrapper[4986]: E1203 12:56:02.553366 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 12:56:03.553357038 +0000 UTC m=+23.019788229 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.629544 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.629593 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.629606 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.629623 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.629635 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:02Z","lastTransitionTime":"2025-12-03T12:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.653484 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.653570 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:56:02 crc kubenswrapper[4986]: E1203 12:56:02.653658 4986 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 12:56:02 crc kubenswrapper[4986]: E1203 12:56:02.653684 4986 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 12:56:02 crc kubenswrapper[4986]: E1203 12:56:02.653699 4986 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 12:56:02 crc kubenswrapper[4986]: E1203 12:56:02.653704 4986 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 
12:56:02 crc kubenswrapper[4986]: E1203 12:56:02.653722 4986 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 12:56:02 crc kubenswrapper[4986]: E1203 12:56:02.653732 4986 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 12:56:02 crc kubenswrapper[4986]: E1203 12:56:02.653763 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-03 12:56:03.653745939 +0000 UTC m=+23.120177130 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 12:56:02 crc kubenswrapper[4986]: E1203 12:56:02.653782 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-03 12:56:03.65377612 +0000 UTC m=+23.120207311 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.731404 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.731435 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.731442 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.731455 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.731463 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:02Z","lastTransitionTime":"2025-12-03T12:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.833162 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.833201 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.833212 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.833228 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.833239 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:02Z","lastTransitionTime":"2025-12-03T12:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.935551 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.935592 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.935601 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.935616 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.935625 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:02Z","lastTransitionTime":"2025-12-03T12:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.947090 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.947677 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.948595 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.949198 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.949776 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.950252 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.950852 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.951422 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" 
path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.952065 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.952616 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.953087 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.955920 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.956549 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.957186 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.957810 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.958454 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.959145 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.959640 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.960385 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.961040 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.961596 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.962268 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.965041 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.965856 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" 
path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.966920 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.967646 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.969081 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.969690 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.970858 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.971424 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.972009 4986 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.972609 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.974633 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.975497 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.976496 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.978225 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.978973 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.980080 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.980852 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.982097 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.982647 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.983814 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.984587 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.985724 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.986301 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.987531 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.988203 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.989604 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.990145 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.992191 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.994235 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.995559 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.996716 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Dec 03 12:56:02 crc kubenswrapper[4986]: I1203 12:56:02.997359 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.038148 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.038196 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:03 crc 
kubenswrapper[4986]: I1203 12:56:03.038209 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.038225 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.038238 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:03Z","lastTransitionTime":"2025-12-03T12:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.053176 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-fszqj" event={"ID":"a76bcf5c-6a62-4360-826d-ecac337c88ae","Type":"ContainerStarted","Data":"9d94370a14317ade02cb805783e09ffcd8c6ff07f35e9124e302028e6dfa3c06"} Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.053231 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-fszqj" event={"ID":"a76bcf5c-6a62-4360-826d-ecac337c88ae","Type":"ContainerStarted","Data":"0467fb7c465a26207672582948aef65964c8aa949c974c6445f9547c240d0b4a"} Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.053936 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"2f9f78d67ef79a76c386be87418106e319aa6604490e448f3a5f4d3274e53584"} Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.056703 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"d0d71a3ffdbe593936c83e92e92e286eaff092333bb3dd54cb5bc2825ca46fbd"} Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.056733 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"5a402aeaef03d8d3c684f0d8e7fe8a080bc02d761104e3c1095b3233aade2b9e"} Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.056748 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"7e04ead1cf6287cfa2065d101873bd6b27e22a874f13cd584fce81d9d7bb9c68"} Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.057864 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"8b923e52e6284c8adca87e7d3d801a432a676e4389d7ab3c0cba584c5b7d96f6"} Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.057893 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"6679e40b7b4621ea42943f6ff6be591db373475df5229c5ef3da7226cee87cb7"} Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.060354 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.062060 4986 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7" exitCode=255 Dec 
03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.062094 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7"} Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.066864 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.076590 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.084860 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.096840 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:03Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.109417 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:03Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.122101 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fszqj" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76bcf5c-6a62-4360-826d-ecac337c88ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d94370a14317ade02cb805783e09ffcd8c6ff07f35e9124e302028e6dfa3c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhtrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11
\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fszqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:03Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.127360 4986 scope.go:117] "RemoveContainer" containerID="8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.127360 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.134697 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:03Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.140519 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.140545 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 
12:56:03.140556 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.140568 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.140578 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:03Z","lastTransitionTime":"2025-12-03T12:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.147459 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fszqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76bcf5c-6a62-4360-826d-ecac337c88ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d94370a14317ade02
cb805783e09ffcd8c6ff07f35e9124e302028e6dfa3c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhtrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fszqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:03Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.163038 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:03Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.178080 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231ed674-1d19-4ee2-b29c-a1b7453ed531\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T12:56:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 12:56:01.437086 1 builder.go:272] unable to get owner reference (falling back to namespace): pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 12:56:01.437205 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 12:56:01.438166 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972143202/tls.crt::/tmp/serving-cert-1972143202/tls.key\\\\\\\"\\\\nI1203 12:56:02.167657 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 12:56:02.174337 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 12:56:02.174360 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 12:56:02.174377 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 12:56:02.174382 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 12:56:02.178753 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 12:56:02.179168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 12:56:02.179184 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 12:56:02.179187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 12:56:02.179190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 12:56:02.178775 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 12:56:02.181220 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:03Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.196224 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b923e52e6284c8adca87e7d3d801a432a676e4389d7ab3c0cba584c5b7d96f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:03Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.218678 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:03Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.236151 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0d71a3ffdbe593936c83e92e92e286eaff092333bb3dd54cb5bc2825ca46fbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402aeaef03d8d3c684f0d8e7fe8a080bc02d761104e3c1095b3233aade2b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:03Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.242655 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.242696 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.242704 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.242719 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.242728 4986 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:03Z","lastTransitionTime":"2025-12-03T12:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.248985 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:03Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.255763 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-px97g"] Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.256115 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-px97g" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.257769 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.258097 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.258561 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.258929 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.259673 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.263675 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:03Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.277754 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:03Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.291271 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:03Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.304959 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:03Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.310809 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.317459 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0d71a3ffdbe593936c83e92e92e286eaff092333bb3dd54cb5bc2825ca46fbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402aeaef03d8d3c684f0d8e7fe8a080bc02d761104e3c1095b3233aade2b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:03Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.323391 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.324798 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.331000 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b923e52e6284c8adca87e7d3d801a432a676e4389d7ab3c0cba584c5b7d96f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:03Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.342048 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fszqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76bcf5c-6a62-4360-826d-ecac337c88ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d94370a14317ade02cb805783e09ffcd8c6ff07f35e9124e302028e6dfa3c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhtrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fszqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:03Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.344742 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.344792 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.344803 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.344818 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.344828 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:03Z","lastTransitionTime":"2025-12-03T12:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.353507 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:03Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.358539 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/97196b6d-75cc-4de4-8805-f9ce3fbd4230-etc-kubernetes\") pod \"multus-px97g\" (UID: \"97196b6d-75cc-4de4-8805-f9ce3fbd4230\") " pod="openshift-multus/multus-px97g" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.358577 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/97196b6d-75cc-4de4-8805-f9ce3fbd4230-multus-daemon-config\") pod \"multus-px97g\" (UID: \"97196b6d-75cc-4de4-8805-f9ce3fbd4230\") " pod="openshift-multus/multus-px97g" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.358599 4986 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/97196b6d-75cc-4de4-8805-f9ce3fbd4230-cni-binary-copy\") pod \"multus-px97g\" (UID: \"97196b6d-75cc-4de4-8805-f9ce3fbd4230\") " pod="openshift-multus/multus-px97g" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.358620 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/97196b6d-75cc-4de4-8805-f9ce3fbd4230-multus-socket-dir-parent\") pod \"multus-px97g\" (UID: \"97196b6d-75cc-4de4-8805-f9ce3fbd4230\") " pod="openshift-multus/multus-px97g" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.358680 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/97196b6d-75cc-4de4-8805-f9ce3fbd4230-host-var-lib-kubelet\") pod \"multus-px97g\" (UID: \"97196b6d-75cc-4de4-8805-f9ce3fbd4230\") " pod="openshift-multus/multus-px97g" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.358732 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/97196b6d-75cc-4de4-8805-f9ce3fbd4230-hostroot\") pod \"multus-px97g\" (UID: \"97196b6d-75cc-4de4-8805-f9ce3fbd4230\") " pod="openshift-multus/multus-px97g" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.358757 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/97196b6d-75cc-4de4-8805-f9ce3fbd4230-cnibin\") pod \"multus-px97g\" (UID: \"97196b6d-75cc-4de4-8805-f9ce3fbd4230\") " pod="openshift-multus/multus-px97g" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.358774 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/97196b6d-75cc-4de4-8805-f9ce3fbd4230-multus-conf-dir\") pod \"multus-px97g\" (UID: \"97196b6d-75cc-4de4-8805-f9ce3fbd4230\") " pod="openshift-multus/multus-px97g" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.358804 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/97196b6d-75cc-4de4-8805-f9ce3fbd4230-host-run-k8s-cni-cncf-io\") pod \"multus-px97g\" (UID: \"97196b6d-75cc-4de4-8805-f9ce3fbd4230\") " pod="openshift-multus/multus-px97g" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.358819 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/97196b6d-75cc-4de4-8805-f9ce3fbd4230-host-var-lib-cni-multus\") pod \"multus-px97g\" (UID: \"97196b6d-75cc-4de4-8805-f9ce3fbd4230\") " pod="openshift-multus/multus-px97g" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.358882 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/97196b6d-75cc-4de4-8805-f9ce3fbd4230-host-run-multus-certs\") pod \"multus-px97g\" (UID: \"97196b6d-75cc-4de4-8805-f9ce3fbd4230\") " pod="openshift-multus/multus-px97g" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.358947 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/97196b6d-75cc-4de4-8805-f9ce3fbd4230-host-run-netns\") pod \"multus-px97g\" (UID: \"97196b6d-75cc-4de4-8805-f9ce3fbd4230\") " pod="openshift-multus/multus-px97g" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.358971 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" 
(UniqueName: \"kubernetes.io/host-path/97196b6d-75cc-4de4-8805-f9ce3fbd4230-host-var-lib-cni-bin\") pod \"multus-px97g\" (UID: \"97196b6d-75cc-4de4-8805-f9ce3fbd4230\") " pod="openshift-multus/multus-px97g" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.359023 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4j5x\" (UniqueName: \"kubernetes.io/projected/97196b6d-75cc-4de4-8805-f9ce3fbd4230-kube-api-access-v4j5x\") pod \"multus-px97g\" (UID: \"97196b6d-75cc-4de4-8805-f9ce3fbd4230\") " pod="openshift-multus/multus-px97g" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.359042 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/97196b6d-75cc-4de4-8805-f9ce3fbd4230-system-cni-dir\") pod \"multus-px97g\" (UID: \"97196b6d-75cc-4de4-8805-f9ce3fbd4230\") " pod="openshift-multus/multus-px97g" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.359066 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/97196b6d-75cc-4de4-8805-f9ce3fbd4230-multus-cni-dir\") pod \"multus-px97g\" (UID: \"97196b6d-75cc-4de4-8805-f9ce3fbd4230\") " pod="openshift-multus/multus-px97g" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.359138 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/97196b6d-75cc-4de4-8805-f9ce3fbd4230-os-release\") pod \"multus-px97g\" (UID: \"97196b6d-75cc-4de4-8805-f9ce3fbd4230\") " pod="openshift-multus/multus-px97g" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.364349 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-px97g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97196b6d-75cc-4de4-8805-f9ce3fbd4230\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v4j5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-px97g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:03Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.376581 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231ed674-1d19-4ee2-b29c-a1b7453ed531\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T12:56:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 12:56:01.437086 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 12:56:01.437205 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 12:56:01.438166 1 dynamic_serving_content.go:116] \\\\\\\"Loaded 
a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972143202/tls.crt::/tmp/serving-cert-1972143202/tls.key\\\\\\\"\\\\nI1203 12:56:02.167657 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 12:56:02.174337 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 12:56:02.174360 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 12:56:02.174377 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 12:56:02.174382 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 12:56:02.178753 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 12:56:02.179168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 12:56:02.179184 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 12:56:02.179187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 12:56:02.179190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 12:56:02.178775 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 12:56:02.181220 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:03Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.391645 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231ed674-1d19-4ee2-b29c-a1b7453ed531\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T12:56:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 12:56:01.437086 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 12:56:01.437205 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 12:56:01.438166 1 dynamic_serving_content.go:116] \\\\\\\"Loaded 
a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972143202/tls.crt::/tmp/serving-cert-1972143202/tls.key\\\\\\\"\\\\nI1203 12:56:02.167657 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 12:56:02.174337 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 12:56:02.174360 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 12:56:02.174377 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 12:56:02.174382 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 12:56:02.178753 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 12:56:02.179168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 12:56:02.179184 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 12:56:02.179187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 12:56:02.179190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 12:56:02.178775 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 12:56:02.181220 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:03Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.418470 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d51894dd-38f0-413a-adaf-22d196474bed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa98006a8736394300cdb9ef3331d2a23e1b3af2e2203f757c51ca9c700d26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7d24b5f1325cfb2d210cb9f311026ab370c91deecc5d0d951bcd4eda79816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9bb9d940d346f9895e24d98173668ea2d0e68fd3035e5b03475bbc936582958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7fa0d673f037c8edcc75ee30b1bbeea73da55d5459347dacc9251fffbb38be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4007e9ed33227938a5a3656fd74958aef599418fec4cd2048dfa7dd88924dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContai
nerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:03Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.432574 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b923e52e6284c8adca87e7d3d801a432a676e4389d7ab3c0cba584c5b7d96f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:03Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.446634 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fszqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76bcf5c-6a62-4360-826d-ecac337c88ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d94370a14317ade02cb805783e09ffcd8c6ff07f35e9124e302028e6dfa3c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhtrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fszqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:03Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.446844 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.446859 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.446867 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.446880 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.446889 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:03Z","lastTransitionTime":"2025-12-03T12:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.460575 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/97196b6d-75cc-4de4-8805-f9ce3fbd4230-etc-kubernetes\") pod \"multus-px97g\" (UID: \"97196b6d-75cc-4de4-8805-f9ce3fbd4230\") " pod="openshift-multus/multus-px97g" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.460613 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/97196b6d-75cc-4de4-8805-f9ce3fbd4230-multus-daemon-config\") pod \"multus-px97g\" (UID: \"97196b6d-75cc-4de4-8805-f9ce3fbd4230\") " pod="openshift-multus/multus-px97g" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.460630 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/97196b6d-75cc-4de4-8805-f9ce3fbd4230-host-var-lib-kubelet\") pod \"multus-px97g\" (UID: \"97196b6d-75cc-4de4-8805-f9ce3fbd4230\") " pod="openshift-multus/multus-px97g" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.460646 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/97196b6d-75cc-4de4-8805-f9ce3fbd4230-hostroot\") pod \"multus-px97g\" (UID: \"97196b6d-75cc-4de4-8805-f9ce3fbd4230\") " pod="openshift-multus/multus-px97g" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.460664 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/97196b6d-75cc-4de4-8805-f9ce3fbd4230-cnibin\") pod \"multus-px97g\" (UID: \"97196b6d-75cc-4de4-8805-f9ce3fbd4230\") " pod="openshift-multus/multus-px97g" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.460721 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"hostroot\" (UniqueName: \"kubernetes.io/host-path/97196b6d-75cc-4de4-8805-f9ce3fbd4230-hostroot\") pod \"multus-px97g\" (UID: \"97196b6d-75cc-4de4-8805-f9ce3fbd4230\") " pod="openshift-multus/multus-px97g"
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.460774 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/97196b6d-75cc-4de4-8805-f9ce3fbd4230-cni-binary-copy\") pod \"multus-px97g\" (UID: \"97196b6d-75cc-4de4-8805-f9ce3fbd4230\") " pod="openshift-multus/multus-px97g"
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.460776 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/97196b6d-75cc-4de4-8805-f9ce3fbd4230-host-var-lib-kubelet\") pod \"multus-px97g\" (UID: \"97196b6d-75cc-4de4-8805-f9ce3fbd4230\") " pod="openshift-multus/multus-px97g"
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.460756 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/97196b6d-75cc-4de4-8805-f9ce3fbd4230-etc-kubernetes\") pod \"multus-px97g\" (UID: \"97196b6d-75cc-4de4-8805-f9ce3fbd4230\") " pod="openshift-multus/multus-px97g"
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.460861 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/97196b6d-75cc-4de4-8805-f9ce3fbd4230-cnibin\") pod \"multus-px97g\" (UID: \"97196b6d-75cc-4de4-8805-f9ce3fbd4230\") " pod="openshift-multus/multus-px97g"
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.460801 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/97196b6d-75cc-4de4-8805-f9ce3fbd4230-multus-socket-dir-parent\") pod \"multus-px97g\" (UID: \"97196b6d-75cc-4de4-8805-f9ce3fbd4230\") " pod="openshift-multus/multus-px97g"
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.461082 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/97196b6d-75cc-4de4-8805-f9ce3fbd4230-multus-socket-dir-parent\") pod \"multus-px97g\" (UID: \"97196b6d-75cc-4de4-8805-f9ce3fbd4230\") " pod="openshift-multus/multus-px97g"
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.461323 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/97196b6d-75cc-4de4-8805-f9ce3fbd4230-multus-daemon-config\") pod \"multus-px97g\" (UID: \"97196b6d-75cc-4de4-8805-f9ce3fbd4230\") " pod="openshift-multus/multus-px97g"
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.461376 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/97196b6d-75cc-4de4-8805-f9ce3fbd4230-cni-binary-copy\") pod \"multus-px97g\" (UID: \"97196b6d-75cc-4de4-8805-f9ce3fbd4230\") " pod="openshift-multus/multus-px97g"
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.461435 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/97196b6d-75cc-4de4-8805-f9ce3fbd4230-host-run-k8s-cni-cncf-io\") pod \"multus-px97g\" (UID: \"97196b6d-75cc-4de4-8805-f9ce3fbd4230\") " pod="openshift-multus/multus-px97g"
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.461455 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/97196b6d-75cc-4de4-8805-f9ce3fbd4230-multus-conf-dir\") pod \"multus-px97g\" (UID: \"97196b6d-75cc-4de4-8805-f9ce3fbd4230\") " pod="openshift-multus/multus-px97g"
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.461543 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/97196b6d-75cc-4de4-8805-f9ce3fbd4230-host-run-k8s-cni-cncf-io\") pod \"multus-px97g\" (UID: \"97196b6d-75cc-4de4-8805-f9ce3fbd4230\") " pod="openshift-multus/multus-px97g"
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.461582 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/97196b6d-75cc-4de4-8805-f9ce3fbd4230-host-run-netns\") pod \"multus-px97g\" (UID: \"97196b6d-75cc-4de4-8805-f9ce3fbd4230\") " pod="openshift-multus/multus-px97g"
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.461606 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/97196b6d-75cc-4de4-8805-f9ce3fbd4230-host-var-lib-cni-multus\") pod \"multus-px97g\" (UID: \"97196b6d-75cc-4de4-8805-f9ce3fbd4230\") " pod="openshift-multus/multus-px97g"
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.461645 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/97196b6d-75cc-4de4-8805-f9ce3fbd4230-host-run-netns\") pod \"multus-px97g\" (UID: \"97196b6d-75cc-4de4-8805-f9ce3fbd4230\") " pod="openshift-multus/multus-px97g"
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.461667 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/97196b6d-75cc-4de4-8805-f9ce3fbd4230-host-run-multus-certs\") pod \"multus-px97g\" (UID: \"97196b6d-75cc-4de4-8805-f9ce3fbd4230\") " pod="openshift-multus/multus-px97g"
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.461671 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/97196b6d-75cc-4de4-8805-f9ce3fbd4230-multus-conf-dir\") pod \"multus-px97g\" (UID: \"97196b6d-75cc-4de4-8805-f9ce3fbd4230\") " pod="openshift-multus/multus-px97g"
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.461726 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/97196b6d-75cc-4de4-8805-f9ce3fbd4230-host-var-lib-cni-multus\") pod \"multus-px97g\" (UID: \"97196b6d-75cc-4de4-8805-f9ce3fbd4230\") " pod="openshift-multus/multus-px97g"
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.461735 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/97196b6d-75cc-4de4-8805-f9ce3fbd4230-host-run-multus-certs\") pod \"multus-px97g\" (UID: \"97196b6d-75cc-4de4-8805-f9ce3fbd4230\") " pod="openshift-multus/multus-px97g"
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.461703 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/97196b6d-75cc-4de4-8805-f9ce3fbd4230-host-var-lib-cni-bin\") pod \"multus-px97g\" (UID: \"97196b6d-75cc-4de4-8805-f9ce3fbd4230\") " pod="openshift-multus/multus-px97g"
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.461771 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/97196b6d-75cc-4de4-8805-f9ce3fbd4230-host-var-lib-cni-bin\") pod \"multus-px97g\" (UID: \"97196b6d-75cc-4de4-8805-f9ce3fbd4230\") " pod="openshift-multus/multus-px97g"
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.461782 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4j5x\" (UniqueName: \"kubernetes.io/projected/97196b6d-75cc-4de4-8805-f9ce3fbd4230-kube-api-access-v4j5x\") pod \"multus-px97g\" (UID: \"97196b6d-75cc-4de4-8805-f9ce3fbd4230\") " pod="openshift-multus/multus-px97g"
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.461833 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/97196b6d-75cc-4de4-8805-f9ce3fbd4230-system-cni-dir\") pod \"multus-px97g\" (UID: \"97196b6d-75cc-4de4-8805-f9ce3fbd4230\") " pod="openshift-multus/multus-px97g"
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.461922 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/97196b6d-75cc-4de4-8805-f9ce3fbd4230-system-cni-dir\") pod \"multus-px97g\" (UID: \"97196b6d-75cc-4de4-8805-f9ce3fbd4230\") " pod="openshift-multus/multus-px97g"
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.462069 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/97196b6d-75cc-4de4-8805-f9ce3fbd4230-multus-cni-dir\") pod \"multus-px97g\" (UID: \"97196b6d-75cc-4de4-8805-f9ce3fbd4230\") " pod="openshift-multus/multus-px97g"
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.462070 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:03Z is after 2025-08-24T17:21:41Z"
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.461859 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/97196b6d-75cc-4de4-8805-f9ce3fbd4230-multus-cni-dir\") pod \"multus-px97g\" (UID: \"97196b6d-75cc-4de4-8805-f9ce3fbd4230\") " pod="openshift-multus/multus-px97g"
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.462169 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/97196b6d-75cc-4de4-8805-f9ce3fbd4230-os-release\") pod \"multus-px97g\" (UID: \"97196b6d-75cc-4de4-8805-f9ce3fbd4230\") " pod="openshift-multus/multus-px97g"
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.462257 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/97196b6d-75cc-4de4-8805-f9ce3fbd4230-os-release\") pod \"multus-px97g\" (UID: \"97196b6d-75cc-4de4-8805-f9ce3fbd4230\") " pod="openshift-multus/multus-px97g"
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.476633 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-px97g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97196b6d-75cc-4de4-8805-f9ce3fbd4230\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v4j5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-px97g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:03Z is after 2025-08-24T17:21:41Z"
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.483609 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4j5x\" (UniqueName: \"kubernetes.io/projected/97196b6d-75cc-4de4-8805-f9ce3fbd4230-kube-api-access-v4j5x\") pod \"multus-px97g\" (UID: \"97196b6d-75cc-4de4-8805-f9ce3fbd4230\") " pod="openshift-multus/multus-px97g"
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.495926 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:03Z is after 2025-08-24T17:21:41Z"
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.507992 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:03Z is after 2025-08-24T17:21:41Z"
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.521273 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:03Z is after 2025-08-24T17:21:41Z"
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.533924 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0d71a3ffdbe593936c83e92e92e286eaff092333bb3dd54cb5bc2825ca46fbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402aeaef03d8d3c684f0d8e7fe8a080bc02d761104e3c1095b3233aade2b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:03Z is after 2025-08-24T17:21:41Z"
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.548890 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.548941 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.548953 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.548969 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.548981 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:03Z","lastTransitionTime":"2025-12-03T12:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.563363 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.563452 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.563476 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 03 12:56:03 crc kubenswrapper[4986]: E1203 12:56:03.563513 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:56:05.563486254 +0000 UTC m=+25.029917445 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 12:56:03 crc kubenswrapper[4986]: E1203 12:56:03.563564 4986 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 03 12:56:03 crc kubenswrapper[4986]: E1203 12:56:03.563627 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 12:56:05.563604647 +0000 UTC m=+25.030035928 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 03 12:56:03 crc kubenswrapper[4986]: E1203 12:56:03.563656 4986 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Dec 03 12:56:03 crc kubenswrapper[4986]: E1203 12:56:03.563727 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 12:56:05.563711089 +0000 UTC m=+25.030142280 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.580198 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-px97g"
Dec 03 12:56:03 crc kubenswrapper[4986]: W1203 12:56:03.592123 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97196b6d_75cc_4de4_8805_f9ce3fbd4230.slice/crio-93c7c845eec70b18b82c54d78c6a0793d6b02734c8861992bf78246dddb170ed WatchSource:0}: Error finding container 93c7c845eec70b18b82c54d78c6a0793d6b02734c8861992bf78246dddb170ed: Status 404 returned error can't find the container with id 93c7c845eec70b18b82c54d78c6a0793d6b02734c8861992bf78246dddb170ed
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.611674 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-xggpj"]
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.612219 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-rkgd9"]
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.612342 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-xggpj"
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.613130 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9nf52"]
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.613362 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-rkgd9"
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.614074 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52"
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.614985 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.615310 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.615549 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.616018 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.617268 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.617931 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.617971 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.617972 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.618081 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.618161 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.618212 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.618433 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.618531 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.618844 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.632890 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:03Z is after 2025-08-24T17:21:41Z"
Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.651194 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0d71a3ffdbe593936c83e92e92e286eaff092333bb3dd54cb5bc2825ca46fbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402aeaef03d8d3c684f0d8e7fe8a080bc02d761104e3c1095b3233aade2b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:03Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.651364 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.651401 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.651411 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.651427 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.651440 4986 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:03Z","lastTransitionTime":"2025-12-03T12:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.664004 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.664083 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:56:03 crc kubenswrapper[4986]: E1203 12:56:03.664227 4986 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 12:56:03 crc kubenswrapper[4986]: E1203 12:56:03.664248 4986 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 12:56:03 crc kubenswrapper[4986]: E1203 12:56:03.664261 4986 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 12:56:03 crc kubenswrapper[4986]: E1203 12:56:03.664337 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-03 12:56:05.664319806 +0000 UTC m=+25.130750997 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 12:56:03 crc kubenswrapper[4986]: E1203 12:56:03.664488 4986 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 12:56:03 crc kubenswrapper[4986]: E1203 12:56:03.664506 4986 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 12:56:03 crc kubenswrapper[4986]: E1203 12:56:03.664519 4986 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 12:56:03 crc kubenswrapper[4986]: E1203 12:56:03.664556 4986 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-03 12:56:05.664545971 +0000 UTC m=+25.130977162 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.679550 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d51894dd-38f0-413a-adaf-22d196474bed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa98006a8736394300cdb9ef3331d2a23e1b3af2e2203f757c51ca9c700d26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7d24b5f1325cfb2d210cb9f311026ab370c91deecc5d0d951bcd4eda79816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9bb9d940d346f9895e24d98173668ea2d0e68fd3035e5b03475bbc936582958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-re
adyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7fa0d673f037c8edcc75ee30b1bbeea73da55d5459347dacc9251fffbb38be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4007e9ed33227938a5a3656fd74958aef599418fec4cd2048dfa7dd88924dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mount
Path\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"image\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:03Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.693143 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b923e52e6284c8adca87e7d3d801a432a676e4389d7ab3c0cba584c5b7d96f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:03Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.708464 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:03Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.718839 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xggpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:03Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.732376 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231ed674-1d19-4ee2-b29c-a1b7453ed531\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T12:56:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 12:56:01.437086 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 12:56:01.437205 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 12:56:01.438166 1 dynamic_serving_content.go:116] \\\\\\\"Loaded 
a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972143202/tls.crt::/tmp/serving-cert-1972143202/tls.key\\\\\\\"\\\\nI1203 12:56:02.167657 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 12:56:02.174337 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 12:56:02.174360 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 12:56:02.174377 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 12:56:02.174382 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 12:56:02.178753 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 12:56:02.179168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 12:56:02.179184 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 12:56:02.179187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 12:56:02.179190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 12:56:02.178775 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 12:56:02.181220 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:03Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.745262 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:03Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.753386 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.753411 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.753419 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.753433 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.753442 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:03Z","lastTransitionTime":"2025-12-03T12:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.756713 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:03Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.764456 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-systemd-units\") pod \"ovnkube-node-9nf52\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.764493 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a939a03f-0eec-49cb-9b23-40e359e427d5-cni-binary-copy\") pod \"multus-additional-cni-plugins-rkgd9\" (UID: \"a939a03f-0eec-49cb-9b23-40e359e427d5\") " pod="openshift-multus/multus-additional-cni-plugins-rkgd9" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.764515 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-host-kubelet\") pod \"ovnkube-node-9nf52\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.764537 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d3a45156-295b-4093-80e7-2059f81ddbd7-env-overrides\") pod \"ovnkube-node-9nf52\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.764558 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d3a45156-295b-4093-80e7-2059f81ddbd7-ovnkube-script-lib\") pod \"ovnkube-node-9nf52\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.764588 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-host-run-netns\") pod \"ovnkube-node-9nf52\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.764608 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-host-cni-netd\") pod \"ovnkube-node-9nf52\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.764630 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d3a45156-295b-4093-80e7-2059f81ddbd7-ovn-node-metrics-cert\") pod \"ovnkube-node-9nf52\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.764650 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a939a03f-0eec-49cb-9b23-40e359e427d5-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rkgd9\" (UID: \"a939a03f-0eec-49cb-9b23-40e359e427d5\") " pod="openshift-multus/multus-additional-cni-plugins-rkgd9" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.764680 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-host-cni-bin\") pod \"ovnkube-node-9nf52\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.764701 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9nf52\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.764723 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a939a03f-0eec-49cb-9b23-40e359e427d5-cnibin\") pod \"multus-additional-cni-plugins-rkgd9\" (UID: \"a939a03f-0eec-49cb-9b23-40e359e427d5\") " pod="openshift-multus/multus-additional-cni-plugins-rkgd9" Dec 03 12:56:03 crc kubenswrapper[4986]: 
I1203 12:56:03.764744 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-etc-openvswitch\") pod \"ovnkube-node-9nf52\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.764763 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-node-log\") pod \"ovnkube-node-9nf52\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.764782 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-log-socket\") pod \"ovnkube-node-9nf52\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.764803 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a939a03f-0eec-49cb-9b23-40e359e427d5-system-cni-dir\") pod \"multus-additional-cni-plugins-rkgd9\" (UID: \"a939a03f-0eec-49cb-9b23-40e359e427d5\") " pod="openshift-multus/multus-additional-cni-plugins-rkgd9" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.764822 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-run-openvswitch\") pod \"ovnkube-node-9nf52\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:03 
crc kubenswrapper[4986]: I1203 12:56:03.764851 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d3a45156-295b-4093-80e7-2059f81ddbd7-ovnkube-config\") pod \"ovnkube-node-9nf52\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.764867 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j752\" (UniqueName: \"kubernetes.io/projected/d3a45156-295b-4093-80e7-2059f81ddbd7-kube-api-access-5j752\") pod \"ovnkube-node-9nf52\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.764892 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-var-lib-openvswitch\") pod \"ovnkube-node-9nf52\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.764907 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a939a03f-0eec-49cb-9b23-40e359e427d5-os-release\") pod \"multus-additional-cni-plugins-rkgd9\" (UID: \"a939a03f-0eec-49cb-9b23-40e359e427d5\") " pod="openshift-multus/multus-additional-cni-plugins-rkgd9" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.764920 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9x5m\" (UniqueName: \"kubernetes.io/projected/a939a03f-0eec-49cb-9b23-40e359e427d5-kube-api-access-p9x5m\") pod \"multus-additional-cni-plugins-rkgd9\" (UID: 
\"a939a03f-0eec-49cb-9b23-40e359e427d5\") " pod="openshift-multus/multus-additional-cni-plugins-rkgd9" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.764971 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c-mcd-auth-proxy-config\") pod \"machine-config-daemon-xggpj\" (UID: \"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c\") " pod="openshift-machine-config-operator/machine-config-daemon-xggpj" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.764987 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-run-systemd\") pod \"ovnkube-node-9nf52\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.765009 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c-rootfs\") pod \"machine-config-daemon-xggpj\" (UID: \"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c\") " pod="openshift-machine-config-operator/machine-config-daemon-xggpj" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.765026 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a939a03f-0eec-49cb-9b23-40e359e427d5-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rkgd9\" (UID: \"a939a03f-0eec-49cb-9b23-40e359e427d5\") " pod="openshift-multus/multus-additional-cni-plugins-rkgd9" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.765040 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-host-slash\") pod \"ovnkube-node-9nf52\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.765055 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-host-run-ovn-kubernetes\") pod \"ovnkube-node-9nf52\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.765071 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-run-ovn\") pod \"ovnkube-node-9nf52\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.765088 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c-proxy-tls\") pod \"machine-config-daemon-xggpj\" (UID: \"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c\") " pod="openshift-machine-config-operator/machine-config-daemon-xggpj" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.765104 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgppd\" (UniqueName: \"kubernetes.io/projected/f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c-kube-api-access-kgppd\") pod \"machine-config-daemon-xggpj\" (UID: \"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c\") " pod="openshift-machine-config-operator/machine-config-daemon-xggpj" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.766121 4986 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-fszqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76bcf5c-6a62-4360-826d-ecac337c88ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d94370a14317ade02cb805783e09ffcd8c6ff07f35e9124e302028e6dfa3c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhtrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"pha
se\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fszqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:03Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.779501 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-px97g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97196b6d-75cc-4de4-8805-f9ce3fbd4230\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v4j5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-px97g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:03Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.794906 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:03Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.818213 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:03Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.849449 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fszqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76bcf5c-6a62-4360-826d-ecac337c88ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d94370a14317ade02cb805783e09ffcd8c6ff07f35e9124e302028e6dfa3c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhtrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fszqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:03Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.855527 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.855554 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.855562 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.855574 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.855585 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:03Z","lastTransitionTime":"2025-12-03T12:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.866465 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c-mcd-auth-proxy-config\") pod \"machine-config-daemon-xggpj\" (UID: \"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c\") " pod="openshift-machine-config-operator/machine-config-daemon-xggpj" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.866505 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-run-systemd\") pod \"ovnkube-node-9nf52\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.866523 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-var-lib-openvswitch\") pod \"ovnkube-node-9nf52\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.866539 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a939a03f-0eec-49cb-9b23-40e359e427d5-os-release\") pod \"multus-additional-cni-plugins-rkgd9\" (UID: \"a939a03f-0eec-49cb-9b23-40e359e427d5\") " pod="openshift-multus/multus-additional-cni-plugins-rkgd9" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.866554 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9x5m\" (UniqueName: \"kubernetes.io/projected/a939a03f-0eec-49cb-9b23-40e359e427d5-kube-api-access-p9x5m\") pod \"multus-additional-cni-plugins-rkgd9\" (UID: \"a939a03f-0eec-49cb-9b23-40e359e427d5\") 
" pod="openshift-multus/multus-additional-cni-plugins-rkgd9" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.866575 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c-rootfs\") pod \"machine-config-daemon-xggpj\" (UID: \"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c\") " pod="openshift-machine-config-operator/machine-config-daemon-xggpj" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.866590 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-host-slash\") pod \"ovnkube-node-9nf52\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.866603 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-host-run-ovn-kubernetes\") pod \"ovnkube-node-9nf52\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.866618 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a939a03f-0eec-49cb-9b23-40e359e427d5-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rkgd9\" (UID: \"a939a03f-0eec-49cb-9b23-40e359e427d5\") " pod="openshift-multus/multus-additional-cni-plugins-rkgd9" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.866636 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c-proxy-tls\") pod \"machine-config-daemon-xggpj\" (UID: \"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c\") " 
pod="openshift-machine-config-operator/machine-config-daemon-xggpj" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.866649 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgppd\" (UniqueName: \"kubernetes.io/projected/f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c-kube-api-access-kgppd\") pod \"machine-config-daemon-xggpj\" (UID: \"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c\") " pod="openshift-machine-config-operator/machine-config-daemon-xggpj" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.866663 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-run-ovn\") pod \"ovnkube-node-9nf52\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.866677 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-systemd-units\") pod \"ovnkube-node-9nf52\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.866715 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a939a03f-0eec-49cb-9b23-40e359e427d5-cni-binary-copy\") pod \"multus-additional-cni-plugins-rkgd9\" (UID: \"a939a03f-0eec-49cb-9b23-40e359e427d5\") " pod="openshift-multus/multus-additional-cni-plugins-rkgd9" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.866729 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-host-kubelet\") pod \"ovnkube-node-9nf52\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.866742 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d3a45156-295b-4093-80e7-2059f81ddbd7-env-overrides\") pod \"ovnkube-node-9nf52\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.866761 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d3a45156-295b-4093-80e7-2059f81ddbd7-ovnkube-script-lib\") pod \"ovnkube-node-9nf52\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.866776 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-host-run-netns\") pod \"ovnkube-node-9nf52\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.866802 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-host-cni-netd\") pod \"ovnkube-node-9nf52\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.866822 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-host-cni-bin\") pod \"ovnkube-node-9nf52\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:03 crc 
kubenswrapper[4986]: I1203 12:56:03.866868 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d3a45156-295b-4093-80e7-2059f81ddbd7-ovn-node-metrics-cert\") pod \"ovnkube-node-9nf52\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.866883 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a939a03f-0eec-49cb-9b23-40e359e427d5-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rkgd9\" (UID: \"a939a03f-0eec-49cb-9b23-40e359e427d5\") " pod="openshift-multus/multus-additional-cni-plugins-rkgd9" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.866899 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-etc-openvswitch\") pod \"ovnkube-node-9nf52\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.866914 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9nf52\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.866930 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a939a03f-0eec-49cb-9b23-40e359e427d5-cnibin\") pod \"multus-additional-cni-plugins-rkgd9\" (UID: \"a939a03f-0eec-49cb-9b23-40e359e427d5\") " 
pod="openshift-multus/multus-additional-cni-plugins-rkgd9" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.866945 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-run-openvswitch\") pod \"ovnkube-node-9nf52\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.866959 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-node-log\") pod \"ovnkube-node-9nf52\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.866973 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-log-socket\") pod \"ovnkube-node-9nf52\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.866989 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a939a03f-0eec-49cb-9b23-40e359e427d5-system-cni-dir\") pod \"multus-additional-cni-plugins-rkgd9\" (UID: \"a939a03f-0eec-49cb-9b23-40e359e427d5\") " pod="openshift-multus/multus-additional-cni-plugins-rkgd9" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.867004 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d3a45156-295b-4093-80e7-2059f81ddbd7-ovnkube-config\") pod \"ovnkube-node-9nf52\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 
12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.867021 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j752\" (UniqueName: \"kubernetes.io/projected/d3a45156-295b-4093-80e7-2059f81ddbd7-kube-api-access-5j752\") pod \"ovnkube-node-9nf52\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.867861 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c-mcd-auth-proxy-config\") pod \"machine-config-daemon-xggpj\" (UID: \"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c\") " pod="openshift-machine-config-operator/machine-config-daemon-xggpj" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.867906 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-run-systemd\") pod \"ovnkube-node-9nf52\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.867930 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-var-lib-openvswitch\") pod \"ovnkube-node-9nf52\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.867971 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a939a03f-0eec-49cb-9b23-40e359e427d5-os-release\") pod \"multus-additional-cni-plugins-rkgd9\" (UID: \"a939a03f-0eec-49cb-9b23-40e359e427d5\") " pod="openshift-multus/multus-additional-cni-plugins-rkgd9" Dec 03 12:56:03 crc kubenswrapper[4986]: 
I1203 12:56:03.868103 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c-rootfs\") pod \"machine-config-daemon-xggpj\" (UID: \"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c\") " pod="openshift-machine-config-operator/machine-config-daemon-xggpj" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.868132 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-host-slash\") pod \"ovnkube-node-9nf52\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.868152 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-host-run-ovn-kubernetes\") pod \"ovnkube-node-9nf52\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.868205 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a939a03f-0eec-49cb-9b23-40e359e427d5-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rkgd9\" (UID: \"a939a03f-0eec-49cb-9b23-40e359e427d5\") " pod="openshift-multus/multus-additional-cni-plugins-rkgd9" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.868778 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-node-log\") pod \"ovnkube-node-9nf52\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.868825 4986 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-log-socket\") pod \"ovnkube-node-9nf52\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.868817 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-run-openvswitch\") pod \"ovnkube-node-9nf52\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.868861 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a939a03f-0eec-49cb-9b23-40e359e427d5-system-cni-dir\") pod \"multus-additional-cni-plugins-rkgd9\" (UID: \"a939a03f-0eec-49cb-9b23-40e359e427d5\") " pod="openshift-multus/multus-additional-cni-plugins-rkgd9" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.868867 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-px97g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97196b6d-75cc-4de4-8805-f9ce3fbd4230\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v4j5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-px97g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:03Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.868903 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9nf52\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.868899 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-etc-openvswitch\") pod \"ovnkube-node-9nf52\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.868944 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/a939a03f-0eec-49cb-9b23-40e359e427d5-cnibin\") pod \"multus-additional-cni-plugins-rkgd9\" (UID: \"a939a03f-0eec-49cb-9b23-40e359e427d5\") " pod="openshift-multus/multus-additional-cni-plugins-rkgd9" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.869010 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-host-run-netns\") pod \"ovnkube-node-9nf52\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.869134 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-systemd-units\") pod \"ovnkube-node-9nf52\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.869187 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-host-cni-bin\") pod \"ovnkube-node-9nf52\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.869223 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-host-kubelet\") pod \"ovnkube-node-9nf52\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.869216 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-host-cni-netd\") pod \"ovnkube-node-9nf52\" (UID: 
\"d3a45156-295b-4093-80e7-2059f81ddbd7\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.869327 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-run-ovn\") pod \"ovnkube-node-9nf52\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.870001 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a939a03f-0eec-49cb-9b23-40e359e427d5-cni-binary-copy\") pod \"multus-additional-cni-plugins-rkgd9\" (UID: \"a939a03f-0eec-49cb-9b23-40e359e427d5\") " pod="openshift-multus/multus-additional-cni-plugins-rkgd9" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.870027 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a939a03f-0eec-49cb-9b23-40e359e427d5-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rkgd9\" (UID: \"a939a03f-0eec-49cb-9b23-40e359e427d5\") " pod="openshift-multus/multus-additional-cni-plugins-rkgd9" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.872659 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d3a45156-295b-4093-80e7-2059f81ddbd7-ovnkube-config\") pod \"ovnkube-node-9nf52\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.873108 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c-proxy-tls\") pod \"machine-config-daemon-xggpj\" (UID: \"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c\") " 
pod="openshift-machine-config-operator/machine-config-daemon-xggpj" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.873960 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d3a45156-295b-4093-80e7-2059f81ddbd7-ovn-node-metrics-cert\") pod \"ovnkube-node-9nf52\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.874047 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d3a45156-295b-4093-80e7-2059f81ddbd7-env-overrides\") pod \"ovnkube-node-9nf52\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.874083 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d3a45156-295b-4093-80e7-2059f81ddbd7-ovnkube-script-lib\") pod \"ovnkube-node-9nf52\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.886481 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j752\" (UniqueName: \"kubernetes.io/projected/d3a45156-295b-4093-80e7-2059f81ddbd7-kube-api-access-5j752\") pod \"ovnkube-node-9nf52\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.889497 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9x5m\" (UniqueName: \"kubernetes.io/projected/a939a03f-0eec-49cb-9b23-40e359e427d5-kube-api-access-p9x5m\") pod \"multus-additional-cni-plugins-rkgd9\" (UID: \"a939a03f-0eec-49cb-9b23-40e359e427d5\") " 
pod="openshift-multus/multus-additional-cni-plugins-rkgd9" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.892408 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgppd\" (UniqueName: \"kubernetes.io/projected/f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c-kube-api-access-kgppd\") pod \"machine-config-daemon-xggpj\" (UID: \"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c\") " pod="openshift-machine-config-operator/machine-config-daemon-xggpj" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.905068 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3a45156-295b-4093-80e7-2059f81ddbd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wai
ting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9nf52\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:03Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.917963 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:03Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.928865 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0d71a3ffdbe593936c83e92e92e286eaff092333bb3dd54cb5bc2825ca46fbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402aeaef03d8d3c684f0d8e7fe8a080bc02d761104e3c1095b3233aade2b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:03Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.942333 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:56:03 crc kubenswrapper[4986]: E1203 12:56:03.942452 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.942782 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:56:03 crc kubenswrapper[4986]: E1203 12:56:03.942854 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.942896 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:56:03 crc kubenswrapper[4986]: E1203 12:56:03.942958 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.943236 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rkgd9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a939a03f-0eec-49cb-9b23-40e359e427d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rkgd9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:03Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.945913 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.958022 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.958061 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.958070 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.958084 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.958094 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:03Z","lastTransitionTime":"2025-12-03T12:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.963493 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d51894dd-38f0-413a-adaf-22d196474bed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa98006a8736394300cdb9ef3331d2a23e1b3af2e2203f757c51ca9c700d26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"c
ert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7d24b5f1325cfb2d210cb9f311026ab370c91deecc5d0d951bcd4eda79816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9bb9d940d346f9895e24d98173668ea2d0e68fd3035e5b03475bbc936582958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7fa0d673f037c8edcc75ee30b1bbeea73da55d5459347dacc9251fffbb38be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4007e9ed33227938a5a3656fd74958aef599418fec4cd2048dfa7dd88924dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":
\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:03Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.972712 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-rkgd9" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.981929 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:03 crc kubenswrapper[4986]: I1203 12:56:03.984969 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b923e52e6284c8adca87e7d3d801a432a676e4389d7ab3c0cba584c5b7d96f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\
\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:03Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:03 crc kubenswrapper[4986]: W1203 12:56:03.985582 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda939a03f_0eec_49cb_9b23_40e359e427d5.slice/crio-b5c0cee27e6aef9e707a1fd2b1929a634b4d59bd98be9e1289601c1cb9be5419 WatchSource:0}: Error finding container b5c0cee27e6aef9e707a1fd2b1929a634b4d59bd98be9e1289601c1cb9be5419: Status 404 returned error can't find the container with id b5c0cee27e6aef9e707a1fd2b1929a634b4d59bd98be9e1289601c1cb9be5419 Dec 03 12:56:04 crc kubenswrapper[4986]: W1203 12:56:04.007108 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3a45156_295b_4093_80e7_2059f81ddbd7.slice/crio-842e8329ae9a18171900a2867fc679837cb8e1260a3bbce3d1c6edb300d657ae WatchSource:0}: Error finding container 842e8329ae9a18171900a2867fc679837cb8e1260a3bbce3d1c6edb300d657ae: Status 404 returned error can't find the container with id 842e8329ae9a18171900a2867fc679837cb8e1260a3bbce3d1c6edb300d657ae Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.010431 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:04Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.025893 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xggpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:04Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.044572 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231ed674-1d19-4ee2-b29c-a1b7453ed531\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T12:56:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 12:56:01.437086 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 12:56:01.437205 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 12:56:01.438166 1 dynamic_serving_content.go:116] \\\\\\\"Loaded 
a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972143202/tls.crt::/tmp/serving-cert-1972143202/tls.key\\\\\\\"\\\\nI1203 12:56:02.167657 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 12:56:02.174337 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 12:56:02.174360 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 12:56:02.174377 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 12:56:02.174382 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 12:56:02.178753 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 12:56:02.179168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 12:56:02.179184 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 12:56:02.179187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 12:56:02.179190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 12:56:02.178775 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 12:56:02.181220 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:04Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.063658 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.063698 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.063707 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.063722 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.063731 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:04Z","lastTransitionTime":"2025-12-03T12:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.066732 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.070299 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e"} Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.071036 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.072448 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" event={"ID":"d3a45156-295b-4093-80e7-2059f81ddbd7","Type":"ContainerStarted","Data":"842e8329ae9a18171900a2867fc679837cb8e1260a3bbce3d1c6edb300d657ae"} Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.081859 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" event={"ID":"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c","Type":"ContainerStarted","Data":"4d30cbe96ae88382400b4962b77ae74d4c4dec516f301604ca12a6aa8b9b54e6"} Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.084625 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rkgd9" event={"ID":"a939a03f-0eec-49cb-9b23-40e359e427d5","Type":"ContainerStarted","Data":"b5c0cee27e6aef9e707a1fd2b1929a634b4d59bd98be9e1289601c1cb9be5419"} Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.087070 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-px97g" 
event={"ID":"97196b6d-75cc-4de4-8805-f9ce3fbd4230","Type":"ContainerStarted","Data":"c28bf1aff27ba408262b5fc561e084ff3813ce95a7c14b57fea24d409eb33bea"} Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.087823 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-px97g" event={"ID":"97196b6d-75cc-4de4-8805-f9ce3fbd4230","Type":"ContainerStarted","Data":"93c7c845eec70b18b82c54d78c6a0793d6b02734c8861992bf78246dddb170ed"} Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.088980 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:04Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.109401 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0d71a3ffdbe593936c83e92e92e286eaff092333bb3dd54cb5bc2825ca46fbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402aeaef03d8d3c684f0d8e7fe8a080bc02d761104e3c1095b3233aade2b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:04Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.124552 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rkgd9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a939a03f-0eec-49cb-9b23-40e359e427d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni 
whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\
\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rkgd9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:04Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.145925 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231ed674-1d19-4ee2-b29c-a1b7453ed531\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T12:56:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 12:56:01.437086 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 12:56:01.437205 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 12:56:01.438166 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972143202/tls.crt::/tmp/serving-cert-1972143202/tls.key\\\\\\\"\\\\nI1203 12:56:02.167657 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 12:56:02.174337 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 12:56:02.174360 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 12:56:02.174377 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 12:56:02.174382 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 12:56:02.178753 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 12:56:02.179168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 12:56:02.179184 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 12:56:02.179187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 12:56:02.179190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 12:56:02.178775 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 12:56:02.181220 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:04Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.166940 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.166975 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.166987 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.167001 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.167011 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:04Z","lastTransitionTime":"2025-12-03T12:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.170981 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d51894dd-38f0-413a-adaf-22d196474bed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa98006a8736394300cdb9ef3331d2a23e1b3af2e2203f757c51ca9c700d26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7d24b5f1325cfb2d210cb9f311026ab370c91deecc5d0d951bcd4eda79816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9bb9d940d346f9895e24d98173668ea2d0e68fd3035e5b03475bbc936582958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7fa0d673f037c8edcc75ee30b1bbeea73da55d5459347dacc9251fffbb38be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4007e9ed33227938a5a3656fd74958aef599418fec4cd2048dfa7dd88924dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-12-03T12:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:04Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.186836 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b923e52e6284c8adca87e7d3d801a432a676e4389d7ab3c0cba584c5b7d96f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:04Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.205825 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:04Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.216937 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xggpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:04Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.245240 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:04Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.269215 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.269260 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.269271 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.269306 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.269318 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:04Z","lastTransitionTime":"2025-12-03T12:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.288988 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:04Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.323579 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fszqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76bcf5c-6a62-4360-826d-ecac337c88ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d94370a14317ade02cb805783e09ffcd8c6ff07f35e9124e302028e6dfa3c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhtrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fszqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:04Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.367168 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-px97g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97196b6d-75cc-4de4-8805-f9ce3fbd4230\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v4j5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-px97g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:04Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.372198 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.372243 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.372256 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.372272 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.372302 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:04Z","lastTransitionTime":"2025-12-03T12:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.410630 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3a45156-295b-4093-80e7-2059f81ddbd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9nf52\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:04Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.441082 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fszqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76bcf5c-6a62-4360-826d-ecac337c88ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d94370a14317ade02cb805783e09ffcd8c6ff07f35e9124e302028e6dfa3c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T
12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhtrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fszqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:04Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.474827 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.474869 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.474881 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.474898 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.474910 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:04Z","lastTransitionTime":"2025-12-03T12:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.486996 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-px97g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97196b6d-75cc-4de4-8805-f9ce3fbd4230\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c28bf1aff27ba408262b5fc561e084ff3813ce95a7c14b57fea24d409eb33bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"na
me\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v4j5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-px97g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-12-03T12:56:04Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.531622 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3a45156-295b-4093-80e7-2059f81ddbd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9nf52\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:04Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.564468 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:04Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.577374 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.577406 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.577415 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.577427 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.577437 4986 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:04Z","lastTransitionTime":"2025-12-03T12:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.606055 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0d71a3ffdbe593936c83e92e92e286eaff092333bb3dd54cb5bc2825ca46fbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402aeaef03d8d3c684f0d8e7fe8a080bc02d761104e3c1095b3233aade2b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:04Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.651725 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rkgd9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a939a03f-0eec-49cb-9b23-40e359e427d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rkgd9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:04Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.681402 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.681682 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.681771 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.681864 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.681943 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:04Z","lastTransitionTime":"2025-12-03T12:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.692266 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d51894dd-38f0-413a-adaf-22d196474bed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa98006a8736394300cdb9ef3331d2a23e1b3af2e2203f757c51ca9c700d26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7d24b5f1325cfb2d210cb9f311026ab370c91deecc5d0d951bcd4eda79816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9bb9d940d346f9895e24d98173668ea2d0e68fd3035e5b03475bbc936582958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7fa0d673f037c8edcc75ee30b1bbeea73da55d5459347dacc9251fffbb38be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4007e9ed33227938a5a3656fd74958aef599418fec4cd2048dfa7dd88924dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-12-03T12:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:04Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.733654 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b923e52e6284c8adca87e7d3d801a432a676e4389d7ab3c0cba584c5b7d96f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:04Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.742420 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-8lcrn"] Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.742833 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-8lcrn" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.765881 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:04Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.775457 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.788889 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.788924 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.788932 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.788946 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:04 crc 
kubenswrapper[4986]: I1203 12:56:04.788955 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:04Z","lastTransitionTime":"2025-12-03T12:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.794941 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.815923 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.835755 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.875671 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/75e4c223-9952-4d1b-b83b-5c45cfc51432-host\") pod \"node-ca-8lcrn\" (UID: \"75e4c223-9952-4d1b-b83b-5c45cfc51432\") " pod="openshift-image-registry/node-ca-8lcrn" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.875729 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnr67\" (UniqueName: \"kubernetes.io/projected/75e4c223-9952-4d1b-b83b-5c45cfc51432-kube-api-access-dnr67\") pod \"node-ca-8lcrn\" (UID: \"75e4c223-9952-4d1b-b83b-5c45cfc51432\") " pod="openshift-image-registry/node-ca-8lcrn" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.875779 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" 
(UniqueName: \"kubernetes.io/configmap/75e4c223-9952-4d1b-b83b-5c45cfc51432-serviceca\") pod \"node-ca-8lcrn\" (UID: \"75e4c223-9952-4d1b-b83b-5c45cfc51432\") " pod="openshift-image-registry/node-ca-8lcrn" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.890899 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.890938 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.890948 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.890966 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.890979 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:04Z","lastTransitionTime":"2025-12-03T12:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.906721 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xggpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:04Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.926378 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231ed674-1d19-4ee2-b29c-a1b7453ed531\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T12:56:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 12:56:01.437086 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 12:56:01.437205 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 12:56:01.438166 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1972143202/tls.crt::/tmp/serving-cert-1972143202/tls.key\\\\\\\"\\\\nI1203 12:56:02.167657 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 12:56:02.174337 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 12:56:02.174360 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 12:56:02.174377 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 12:56:02.174382 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 12:56:02.178753 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 12:56:02.179168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 12:56:02.179184 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 12:56:02.179187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 12:56:02.179190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 12:56:02.178775 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 12:56:02.181220 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:04Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.962489 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:04Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.976505 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/75e4c223-9952-4d1b-b83b-5c45cfc51432-serviceca\") pod \"node-ca-8lcrn\" (UID: \"75e4c223-9952-4d1b-b83b-5c45cfc51432\") " pod="openshift-image-registry/node-ca-8lcrn" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.976552 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/75e4c223-9952-4d1b-b83b-5c45cfc51432-host\") pod \"node-ca-8lcrn\" (UID: \"75e4c223-9952-4d1b-b83b-5c45cfc51432\") " pod="openshift-image-registry/node-ca-8lcrn" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.976570 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnr67\" (UniqueName: \"kubernetes.io/projected/75e4c223-9952-4d1b-b83b-5c45cfc51432-kube-api-access-dnr67\") pod \"node-ca-8lcrn\" (UID: \"75e4c223-9952-4d1b-b83b-5c45cfc51432\") " pod="openshift-image-registry/node-ca-8lcrn" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.976678 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/75e4c223-9952-4d1b-b83b-5c45cfc51432-host\") pod \"node-ca-8lcrn\" (UID: \"75e4c223-9952-4d1b-b83b-5c45cfc51432\") " pod="openshift-image-registry/node-ca-8lcrn" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.977684 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/75e4c223-9952-4d1b-b83b-5c45cfc51432-serviceca\") pod \"node-ca-8lcrn\" (UID: \"75e4c223-9952-4d1b-b83b-5c45cfc51432\") " pod="openshift-image-registry/node-ca-8lcrn" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.992917 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.992975 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.992987 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:04 crc kubenswrapper[4986]: I1203 12:56:04.992999 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:04 crc kubenswrapper[4986]: 
I1203 12:56:04.993008 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:04Z","lastTransitionTime":"2025-12-03T12:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.007333 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:05Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.031823 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnr67\" (UniqueName: \"kubernetes.io/projected/75e4c223-9952-4d1b-b83b-5c45cfc51432-kube-api-access-dnr67\") pod \"node-ca-8lcrn\" (UID: \"75e4c223-9952-4d1b-b83b-5c45cfc51432\") " pod="openshift-image-registry/node-ca-8lcrn" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.074895 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3a45156-295b-4093-80e7-2059f81ddbd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9nf52\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:05Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.093883 4986 generic.go:334] "Generic (PLEG): container finished" podID="d3a45156-295b-4093-80e7-2059f81ddbd7" containerID="9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e" exitCode=0 Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.093988 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" event={"ID":"d3a45156-295b-4093-80e7-2059f81ddbd7","Type":"ContainerDied","Data":"9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e"} Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.095554 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.095595 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.095603 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.095614 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.095627 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:05Z","lastTransitionTime":"2025-12-03T12:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.096311 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" event={"ID":"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c","Type":"ContainerStarted","Data":"535b7b05dbe9b47808f3d358a1f370b696a5d5bb4dd96b049ac2cbe3d18ea065"} Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.096334 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" event={"ID":"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c","Type":"ContainerStarted","Data":"d0ab87b22153ba31fa79d2d210e695b274c2ddc878ad679c605aa8fb716534d1"} Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.097480 4986 generic.go:334] "Generic (PLEG): container finished" podID="a939a03f-0eec-49cb-9b23-40e359e427d5" containerID="fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774" exitCode=0 Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.098224 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rkgd9" event={"ID":"a939a03f-0eec-49cb-9b23-40e359e427d5","Type":"ContainerDied","Data":"fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774"} Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.105299 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fszqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76bcf5c-6a62-4360-826d-ecac337c88ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d94370a14317ade02cb805783e09ffcd8c6ff07f35e9124e302028e6dfa3c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhtrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fszqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:05Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.150778 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-px97g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97196b6d-75cc-4de4-8805-f9ce3fbd4230\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c28bf1aff27ba408262b5fc561e084ff3813ce95a7c14b57fea24d409eb33bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v4j5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-px97g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:05Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.171840 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-8lcrn" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.186133 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rkgd9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a939a03f-0eec-49cb-9b23-40e359e427d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rkgd9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:05Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.201347 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.201391 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.201404 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.201421 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.201433 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:05Z","lastTransitionTime":"2025-12-03T12:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.225220 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:05Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.263686 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0d71a3ffdbe593936c83e92e92e286eaff092333bb3dd54cb5bc2825ca46fbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402aeaef03d8d3c684f0d8e7fe8a080bc02d761104e3c1095b3233aade2b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:05Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.303468 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:05Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.304647 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.304684 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:05 crc kubenswrapper[4986]: 
I1203 12:56:05.304693 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.304709 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.304725 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:05Z","lastTransitionTime":"2025-12-03T12:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.346198 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xggpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:05Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.386031 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231ed674-1d19-4ee2-b29c-a1b7453ed531\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T12:56:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 12:56:01.437086 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 12:56:01.437205 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 12:56:01.438166 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1972143202/tls.crt::/tmp/serving-cert-1972143202/tls.key\\\\\\\"\\\\nI1203 12:56:02.167657 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 12:56:02.174337 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 12:56:02.174360 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 12:56:02.174377 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 12:56:02.174382 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 12:56:02.178753 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 12:56:02.179168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 12:56:02.179184 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 12:56:02.179187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 12:56:02.179190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 12:56:02.178775 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 12:56:02.181220 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:05Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.406540 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.406580 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.406588 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.406601 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.406611 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:05Z","lastTransitionTime":"2025-12-03T12:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.433120 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d51894dd-38f0-413a-adaf-22d196474bed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa98006a8736394300cdb9ef3331d2a23e1b3af2e2203f757c51ca9c700d26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7d24b5f1325cfb2d210cb9f311026ab370c91deecc5d0d951bcd4eda79816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9bb9d940d346f9895e24d98173668ea2d0e68fd3035e5b03475bbc936582958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7fa0d673f037c8edcc75ee30b1bbeea73da55d5459347dacc9251fffbb38be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4007e9ed33227938a5a3656fd74958aef599418fec4cd2048dfa7dd88924dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-12-03T12:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:05Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.469776 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b923e52e6284c8adca87e7d3d801a432a676e4389d7ab3c0cba584c5b7d96f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:05Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.504057 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8lcrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75e4c223-9952-4d1b-b83b-5c45cfc51432\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnr67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8lcrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:05Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.508887 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.508926 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.508936 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 
12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.508950 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.508959 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:05Z","lastTransitionTime":"2025-12-03T12:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.544622 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:05Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.582881 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.583008 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:56:05 crc kubenswrapper[4986]: E1203 12:56:05.583083 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:56:09.583044675 +0000 UTC m=+29.049475876 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.583155 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:56:05 crc kubenswrapper[4986]: E1203 12:56:05.583160 4986 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 12:56:05 crc kubenswrapper[4986]: E1203 12:56:05.583258 4986 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 12:56:09.58323276 +0000 UTC m=+29.049664051 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 12:56:05 crc kubenswrapper[4986]: E1203 12:56:05.583338 4986 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 12:56:05 crc kubenswrapper[4986]: E1203 12:56:05.583380 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 12:56:09.583372403 +0000 UTC m=+29.049803594 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.584998 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:05Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.611118 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.611163 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.611175 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.611192 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.611206 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:05Z","lastTransitionTime":"2025-12-03T12:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.627860 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0d71a3ffdbe593936c83e92e92e286eaff092333bb3dd54cb5bc2825ca46fbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://5a402aeaef03d8d3c684f0d8e7fe8a080bc02d761104e3c1095b3233aade2b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:05Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.664902 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rkgd9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a939a03f-0eec-49cb-9b23-40e359e427d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rkgd9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:05Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.684433 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.684481 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:56:05 crc kubenswrapper[4986]: E1203 
12:56:05.684600 4986 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 12:56:05 crc kubenswrapper[4986]: E1203 12:56:05.684617 4986 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 12:56:05 crc kubenswrapper[4986]: E1203 12:56:05.684627 4986 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 12:56:05 crc kubenswrapper[4986]: E1203 12:56:05.684637 4986 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 12:56:05 crc kubenswrapper[4986]: E1203 12:56:05.684670 4986 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 12:56:05 crc kubenswrapper[4986]: E1203 12:56:05.684683 4986 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 12:56:05 crc kubenswrapper[4986]: E1203 12:56:05.684672 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2025-12-03 12:56:09.684658496 +0000 UTC m=+29.151089687 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 12:56:05 crc kubenswrapper[4986]: E1203 12:56:05.684744 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-03 12:56:09.684729689 +0000 UTC m=+29.151160880 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.704084 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:05Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.715399 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.715428 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.715438 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.715451 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.715460 4986 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:05Z","lastTransitionTime":"2025-12-03T12:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.744646 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b923e52e6284c8adca87e7d3d801a432a676e4389d7ab3c0cba584c5b7d96f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:05Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.784005 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:05Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.817848 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.817879 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 
12:56:05.817888 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.817900 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.817909 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:05Z","lastTransitionTime":"2025-12-03T12:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.822818 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://535b7b05dbe9b47808f3d358a1f370b696a5d5bb4dd96b049ac2cbe3d18ea065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ab87b22153ba31fa79d2d210e695b274c2ddc8
78ad679c605aa8fb716534d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xggpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:05Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.863376 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"231ed674-1d19-4ee2-b29c-a1b7453ed531\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T12:56:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 12:56:01.437086 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 12:56:01.437205 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 12:56:01.438166 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972143202/tls.crt::/tmp/serving-cert-1972143202/tls.key\\\\\\\"\\\\nI1203 12:56:02.167657 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 12:56:02.174337 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 12:56:02.174360 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 12:56:02.174377 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 12:56:02.174382 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 12:56:02.178753 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 12:56:02.179168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 12:56:02.179184 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 12:56:02.179187 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 12:56:02.179190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 12:56:02.178775 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 12:56:02.181220 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:05Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.912436 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d51894dd-38f0-413a-adaf-22d196474bed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa98006a8736394300cdb9ef3331d2a23e1b3af2e2203f757c51ca9c700d26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7d24b5f1325cfb2d210cb9f311026ab370c91deecc5d0d951bcd4eda79816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9bb9d940d346f9895e24d98173668ea2d0e68fd3035e5b03475bbc936582958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7fa0d673f037c8edcc75ee30b1bbeea73da55d5459347dacc9251fffbb38be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4007e9ed33227938a5a3656fd74958aef599418fec4cd2048dfa7dd88924dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:05Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.920397 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.920442 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.920455 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.920472 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.920481 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:05Z","lastTransitionTime":"2025-12-03T12:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.943258 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.943339 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.943265 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:56:05 crc kubenswrapper[4986]: E1203 12:56:05.943383 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:56:05 crc kubenswrapper[4986]: E1203 12:56:05.943467 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:56:05 crc kubenswrapper[4986]: E1203 12:56:05.943565 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.945231 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:05Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:05 crc kubenswrapper[4986]: I1203 12:56:05.983635 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8lcrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75e4c223-9952-4d1b-b83b-5c45cfc51432\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnr67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8lcrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:05Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.022934 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.022965 4986 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.022978 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.022991 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.023001 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:06Z","lastTransitionTime":"2025-12-03T12:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.025433 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:06Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.062907 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-px97g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97196b6d-75cc-4de4-8805-f9ce3fbd4230\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c28bf1aff27ba408262b5fc561e084ff3813ce95a7c14b57fea24d409eb33bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v4j5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-px97g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:06Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.104780 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" 
event={"ID":"d3a45156-295b-4093-80e7-2059f81ddbd7","Type":"ContainerStarted","Data":"7d94eca13821749d7e15a350c411a831d2887f4211ed092a4fb57e26e49fcd9d"} Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.104820 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" event={"ID":"d3a45156-295b-4093-80e7-2059f81ddbd7","Type":"ContainerStarted","Data":"2ff8c1f1602e57602cae3f46dd3d2bf8d7abad6bd0952f4347152678e8606d92"} Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.106331 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rkgd9" event={"ID":"a939a03f-0eec-49cb-9b23-40e359e427d5","Type":"ContainerStarted","Data":"7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a"} Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.108087 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-8lcrn" event={"ID":"75e4c223-9952-4d1b-b83b-5c45cfc51432","Type":"ContainerStarted","Data":"6b2af5d43c7842f129b6521e5b3b1963f6060b3e0c552a771c7df6a2a9ef867d"} Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.108124 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-8lcrn" event={"ID":"75e4c223-9952-4d1b-b83b-5c45cfc51432","Type":"ContainerStarted","Data":"2a3feb27504d284851e1dd8130ff2b795d65cd24aed2bcdd98517ffa0cae9267"} Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.109540 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"0d66d06e79ffb2ecd12acc0380e5e4839904147f9194e48277ca667df13a0cc2"} Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.111352 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3a45156-295b-4093-80e7-2059f81ddbd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9nf52\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:06Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.125780 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.125825 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.125834 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.125848 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.125857 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:06Z","lastTransitionTime":"2025-12-03T12:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.142081 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fszqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76bcf5c-6a62-4360-826d-ecac337c88ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d94370a14317ade02cb805783e09ffcd8c6ff07f35e9124e302028e6dfa3c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhtrn\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fszqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:06Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.186693 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:06Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.223389 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0d71a3ffdbe593936c83e92e92e286eaff092333bb3dd54cb5bc2825ca46fbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402aeaef03d8d3c684f0d8e7fe8a080bc02d761104e3c1095b3233aade2b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:06Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.227957 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.228010 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.228024 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.228043 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.228055 4986 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:06Z","lastTransitionTime":"2025-12-03T12:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.264372 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rkgd9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a939a03f-0eec-49cb-9b23-40e359e427d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rkgd9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:06Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.302835 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://535b7b05dbe9b47808f3d358a1f370b696a5d5bb4dd96b049ac2cbe3d18ea065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ab87b22153ba31fa79d2d210e695b274c2ddc8
78ad679c605aa8fb716534d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xggpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:06Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.330629 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.330662 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.330670 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:06 crc 
kubenswrapper[4986]: I1203 12:56:06.330688 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.330698 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:06Z","lastTransitionTime":"2025-12-03T12:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.347421 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231ed674-1d19-4ee2-b29c-a1b7453ed531\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T12:56:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 12:56:01.437086 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 12:56:01.437205 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 12:56:01.438166 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1972143202/tls.crt::/tmp/serving-cert-1972143202/tls.key\\\\\\\"\\\\nI1203 12:56:02.167657 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 12:56:02.174337 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 12:56:02.174360 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 12:56:02.174377 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 12:56:02.174382 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 12:56:02.178753 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 12:56:02.179168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 12:56:02.179184 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 12:56:02.179187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 12:56:02.179190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 12:56:02.178775 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 12:56:02.181220 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:06Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.390824 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d51894dd-38f0-413a-adaf-22d196474bed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa98006a8736394300cdb9ef3331d2a23e1b3af2e2203f757c51ca9c700d26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731
4731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7d24b5f1325cfb2d210cb9f311026ab370c91deecc5d0d951bcd4eda79816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9bb9d940d346f9895e24d98173668ea2d0e68fd3035e5b03475bbc936582958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7fa0d673f037c8edcc75ee30b1bbeea73da55d5459347dacc9251fffbb38be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4007e9ed33227938a5a3656fd74958aef599418fec4cd2048dfa7dd88924dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:06Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.426396 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b923e52e6284c8adca87e7d3d801a432a676e4389d7ab3c0cba584c5b7d96f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:06Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.432962 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.432991 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.433000 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.433013 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.433022 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:06Z","lastTransitionTime":"2025-12-03T12:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.463258 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d66d06e79ffb2ecd12acc0380e5e4839904147f9194e48277ca667df13a0cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:06Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.528946 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:06Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.535216 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.535259 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.535270 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.535304 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.535321 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:06Z","lastTransitionTime":"2025-12-03T12:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.545995 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:06Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.581344 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8lcrn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75e4c223-9952-4d1b-b83b-5c45cfc51432\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b2af5d43c7842f129b6521e5b3b1963f6060b3e0c552a771c7df6a2a9ef867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnr67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8lcrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:06Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.621601 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fszqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76bcf5c-6a62-4360-826d-ecac337c88ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d94370a14317ade02cb805783e09ffcd8c6ff07f35e9124e302028e6dfa3c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhtrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fszqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:06Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.638069 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.638373 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.638472 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.638539 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.638600 4986 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:06Z","lastTransitionTime":"2025-12-03T12:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.665027 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-px97g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97196b6d-75cc-4de4-8805-f9ce3fbd4230\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c28bf1aff27ba408262b5fc561e084ff3813ce95a7c14b57fea24d409eb33bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v4j5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:
56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-px97g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:06Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.710312 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3a45156-295b-4093-80e7-2059f81ddbd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9nf52\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:06Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.740839 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.741055 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.741113 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.741196 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.741259 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:06Z","lastTransitionTime":"2025-12-03T12:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.843355 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.843720 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.843906 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.844088 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.844316 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:06Z","lastTransitionTime":"2025-12-03T12:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.946468 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.946856 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.946865 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.946879 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:06 crc kubenswrapper[4986]: I1203 12:56:06.946890 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:06Z","lastTransitionTime":"2025-12-03T12:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.048794 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.048842 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.048853 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.048870 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.048882 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:07Z","lastTransitionTime":"2025-12-03T12:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.114057 4986 generic.go:334] "Generic (PLEG): container finished" podID="a939a03f-0eec-49cb-9b23-40e359e427d5" containerID="7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a" exitCode=0 Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.114111 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rkgd9" event={"ID":"a939a03f-0eec-49cb-9b23-40e359e427d5","Type":"ContainerDied","Data":"7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a"} Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.117753 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" event={"ID":"d3a45156-295b-4093-80e7-2059f81ddbd7","Type":"ContainerStarted","Data":"d77d078572dbe7f11e91ece0d25e76bd17004167dabce267a2b568e798aae158"} Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.117792 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" event={"ID":"d3a45156-295b-4093-80e7-2059f81ddbd7","Type":"ContainerStarted","Data":"2843fb3fcbadb926b104a9f7c4084468b36293aebed6c692c29419a0457b62a2"} Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.117807 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" event={"ID":"d3a45156-295b-4093-80e7-2059f81ddbd7","Type":"ContainerStarted","Data":"6be6405facfa7726cbe56c48184cba0123774e8e34d3ceb9e18daed6f35560c6"} Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.117819 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" event={"ID":"d3a45156-295b-4093-80e7-2059f81ddbd7","Type":"ContainerStarted","Data":"5c79f84fc019d109bac446dd058b4bafb0b516a13fae47b1924e4f7690658ba8"} Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.128023 4986 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:07Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.143453 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0d71a3ffdbe593936c83e92e92e286eaff092333bb3dd54cb5bc2825ca46fbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402aeaef03d8d3c684f0d8e7fe8a080bc02d761104e3c1095b3233aade2b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:07Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.156419 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.156472 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.156482 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.156498 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.156508 4986 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:07Z","lastTransitionTime":"2025-12-03T12:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.158250 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rkgd9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a939a03f-0eec-49cb-9b23-40e359e427d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rkgd9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:07Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.177503 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d51894dd-38f0-413a-adaf-22d196474bed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa98006a8736394300cdb9ef3331d2a23e1b3af2e2203f757c51ca9c700d26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7d24b5f1325cfb2d210cb9f311026ab370c91deecc5d0d951bcd4eda79816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9bb9d940d346f9895e24d98173668ea2d0e68fd3035e5b03475bbc936582958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7fa0d673f037c8edcc75ee30b1bbeea73da55d5459347dacc9251fffbb38be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4007e9ed33227938a5a3656fd74958aef599418fec4cd2048dfa7dd88924dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:07Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.190797 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b923e52e6284c8adca87e7d3d801a432a676e4389d7ab3c0cba584c5b7d96f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:07Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.202631 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d66d06e79ffb2ecd12acc0380e5e4839904147f9194e48277ca667df13a0cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T12:56:07Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.214091 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://535b7b05dbe9b47808f3d358a1f370b696a5d5bb4dd96b049ac2cbe3d18ea065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ab87b22153ba31fa79d2d210e695b274c2ddc878ad679c605aa8fb716534d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xggpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:07Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.229021 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"231ed674-1d19-4ee2-b29c-a1b7453ed531\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T12:56:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 12:56:01.437086 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 12:56:01.437205 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 12:56:01.438166 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972143202/tls.crt::/tmp/serving-cert-1972143202/tls.key\\\\\\\"\\\\nI1203 12:56:02.167657 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 12:56:02.174337 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 12:56:02.174360 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 12:56:02.174377 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 12:56:02.174382 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 12:56:02.178753 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 12:56:02.179168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 12:56:02.179184 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 12:56:02.179187 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 12:56:02.179190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 12:56:02.178775 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 12:56:02.181220 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:07Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.242963 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:07Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.253805 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:07Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.258710 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.258735 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.258745 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.258758 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.258774 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:07Z","lastTransitionTime":"2025-12-03T12:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.262361 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8lcrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75e4c223-9952-4d1b-b83b-5c45cfc51432\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b2af5d43c7842f129b6521e5b3b1963f6060b3e0c552a771c7df6a2a9ef867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/d
ocker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnr67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8lcrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:07Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.272232 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fszqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76bcf5c-6a62-4360-826d-ecac337c88ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d94370a14317ade02cb805783e09ffcd8c6ff07f35e9124e302028e6dfa3c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhtrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fszqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:07Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.285151 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-px97g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97196b6d-75cc-4de4-8805-f9ce3fbd4230\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c28bf1aff27ba408262b5fc561e084ff3813ce95a7c14b57fea24d409eb33bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v4j5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-px97g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:07Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.305082 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3a45156-295b-4093-80e7-2059f81ddbd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9nf52\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:07Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.361064 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.361114 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.361123 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.361138 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.361150 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:07Z","lastTransitionTime":"2025-12-03T12:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.463739 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.463790 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.463801 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.463820 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.463832 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:07Z","lastTransitionTime":"2025-12-03T12:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.487719 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.491171 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.495978 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.500222 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:07Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.511892 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0d71a3ffdbe593936c83e92e92e286eaff092333bb3dd54cb5bc2825ca46fbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402aeaef03d8d3c684f0d8e7fe8a080bc02d761104e3c1095b3233aade2b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:07Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.528159 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rkgd9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a939a03f-0eec-49cb-9b23-40e359e427d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rkgd9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-12-03T12:56:07Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.542866 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231ed674-1d19-4ee2-b29c-a1b7453ed531\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T12:56:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 12:56:01.437086 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 12:56:01.437205 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 12:56:01.438166 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1972143202/tls.crt::/tmp/serving-cert-1972143202/tls.key\\\\\\\"\\\\nI1203 12:56:02.167657 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 12:56:02.174337 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 12:56:02.174360 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 12:56:02.174377 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 12:56:02.174382 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 12:56:02.178753 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 12:56:02.179168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 12:56:02.179184 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 12:56:02.179187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 12:56:02.179190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 12:56:02.178775 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 12:56:02.181220 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:07Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.563723 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d51894dd-38f0-413a-adaf-22d196474bed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa98006a8736394300cdb9ef3331d2a23e1b3af2e2203f757c51ca9c700d26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731
4731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7d24b5f1325cfb2d210cb9f311026ab370c91deecc5d0d951bcd4eda79816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9bb9d940d346f9895e24d98173668ea2d0e68fd3035e5b03475bbc936582958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7fa0d673f037c8edcc75ee30b1bbeea73da55d5459347dacc9251fffbb38be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4007e9ed33227938a5a3656fd74958aef599418fec4cd2048dfa7dd88924dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:07Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.565866 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.565901 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.565912 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.565929 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.565939 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:07Z","lastTransitionTime":"2025-12-03T12:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.575820 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b923e52e6284c8adca87e7d3d801a432a676e4389d7ab3c0cba584c5b7d96f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:07Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.586276 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d66d06e79ffb2ecd12acc0380e5e4839904147f9194e48277ca667df13a0cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T12:56:07Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.602657 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://535b7b05dbe9b47808f3d358a1f370b696a5d5bb4dd96b049ac2cbe3d18ea065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ab87b22153ba31fa79d2d210e695b274c2ddc878ad679c605aa8fb716534d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xggpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:07Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.644384 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:07Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.668730 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.668771 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.668779 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.668793 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.668805 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:07Z","lastTransitionTime":"2025-12-03T12:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.687200 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:07Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.721822 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8lcrn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75e4c223-9952-4d1b-b83b-5c45cfc51432\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b2af5d43c7842f129b6521e5b3b1963f6060b3e0c552a771c7df6a2a9ef867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnr67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8lcrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:07Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.762076 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fszqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76bcf5c-6a62-4360-826d-ecac337c88ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d94370a14317ade02cb805783e09ffcd8c6ff07f35e9124e302028e6dfa3c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhtrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fszqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:07Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.770591 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.770621 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.770630 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.770644 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.770652 4986 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:07Z","lastTransitionTime":"2025-12-03T12:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.804138 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-px97g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97196b6d-75cc-4de4-8805-f9ce3fbd4230\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c28bf1aff27ba408262b5fc561e084ff3813ce95a7c14b57fea24d409eb33bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v4j5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:
56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-px97g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:07Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.847422 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3a45156-295b-4093-80e7-2059f81ddbd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9nf52\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:07Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.873126 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.873178 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.873195 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.873211 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.873224 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:07Z","lastTransitionTime":"2025-12-03T12:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.881044 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fszqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76bcf5c-6a62-4360-826d-ecac337c88ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d94370a14317ade02cb805783e09ffcd8c6ff07f35e9124e302028e6dfa3c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhtrn\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fszqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:07Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.924084 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-px97g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97196b6d-75cc-4de4-8805-f9ce3fbd4230\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c28bf1aff27ba408262b5fc561e084ff3813ce95a7c14b57fea24d409eb33bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v4j5x\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-px97g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:07Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.942888 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.942927 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:56:07 crc kubenswrapper[4986]: E1203 12:56:07.943037 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:56:07 crc kubenswrapper[4986]: E1203 12:56:07.943189 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.943503 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:56:07 crc kubenswrapper[4986]: E1203 12:56:07.943716 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.969665 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3a45156-295b-4093-80e7-2059f81ddbd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9nf52\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:07Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.975253 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.975313 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.975326 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.975341 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:07 crc kubenswrapper[4986]: I1203 12:56:07.975362 4986 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:07Z","lastTransitionTime":"2025-12-03T12:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.002805 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:08Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.042342 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0d71a3ffdbe593936c83e92e92e286eaff092333bb3dd54cb5bc2825ca46fbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402aeaef03d8d3c684f0d8e7fe8a080bc02d761104e3c1095b3233aade2b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:08Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.077794 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.077851 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.077869 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.077890 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.077905 4986 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:08Z","lastTransitionTime":"2025-12-03T12:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.086148 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rkgd9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a939a03f-0eec-49cb-9b23-40e359e427d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rkgd9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:08Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.122254 4986 generic.go:334] "Generic (PLEG): container finished" podID="a939a03f-0eec-49cb-9b23-40e359e427d5" containerID="5cd633caedf88a1bec633cef61b8b30ae422b8c8d91a012395831a09d398f0f7" exitCode=0 Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.122340 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rkgd9" 
event={"ID":"a939a03f-0eec-49cb-9b23-40e359e427d5","Type":"ContainerDied","Data":"5cd633caedf88a1bec633cef61b8b30ae422b8c8d91a012395831a09d398f0f7"} Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.127772 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee22d4f3-8488-4497-a757-a6dd51e84539\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4fc247caf16eab425363e12219a76c40022f0daf0bc9c8b52f4eb8e3726f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/e
tc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86d62d991d6516ed42f7cad655018e1ebc85c8c1b100d624c1c2e5e8f616cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ba9819f4f1746f03a558617fc6ee14f5627c37c362d1198a303b6f1854c7bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79cba0b7394bf25be648502fc84a916d8f2fa0949edce20b5a748046ce9b6f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5
d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:08Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:08 crc kubenswrapper[4986]: E1203 12:56:08.140164 4986 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.180316 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.180356 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.180365 4986 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.180379 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.180390 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:08Z","lastTransitionTime":"2025-12-03T12:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.189840 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d51894dd-38f0-413a-adaf-22d196474bed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa98006a8736394300cdb9ef3331d2a23e1b3af2e2203f757c51ca9c700d26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e4
9117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7d24b5f1325cfb2d210cb9f311026ab370c91deecc5d0d951bcd4eda79816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9bb9d940d346f9895e24d98173668ea2d0e68fd3035e5b03475bbc936582958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7fa0d673f037c8edcc75ee30b1bbeea73da55d5459347dacc9251fffbb38be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4007e9ed33227938a5a3656fd74958aef599418fec4cd2048dfa7dd88924dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\
\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f
31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:08Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.225274 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b923e52e6284c8adca87e7d3d801a432a676e4389d7ab3c0cba584c5b7d96f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:08Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.263599 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d66d06e79ffb2ecd12acc0380e5e4839904147f9194e48277ca667df13a0cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:08Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.282549 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.282587 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.282595 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.282608 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.282618 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:08Z","lastTransitionTime":"2025-12-03T12:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.303263 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://535b7b05dbe9b47808f3d358a1f370b696a5d5bb4dd96b049ac2cbe3d18ea065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ab87b22153ba31fa79d2d210e695b274c2ddc878ad679c605aa8fb716534d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xggpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:08Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.345923 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"231ed674-1d19-4ee2-b29c-a1b7453ed531\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T12:56:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 12:56:01.437086 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 12:56:01.437205 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 12:56:01.438166 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972143202/tls.crt::/tmp/serving-cert-1972143202/tls.key\\\\\\\"\\\\nI1203 12:56:02.167657 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 12:56:02.174337 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 12:56:02.174360 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 12:56:02.174377 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 12:56:02.174382 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 12:56:02.178753 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 12:56:02.179168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 12:56:02.179184 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 12:56:02.179187 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 12:56:02.179190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 12:56:02.178775 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 12:56:02.181220 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:08Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.383301 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:08Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.385103 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.385143 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.385157 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.385174 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.385185 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:08Z","lastTransitionTime":"2025-12-03T12:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.423629 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:08Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.461278 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8lcrn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75e4c223-9952-4d1b-b83b-5c45cfc51432\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b2af5d43c7842f129b6521e5b3b1963f6060b3e0c552a771c7df6a2a9ef867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnr67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8lcrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:08Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.487974 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.488046 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.488075 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.488102 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.488123 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:08Z","lastTransitionTime":"2025-12-03T12:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.507136 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:08Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.543495 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:08Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.583308 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8lcrn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75e4c223-9952-4d1b-b83b-5c45cfc51432\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b2af5d43c7842f129b6521e5b3b1963f6060b3e0c552a771c7df6a2a9ef867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnr67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8lcrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:08Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.589902 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.589959 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.589971 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.589989 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.590011 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:08Z","lastTransitionTime":"2025-12-03T12:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.622434 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fszqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76bcf5c-6a62-4360-826d-ecac337c88ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d94370a14317ade02cb805783e09ffcd8c6ff07f35e9124e302028e6dfa3c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhtrn\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fszqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:08Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.663236 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-px97g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97196b6d-75cc-4de4-8805-f9ce3fbd4230\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c28bf1aff27ba408262b5fc561e084ff3813ce95a7c14b57fea24d409eb33bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v4j5x\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-px97g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:08Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.691809 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.691855 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.691868 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.691886 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.691898 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:08Z","lastTransitionTime":"2025-12-03T12:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.710985 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3a45156-295b-4093-80e7-2059f81ddbd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9nf52\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:08Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.744989 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee22d4f3-8488-4497-a757-a6dd51e84539\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4fc247caf16eab425363e12219a76c40022f0daf0bc9c8b52f4eb8e3726f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86d62d991d6516ed42f7cad655018e1ebc85c8c1b100d624c1c2e5e8f616cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ba9819f4f1746f03a558617fc6ee14f5627c37c362d1198a303b6f1854c7bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79cba0b7394bf25be648502fc84a916d8f2fa0949edce20b5a748046ce9b6f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:08Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.785535 4986 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:08Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.794276 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.794353 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.794384 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.794404 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.794415 4986 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:08Z","lastTransitionTime":"2025-12-03T12:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.824308 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0d71a3ffdbe593936c83e92e92e286eaff092333bb3dd54cb5bc2825ca46fbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402aeaef03d8d3c684f0d8e7fe8a080bc02d761104e3c1095b3233aade2b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:08Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.866388 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rkgd9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a939a03f-0eec-49cb-9b23-40e359e427d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd633caedf88a1bec633cef61b8b30ae422b8c8d91a012395831a09d398f0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cd633caedf88a1bec633cef61b8b30ae422b8c8d91a012395831a09d398f0f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rkgd9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:08Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 
12:56:08.896844 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.896883 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.896894 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.896910 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.896921 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:08Z","lastTransitionTime":"2025-12-03T12:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.907963 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231ed674-1d19-4ee2-b29c-a1b7453ed531\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T12:56:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 12:56:01.437086 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 12:56:01.437205 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 12:56:01.438166 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1972143202/tls.crt::/tmp/serving-cert-1972143202/tls.key\\\\\\\"\\\\nI1203 12:56:02.167657 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 12:56:02.174337 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 12:56:02.174360 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 12:56:02.174377 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 12:56:02.174382 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 12:56:02.178753 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 12:56:02.179168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 12:56:02.179184 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 12:56:02.179187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 12:56:02.179190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 12:56:02.178775 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 12:56:02.181220 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:08Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.950392 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d51894dd-38f0-413a-adaf-22d196474bed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa98006a8736394300cdb9ef3331d2a23e1b3af2e2203f757c51ca9c700d26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731
4731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7d24b5f1325cfb2d210cb9f311026ab370c91deecc5d0d951bcd4eda79816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9bb9d940d346f9895e24d98173668ea2d0e68fd3035e5b03475bbc936582958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7fa0d673f037c8edcc75ee30b1bbeea73da55d5459347dacc9251fffbb38be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4007e9ed33227938a5a3656fd74958aef599418fec4cd2048dfa7dd88924dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:08Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.984978 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b923e52e6284c8adca87e7d3d801a432a676e4389d7ab3c0cba584c5b7d96f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:08Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.999193 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.999223 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.999265 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.999308 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:08 crc kubenswrapper[4986]: I1203 12:56:08.999328 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:08Z","lastTransitionTime":"2025-12-03T12:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.022469 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d66d06e79ffb2ecd12acc0380e5e4839904147f9194e48277ca667df13a0cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:09Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.063832 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://535b7b05dbe9b47808f3d358a1f370b696a5d5bb4dd96b049ac2cbe3d18ea065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ab87b22153ba31fa79d2d210e695b274c2ddc878ad679c605aa8fb716534d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xggpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-03T12:56:09Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.102299 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.102355 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.102382 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.102402 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.102412 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:09Z","lastTransitionTime":"2025-12-03T12:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.127953 4986 generic.go:334] "Generic (PLEG): container finished" podID="a939a03f-0eec-49cb-9b23-40e359e427d5" containerID="d1b9e8e1f9326660fe6d7c89af18c5f7dd1134e9f2e866da8ba9c795e13cb57d" exitCode=0 Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.128024 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rkgd9" event={"ID":"a939a03f-0eec-49cb-9b23-40e359e427d5","Type":"ContainerDied","Data":"d1b9e8e1f9326660fe6d7c89af18c5f7dd1134e9f2e866da8ba9c795e13cb57d"} Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.133095 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" event={"ID":"d3a45156-295b-4093-80e7-2059f81ddbd7","Type":"ContainerStarted","Data":"9dfc7520e7cb1e09eb10bf9615dbae11d7e73474f1b73eed481d18269b65aef9"} Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.139087 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:09Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.152764 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:09Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.181723 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8lcrn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75e4c223-9952-4d1b-b83b-5c45cfc51432\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b2af5d43c7842f129b6521e5b3b1963f6060b3e0c552a771c7df6a2a9ef867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnr67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8lcrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:09Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.204725 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.204759 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.204767 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.204781 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.204793 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:09Z","lastTransitionTime":"2025-12-03T12:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.221214 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fszqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76bcf5c-6a62-4360-826d-ecac337c88ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d94370a14317ade02cb805783e09ffcd8c6ff07f35e9124e302028e6dfa3c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhtrn\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fszqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:09Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.266460 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-px97g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97196b6d-75cc-4de4-8805-f9ce3fbd4230\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c28bf1aff27ba408262b5fc561e084ff3813ce95a7c14b57fea24d409eb33bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v4j5x\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-px97g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:09Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.308085 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.308121 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.308131 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.308149 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.308159 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:09Z","lastTransitionTime":"2025-12-03T12:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.311372 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3a45156-295b-4093-80e7-2059f81ddbd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9nf52\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:09Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.342989 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:09Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.384501 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0d71a3ffdbe593936c83e92e92e286eaff092333bb3dd54cb5bc2825ca46fbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402aeaef03d8d3c684f0d8e7fe8a080bc02d761104e3c1095b3233aade2b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:09Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.409862 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.409895 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.409907 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.409922 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.409933 4986 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:09Z","lastTransitionTime":"2025-12-03T12:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.426192 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rkgd9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a939a03f-0eec-49cb-9b23-40e359e427d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd633caedf88a1bec633cef61b8b30ae422b8c8d91a012395831a09d398f0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cd633caedf88a1bec633cef61b8b30ae422b8c8d91a012395831a09d398f0f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b9e8e1f9326660fe6d7c89af18c5f7dd1134e9f2e866da8ba9c795e13cb57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b9e8e1f9326660fe6d7c89af18c5f7dd1134e9f2e866da8ba9c795e13cb57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-rkgd9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:09Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.467698 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee22d4f3-8488-4497-a757-a6dd51e84539\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4fc247caf16eab425363e12219a76c40022f0daf0bc9c8b52f4eb8e3726f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86d62d991d6516ed42f7cad655018e1ebc85c8c1b100d624c1c2e5e8f616cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ba9819f4f1746f03a558617fc6ee14f5627c37c362d1198a303b6f1854c7bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"
cri-o://79cba0b7394bf25be648502fc84a916d8f2fa0949edce20b5a748046ce9b6f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:09Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.512339 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.512368 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.512376 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.512388 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.512397 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:09Z","lastTransitionTime":"2025-12-03T12:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.513734 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d51894dd-38f0-413a-adaf-22d196474bed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa98006a8736394300cdb9ef3331d2a23e1b3af2e2203f757c51ca9c700d26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731473
1ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7d24b5f1325cfb2d210cb9f311026ab370c91deecc5d0d951bcd4eda79816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9bb9d940d346f9895e24d98173668ea2d0e68fd3035e5b03475bbc936582958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7fa0d673f037c8edcc75ee30b1bbeea73da55d5459347dacc9251fffbb38be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4007e9ed33227938a5a3656fd74958aef599418fec4cd2048dfa7dd88924dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"ce
rt-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"image\
\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:09Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.543904 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b923e52e6284c8adca87e7d3d801a432a676e4389d7ab3c0cba584c5b7d96f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:09Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.584822 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d66d06e79ffb2ecd12acc0380e5e4839904147f9194e48277ca667df13a0cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:09Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.615398 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.615443 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.615454 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.615469 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.615478 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:09Z","lastTransitionTime":"2025-12-03T12:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.618751 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:56:09 crc kubenswrapper[4986]: E1203 12:56:09.618914 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:56:17.618893155 +0000 UTC m=+37.085324346 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.618951 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.619000 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:56:09 crc kubenswrapper[4986]: E1203 12:56:09.619018 4986 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 12:56:09 crc kubenswrapper[4986]: E1203 12:56:09.619054 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 12:56:17.619045209 +0000 UTC m=+37.085476400 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 12:56:09 crc kubenswrapper[4986]: E1203 12:56:09.619080 4986 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 12:56:09 crc kubenswrapper[4986]: E1203 12:56:09.619113 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 12:56:17.619106201 +0000 UTC m=+37.085537382 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.623808 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://535b7b05dbe9b47808f3d358a1f370b696a5d5bb4dd96b049ac2cbe3d18ea065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready
\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ab87b22153ba31fa79d2d210e695b274c2ddc878ad679c605aa8fb716534d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xggpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-03T12:56:09Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.667035 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231ed674-1d19-4ee2-b29c-a1b7453ed531\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T12:56:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 12:56:01.437086 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 12:56:01.437205 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 12:56:01.438166 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1972143202/tls.crt::/tmp/serving-cert-1972143202/tls.key\\\\\\\"\\\\nI1203 12:56:02.167657 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 12:56:02.174337 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 12:56:02.174360 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 12:56:02.174377 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 12:56:02.174382 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 12:56:02.178753 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 12:56:02.179168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 12:56:02.179184 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 12:56:02.179187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 12:56:02.179190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 12:56:02.178775 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 12:56:02.181220 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:09Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.718219 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.718264 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.718275 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.718311 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.718326 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:09Z","lastTransitionTime":"2025-12-03T12:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.720040 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.720252 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:56:09 crc kubenswrapper[4986]: E1203 12:56:09.720298 4986 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 12:56:09 crc kubenswrapper[4986]: E1203 12:56:09.720799 4986 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 12:56:09 crc kubenswrapper[4986]: E1203 12:56:09.720964 4986 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 12:56:09 crc kubenswrapper[4986]: E1203 12:56:09.721153 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-03 12:56:17.721124252 +0000 UTC m=+37.187555483 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 12:56:09 crc kubenswrapper[4986]: E1203 12:56:09.720382 4986 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 12:56:09 crc kubenswrapper[4986]: E1203 12:56:09.721457 4986 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 12:56:09 crc kubenswrapper[4986]: E1203 12:56:09.721568 4986 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 12:56:09 crc kubenswrapper[4986]: E1203 12:56:09.721737 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-03 12:56:17.721719518 +0000 UTC m=+37.188150749 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.821510 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.821553 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.821566 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.821585 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.821608 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:09Z","lastTransitionTime":"2025-12-03T12:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.924081 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.924469 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.924541 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.924846 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.925013 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:09Z","lastTransitionTime":"2025-12-03T12:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.943126 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.943172 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:56:09 crc kubenswrapper[4986]: E1203 12:56:09.943250 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:56:09 crc kubenswrapper[4986]: I1203 12:56:09.943303 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:56:09 crc kubenswrapper[4986]: E1203 12:56:09.943421 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:56:09 crc kubenswrapper[4986]: E1203 12:56:09.943477 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:56:10 crc kubenswrapper[4986]: I1203 12:56:10.027519 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:10 crc kubenswrapper[4986]: I1203 12:56:10.027575 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:10 crc kubenswrapper[4986]: I1203 12:56:10.027587 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:10 crc kubenswrapper[4986]: I1203 12:56:10.027606 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:10 crc kubenswrapper[4986]: I1203 12:56:10.027619 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:10Z","lastTransitionTime":"2025-12-03T12:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:10 crc kubenswrapper[4986]: I1203 12:56:10.130240 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:10 crc kubenswrapper[4986]: I1203 12:56:10.130277 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:10 crc kubenswrapper[4986]: I1203 12:56:10.130319 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:10 crc kubenswrapper[4986]: I1203 12:56:10.130337 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:10 crc kubenswrapper[4986]: I1203 12:56:10.130349 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:10Z","lastTransitionTime":"2025-12-03T12:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:10 crc kubenswrapper[4986]: I1203 12:56:10.233262 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:10 crc kubenswrapper[4986]: I1203 12:56:10.233352 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:10 crc kubenswrapper[4986]: I1203 12:56:10.233367 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:10 crc kubenswrapper[4986]: I1203 12:56:10.233401 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:10 crc kubenswrapper[4986]: I1203 12:56:10.233418 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:10Z","lastTransitionTime":"2025-12-03T12:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:10 crc kubenswrapper[4986]: I1203 12:56:10.336272 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:10 crc kubenswrapper[4986]: I1203 12:56:10.336393 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:10 crc kubenswrapper[4986]: I1203 12:56:10.336413 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:10 crc kubenswrapper[4986]: I1203 12:56:10.336438 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:10 crc kubenswrapper[4986]: I1203 12:56:10.336454 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:10Z","lastTransitionTime":"2025-12-03T12:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:10 crc kubenswrapper[4986]: I1203 12:56:10.438997 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:10 crc kubenswrapper[4986]: I1203 12:56:10.439059 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:10 crc kubenswrapper[4986]: I1203 12:56:10.439077 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:10 crc kubenswrapper[4986]: I1203 12:56:10.439101 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:10 crc kubenswrapper[4986]: I1203 12:56:10.439120 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:10Z","lastTransitionTime":"2025-12-03T12:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:10 crc kubenswrapper[4986]: I1203 12:56:10.541556 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:10 crc kubenswrapper[4986]: I1203 12:56:10.541627 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:10 crc kubenswrapper[4986]: I1203 12:56:10.541640 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:10 crc kubenswrapper[4986]: I1203 12:56:10.541656 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:10 crc kubenswrapper[4986]: I1203 12:56:10.541669 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:10Z","lastTransitionTime":"2025-12-03T12:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:10 crc kubenswrapper[4986]: I1203 12:56:10.644240 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:10 crc kubenswrapper[4986]: I1203 12:56:10.644338 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:10 crc kubenswrapper[4986]: I1203 12:56:10.644357 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:10 crc kubenswrapper[4986]: I1203 12:56:10.644376 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:10 crc kubenswrapper[4986]: I1203 12:56:10.644387 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:10Z","lastTransitionTime":"2025-12-03T12:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:10 crc kubenswrapper[4986]: I1203 12:56:10.746485 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:10 crc kubenswrapper[4986]: I1203 12:56:10.746533 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:10 crc kubenswrapper[4986]: I1203 12:56:10.746543 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:10 crc kubenswrapper[4986]: I1203 12:56:10.746555 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:10 crc kubenswrapper[4986]: I1203 12:56:10.746564 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:10Z","lastTransitionTime":"2025-12-03T12:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:10 crc kubenswrapper[4986]: I1203 12:56:10.759201 4986 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 03 12:56:10 crc kubenswrapper[4986]: I1203 12:56:10.848507 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:10 crc kubenswrapper[4986]: I1203 12:56:10.848553 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:10 crc kubenswrapper[4986]: I1203 12:56:10.848563 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:10 crc kubenswrapper[4986]: I1203 12:56:10.848576 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:10 crc kubenswrapper[4986]: I1203 12:56:10.848584 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:10Z","lastTransitionTime":"2025-12-03T12:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:10 crc kubenswrapper[4986]: I1203 12:56:10.951429 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:10 crc kubenswrapper[4986]: I1203 12:56:10.951843 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:10 crc kubenswrapper[4986]: I1203 12:56:10.951854 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:10 crc kubenswrapper[4986]: I1203 12:56:10.951868 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:10 crc kubenswrapper[4986]: I1203 12:56:10.951878 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:10Z","lastTransitionTime":"2025-12-03T12:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:10 crc kubenswrapper[4986]: I1203 12:56:10.955948 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8lcrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75e4c223-9952-4d1b-b83b-5c45cfc51432\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b2af5d43c7842f129b6521e5b3b1963f6060b3e0c552a771c7df6a2a9ef867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnr67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8lcrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:10Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:10 crc kubenswrapper[4986]: I1203 12:56:10.971186 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:10Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:10 crc kubenswrapper[4986]: I1203 12:56:10.988974 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:10Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.014061 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3a45156-295b-4093-80e7-2059f81ddbd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9nf52\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:11Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.026415 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fszqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76bcf5c-6a62-4360-826d-ecac337c88ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d94370a14317ade02cb805783e09ffcd8c6ff07f35e9124e302028e6dfa3c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhtrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fszqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:11Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.038949 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-px97g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97196b6d-75cc-4de4-8805-f9ce3fbd4230\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c28bf1aff27ba408262b5fc561e084ff3813ce95a7c14b57fea24d409eb33bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v4j5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-px97g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:11Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.053224 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rkgd9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a939a03f-0eec-49cb-9b23-40e359e427d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd633caedf88a1bec633cef61b8b30ae422b8c8d91a012395831a09d398f0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cd633caedf88a1bec633cef61b8b30ae422b8c8d91a012395831a09d398f0f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b9e8e1f9326660fe6d7c89af18c5f7dd1134e9f2e866da8ba9c795e13cb57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b9e8e1f9326660fe6d7c89af18c5f7dd1134e9f2e866da8ba9c795e13cb57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-rkgd9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:11Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.054740 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.054780 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.054793 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.054809 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.054820 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:11Z","lastTransitionTime":"2025-12-03T12:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.065719 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee22d4f3-8488-4497-a757-a6dd51e84539\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4fc247caf16eab425363e12219a76c40022f0daf0bc9c8b52f4eb8e3726f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86d62d991d
6516ed42f7cad655018e1ebc85c8c1b100d624c1c2e5e8f616cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ba9819f4f1746f03a558617fc6ee14f5627c37c362d1198a303b6f1854c7bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79cba0b7394bf25be648502fc84a916d8f2fa0949edce20b5a748046ce9b6f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:11Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.080558 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:11Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.098952 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0d71a3ffdbe593936c83e92e92e286eaff092333bb3dd54cb5bc2825ca46fbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402aeaef03d8d3c684f0d8e7fe8a080bc02d761104e3c1095b3233aade2b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:11Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.116505 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d66d06e79ffb2ecd12acc0380e5e4839904147f9194e48277ca667df13a0cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T12:56:11Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.133447 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://535b7b05dbe9b47808f3d358a1f370b696a5d5bb4dd96b049ac2cbe3d18ea065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ab87b22153ba31fa79d2d210e695b274c2ddc878ad679c605aa8fb716534d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xggpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:11Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.141660 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rkgd9" 
event={"ID":"a939a03f-0eec-49cb-9b23-40e359e427d5","Type":"ContainerStarted","Data":"55ef54ee9f22072e2b437f38f6d06f24e6f1da02555505bc65c488b5818c87a0"} Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.156776 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.156851 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.156865 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.156888 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.156905 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:11Z","lastTransitionTime":"2025-12-03T12:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.165735 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231ed674-1d19-4ee2-b29c-a1b7453ed531\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T12:56:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 12:56:01.437086 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 12:56:01.437205 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 12:56:01.438166 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1972143202/tls.crt::/tmp/serving-cert-1972143202/tls.key\\\\\\\"\\\\nI1203 12:56:02.167657 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 12:56:02.174337 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 12:56:02.174360 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 12:56:02.174377 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 12:56:02.174382 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 12:56:02.178753 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 12:56:02.179168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 12:56:02.179184 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 12:56:02.179187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 12:56:02.179190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 12:56:02.178775 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 12:56:02.181220 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:11Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.190252 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d51894dd-38f0-413a-adaf-22d196474bed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa98006a8736394300cdb9ef3331d2a23e1b3af2e2203f757c51ca9c700d26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731
4731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7d24b5f1325cfb2d210cb9f311026ab370c91deecc5d0d951bcd4eda79816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9bb9d940d346f9895e24d98173668ea2d0e68fd3035e5b03475bbc936582958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7fa0d673f037c8edcc75ee30b1bbeea73da55d5459347dacc9251fffbb38be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4007e9ed33227938a5a3656fd74958aef599418fec4cd2048dfa7dd88924dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:11Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.205442 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b923e52e6284c8adca87e7d3d801a432a676e4389d7ab3c0cba584c5b7d96f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:11Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.216639 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:11Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.225576 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8lcrn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75e4c223-9952-4d1b-b83b-5c45cfc51432\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b2af5d43c7842f129b6521e5b3b1963f6060b3e0c552a771c7df6a2a9ef867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnr67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8lcrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:11Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.236059 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:11Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.248018 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-px97g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97196b6d-75cc-4de4-8805-f9ce3fbd4230\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c28bf1aff27ba408262b5fc561e084ff3813ce95a7c14b57fea24d409eb33bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v4j5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-px97g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:11Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.259096 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:11 crc 
kubenswrapper[4986]: I1203 12:56:11.259128 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.259137 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.259150 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.259160 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:11Z","lastTransitionTime":"2025-12-03T12:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.265979 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3a45156-295b-4093-80e7-2059f81ddbd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\
\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9nf52\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:11Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.277377 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fszqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76bcf5c-6a62-4360-826d-ecac337c88ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d94370a14317ade02cb805783e09ffcd8c6ff07f35e9124e302028e6dfa3c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhtrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fszqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:11Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.291475 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0d71a3ffdbe593936c83e92e92e286eaff092333bb3dd54cb5bc2825ca46fbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402aeaef03d8d3c684f0d8e7fe8a080bc02d761104e3c1095b3233aade2b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:11Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.311233 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rkgd9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a939a03f-0eec-49cb-9b23-40e359e427d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd633caedf88a1bec633cef61b8b30ae422b8c8d91a012395831a09d398f0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cd633caedf88a1bec633cef61b8b30ae422b8c8d91a012395831a09d398f0f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b9e8e1f9326660fe6d7c89af18c5f7dd1134e9f2e866da8ba9c795e13cb57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b9e8e1f9326660fe6d7c89af18c5f7dd1134e9f2e866da8ba9c795e13cb57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ef54ee9f22072e2b437f38f6d06f24e6f1da02555505bc65c488b5818c87a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x
5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rkgd9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:11Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.326460 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee22d4f3-8488-4497-a757-a6dd51e84539\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4fc247caf16eab425363e12219a76c40022f0daf0bc9c8b52f4eb8e3726f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86d62d991d6516ed42f7cad655018e1ebc85c8c1b100d624c1c2e5e8f616cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ba9819f4f1746f03a558617fc6ee14f5627c37c362d1198a303b6f1854c7bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMoun
ts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79cba0b7394bf25be648502fc84a916d8f2fa0949edce20b5a748046ce9b6f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:11Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.342001 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:11Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.355063 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b923e52e6284c8adca87e7d3d801a432a676e4389d7ab3c0cba584c5b7d96f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:11Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.362194 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.362236 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.362255 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.362298 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.362314 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:11Z","lastTransitionTime":"2025-12-03T12:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.367836 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d66d06e79ffb2ecd12acc0380e5e4839904147f9194e48277ca667df13a0cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:11Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.380211 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://535b7b05dbe9b47808f3d358a1f370b696a5d5bb4dd96b049ac2cbe3d18ea065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ab87b22153ba31fa79d2d210e695b274c2ddc878ad679c605aa8fb716534d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xggpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-03T12:56:11Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.396679 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231ed674-1d19-4ee2-b29c-a1b7453ed531\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T12:56:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 12:56:01.437086 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 12:56:01.437205 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 12:56:01.438166 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1972143202/tls.crt::/tmp/serving-cert-1972143202/tls.key\\\\\\\"\\\\nI1203 12:56:02.167657 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 12:56:02.174337 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 12:56:02.174360 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 12:56:02.174377 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 12:56:02.174382 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 12:56:02.178753 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 12:56:02.179168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 12:56:02.179184 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 12:56:02.179187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 12:56:02.179190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 12:56:02.178775 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 12:56:02.181220 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:11Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.425219 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d51894dd-38f0-413a-adaf-22d196474bed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa98006a8736394300cdb9ef3331d2a23e1b3af2e2203f757c51ca9c700d26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731
4731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7d24b5f1325cfb2d210cb9f311026ab370c91deecc5d0d951bcd4eda79816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9bb9d940d346f9895e24d98173668ea2d0e68fd3035e5b03475bbc936582958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7fa0d673f037c8edcc75ee30b1bbeea73da55d5459347dacc9251fffbb38be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4007e9ed33227938a5a3656fd74958aef599418fec4cd2048dfa7dd88924dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:11Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.465114 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.465151 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.465160 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.465172 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.465181 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:11Z","lastTransitionTime":"2025-12-03T12:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.568161 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.568208 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.568228 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.568255 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.568275 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:11Z","lastTransitionTime":"2025-12-03T12:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.671773 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.671815 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.671828 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.671845 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.671858 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:11Z","lastTransitionTime":"2025-12-03T12:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.775042 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.775473 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.775693 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.775933 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.776146 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:11Z","lastTransitionTime":"2025-12-03T12:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.860487 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.860771 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.860869 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.860951 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.861032 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:11Z","lastTransitionTime":"2025-12-03T12:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:11 crc kubenswrapper[4986]: E1203 12:56:11.874383 4986 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b28ccd4-3e47-4e26-a3b8-f87de96f7586\\\",\\\"systemUUID\\\":\\\"52e71ce5-258d-4951-aa26-5d4aac0725ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:11Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.878493 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.878521 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.878528 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.878541 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.878549 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:11Z","lastTransitionTime":"2025-12-03T12:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:11 crc kubenswrapper[4986]: E1203 12:56:11.897619 4986 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b28ccd4-3e47-4e26-a3b8-f87de96f7586\\\",\\\"systemUUID\\\":\\\"52e71ce5-258d-4951-aa26-5d4aac0725ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:11Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.902439 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.902482 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.902498 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.902517 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.902531 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:11Z","lastTransitionTime":"2025-12-03T12:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:11 crc kubenswrapper[4986]: E1203 12:56:11.920886 4986 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b28ccd4-3e47-4e26-a3b8-f87de96f7586\\\",\\\"systemUUID\\\":\\\"52e71ce5-258d-4951-aa26-5d4aac0725ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:11Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.925069 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.925112 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.925123 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.925140 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.925149 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:11Z","lastTransitionTime":"2025-12-03T12:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:11 crc kubenswrapper[4986]: E1203 12:56:11.937531 4986 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b28ccd4-3e47-4e26-a3b8-f87de96f7586\\\",\\\"systemUUID\\\":\\\"52e71ce5-258d-4951-aa26-5d4aac0725ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:11Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.940596 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.940658 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.940674 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.940699 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.940713 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:11Z","lastTransitionTime":"2025-12-03T12:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.942894 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.942943 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:56:11 crc kubenswrapper[4986]: E1203 12:56:11.943027 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.943085 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:56:11 crc kubenswrapper[4986]: E1203 12:56:11.943099 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:56:11 crc kubenswrapper[4986]: E1203 12:56:11.943270 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:56:11 crc kubenswrapper[4986]: E1203 12:56:11.955462 4986 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b28ccd4-3e47-4e26-a3b8-f87de96f7586\\\",\\\"systemUUID\\\":\\\"52e71ce5-258d-4951-aa26-5d4aac0725ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:11Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:11 crc kubenswrapper[4986]: E1203 12:56:11.955577 4986 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.957159 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.957218 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.957234 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.957252 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:11 crc kubenswrapper[4986]: I1203 12:56:11.957265 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:11Z","lastTransitionTime":"2025-12-03T12:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:12 crc kubenswrapper[4986]: I1203 12:56:12.059801 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:12 crc kubenswrapper[4986]: I1203 12:56:12.059837 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:12 crc kubenswrapper[4986]: I1203 12:56:12.059844 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:12 crc kubenswrapper[4986]: I1203 12:56:12.059861 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:12 crc kubenswrapper[4986]: I1203 12:56:12.059875 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:12Z","lastTransitionTime":"2025-12-03T12:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:12 crc kubenswrapper[4986]: I1203 12:56:12.149103 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" event={"ID":"d3a45156-295b-4093-80e7-2059f81ddbd7","Type":"ContainerStarted","Data":"7c1ecd3792204c942f231154736fda8a831651f3cc15fd115a2ff56fc88fc0ca"} Dec 03 12:56:12 crc kubenswrapper[4986]: I1203 12:56:12.162526 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:12 crc kubenswrapper[4986]: I1203 12:56:12.162574 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:12 crc kubenswrapper[4986]: I1203 12:56:12.162585 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:12 crc kubenswrapper[4986]: I1203 12:56:12.162600 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:12 crc kubenswrapper[4986]: I1203 12:56:12.162611 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:12Z","lastTransitionTime":"2025-12-03T12:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:12 crc kubenswrapper[4986]: I1203 12:56:12.264946 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:12 crc kubenswrapper[4986]: I1203 12:56:12.264998 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:12 crc kubenswrapper[4986]: I1203 12:56:12.265009 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:12 crc kubenswrapper[4986]: I1203 12:56:12.265033 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:12 crc kubenswrapper[4986]: I1203 12:56:12.265045 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:12Z","lastTransitionTime":"2025-12-03T12:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:12 crc kubenswrapper[4986]: I1203 12:56:12.367432 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:12 crc kubenswrapper[4986]: I1203 12:56:12.367469 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:12 crc kubenswrapper[4986]: I1203 12:56:12.367479 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:12 crc kubenswrapper[4986]: I1203 12:56:12.367501 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:12 crc kubenswrapper[4986]: I1203 12:56:12.367511 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:12Z","lastTransitionTime":"2025-12-03T12:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:12 crc kubenswrapper[4986]: I1203 12:56:12.469611 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:12 crc kubenswrapper[4986]: I1203 12:56:12.469648 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:12 crc kubenswrapper[4986]: I1203 12:56:12.469660 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:12 crc kubenswrapper[4986]: I1203 12:56:12.469675 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:12 crc kubenswrapper[4986]: I1203 12:56:12.469687 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:12Z","lastTransitionTime":"2025-12-03T12:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:12 crc kubenswrapper[4986]: I1203 12:56:12.572143 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:12 crc kubenswrapper[4986]: I1203 12:56:12.572183 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:12 crc kubenswrapper[4986]: I1203 12:56:12.572193 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:12 crc kubenswrapper[4986]: I1203 12:56:12.572211 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:12 crc kubenswrapper[4986]: I1203 12:56:12.572222 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:12Z","lastTransitionTime":"2025-12-03T12:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:12 crc kubenswrapper[4986]: I1203 12:56:12.674956 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:12 crc kubenswrapper[4986]: I1203 12:56:12.675047 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:12 crc kubenswrapper[4986]: I1203 12:56:12.675075 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:12 crc kubenswrapper[4986]: I1203 12:56:12.675105 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:12 crc kubenswrapper[4986]: I1203 12:56:12.675130 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:12Z","lastTransitionTime":"2025-12-03T12:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:12 crc kubenswrapper[4986]: I1203 12:56:12.779027 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:12 crc kubenswrapper[4986]: I1203 12:56:12.779068 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:12 crc kubenswrapper[4986]: I1203 12:56:12.779077 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:12 crc kubenswrapper[4986]: I1203 12:56:12.779091 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:12 crc kubenswrapper[4986]: I1203 12:56:12.779100 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:12Z","lastTransitionTime":"2025-12-03T12:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:12 crc kubenswrapper[4986]: I1203 12:56:12.880995 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:12 crc kubenswrapper[4986]: I1203 12:56:12.881032 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:12 crc kubenswrapper[4986]: I1203 12:56:12.881044 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:12 crc kubenswrapper[4986]: I1203 12:56:12.881058 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:12 crc kubenswrapper[4986]: I1203 12:56:12.881067 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:12Z","lastTransitionTime":"2025-12-03T12:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:12 crc kubenswrapper[4986]: I1203 12:56:12.983664 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:12 crc kubenswrapper[4986]: I1203 12:56:12.983764 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:12 crc kubenswrapper[4986]: I1203 12:56:12.983788 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:12 crc kubenswrapper[4986]: I1203 12:56:12.983806 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:12 crc kubenswrapper[4986]: I1203 12:56:12.983819 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:12Z","lastTransitionTime":"2025-12-03T12:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.085970 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.086034 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.086049 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.086066 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.086079 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:13Z","lastTransitionTime":"2025-12-03T12:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.156351 4986 generic.go:334] "Generic (PLEG): container finished" podID="a939a03f-0eec-49cb-9b23-40e359e427d5" containerID="55ef54ee9f22072e2b437f38f6d06f24e6f1da02555505bc65c488b5818c87a0" exitCode=0 Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.156429 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rkgd9" event={"ID":"a939a03f-0eec-49cb-9b23-40e359e427d5","Type":"ContainerDied","Data":"55ef54ee9f22072e2b437f38f6d06f24e6f1da02555505bc65c488b5818c87a0"} Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.156896 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.156927 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.171503 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d66d06e79ffb2ecd12acc0380e5e4839904147f9194e48277ca667df13a0cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T12:56:13Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.182187 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.182272 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.188190 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://535b7b05dbe9b47808f3d358a1f370b696a5d5bb4dd96b049ac2cbe3d18ea065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf
3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ab87b22153ba31fa79d2d210e695b274c2ddc878ad679c605aa8fb716534d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xggpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:13Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.188825 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.188858 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.188869 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.188886 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.188897 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:13Z","lastTransitionTime":"2025-12-03T12:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.202210 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231ed674-1d19-4ee2-b29c-a1b7453ed531\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T12:56:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 12:56:01.437086 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 12:56:01.437205 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 12:56:01.438166 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1972143202/tls.crt::/tmp/serving-cert-1972143202/tls.key\\\\\\\"\\\\nI1203 12:56:02.167657 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 12:56:02.174337 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 12:56:02.174360 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 12:56:02.174377 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 12:56:02.174382 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 12:56:02.178753 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 12:56:02.179168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 12:56:02.179184 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 12:56:02.179187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 12:56:02.179190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 12:56:02.178775 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 12:56:02.181220 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:13Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.219939 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d51894dd-38f0-413a-adaf-22d196474bed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa98006a8736394300cdb9ef3331d2a23e1b3af2e2203f757c51ca9c700d26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731
4731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7d24b5f1325cfb2d210cb9f311026ab370c91deecc5d0d951bcd4eda79816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9bb9d940d346f9895e24d98173668ea2d0e68fd3035e5b03475bbc936582958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7fa0d673f037c8edcc75ee30b1bbeea73da55d5459347dacc9251fffbb38be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4007e9ed33227938a5a3656fd74958aef599418fec4cd2048dfa7dd88924dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:13Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.232744 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b923e52e6284c8adca87e7d3d801a432a676e4389d7ab3c0cba584c5b7d96f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:13Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.245044 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8lcrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75e4c223-9952-4d1b-b83b-5c45cfc51432\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b2af5d43c7842f129b6521e5b3b1963f6060b3e0c552a771c7df6a2a9ef867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnr67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8lcrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:13Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.257922 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:13Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.269479 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:13Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.291854 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3a45156-295b-4093-80e7-2059f81ddbd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c79f84fc019d109bac446dd058b4bafb0b516a13fae47b1924e4f7690658ba8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be6405facfa7726cbe56c48184cba0123774e8e34d3ceb9e18daed6f35560c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"stat
e\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d77d078572dbe7f11e91ece0d25e76bd17004167dabce267a2b568e798aae158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2843fb3fcbadb926b104a9f7c4084468b36293aebed6c692c29419a0457b62a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d94eca13821749d7e15a350c411a831d2887f4211ed092a4fb57e26e49fcd9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ff8c1f1602e57602cae3f46dd3d2bf8d7abad6bd0952f4347152678e8606d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c1ecd3792204c942f231154736fda8a831651f3cc15fd115a2ff56fc88fc0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"
},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dfc7520e7cb1e09eb10bf9615dbae11d7e73474f1b73eed481d18269b65
aef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mount
Path\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9nf52\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:13Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.292122 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.292136 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.292145 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.292156 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.292165 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:13Z","lastTransitionTime":"2025-12-03T12:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.302570 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fszqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76bcf5c-6a62-4360-826d-ecac337c88ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d94370a14317ade02cb805783e09ffcd8c6ff07f35e9124e302028e6dfa3c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhtrn\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fszqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:13Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.317072 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-px97g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97196b6d-75cc-4de4-8805-f9ce3fbd4230\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c28bf1aff27ba408262b5fc561e084ff3813ce95a7c14b57fea24d409eb33bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v4j5x\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-px97g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:13Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.332961 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rkgd9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a939a03f-0eec-49cb-9b23-40e359e427d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd633caedf88a1bec633cef61b8b30ae422b
8c8d91a012395831a09d398f0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cd633caedf88a1bec633cef61b8b30ae422b8c8d91a012395831a09d398f0f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b9e8e1f9326660fe6d7c89af18c5f7dd1134e9f2e866da8ba9c795e13cb57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b9e8e1f9326660fe6d7c89af18c5f7dd1134e9f2e866da8ba9c795e13cb57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-1
2-03T12:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ef54ee9f22072e2b437f38f6d06f24e6f1da02555505bc65c488b5818c87a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ef54ee9f22072e2b437f38f6d06f24e6f1da02555505bc65c488b5818c87a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rkgd9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:13Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.346768 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee22d4f3-8488-4497-a757-a6dd51e84539\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4fc247caf16eab425363e12219a76c40022f0daf0bc9c8b52f4eb8e3726f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86d62d991d6516ed42f7cad655018e1ebc85c8c1b100d624c1c2e5e8f616cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ba9819f4f1746f03a558617fc6ee14f5627c37c362d1198a303b6f1854c7bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79cba0b7394bf25be648502fc84a916d8f2fa0949edce20b5a748046ce9b6f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:13Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.359171 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:13Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.373296 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0d71a3ffdbe593936c83e92e92e286eaff092333bb3dd54cb5bc2825ca46fbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402aeaef03d8d3c684f0d8e7fe8a080bc02d761104e3c1095b3233aade2b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:13Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.386795 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee22d4f3-8488-4497-a757-a6dd51e84539\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4fc247caf16eab425363e12219a76c40022f0daf0bc9c8b52f4eb8e3726f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86d62d991d6516ed42f7cad655018e1ebc85c8c1b100d624c1c2e5e8f616cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ba9819f4f1746f03a558617fc6ee14f5627c37c362d1198a303b6f1854c7bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79cba0b7394bf25be648502fc84a916d8f2fa0949edce20b5a748046ce9b6f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:13Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.394357 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.394407 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.394417 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.394434 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.394446 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:13Z","lastTransitionTime":"2025-12-03T12:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.398336 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:13Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.410989 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0d71a3ffdbe593936c83e92e92e286eaff092333bb3dd54cb5bc2825ca46fbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402aeaef03d8d3c684f0d8e7fe8a080bc02d761104e3c1095b3233aade2b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:13Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.427257 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rkgd9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a939a03f-0eec-49cb-9b23-40e359e427d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd633caedf88a1bec633cef61b8b30ae422b8c8d91a012395831a09d398f0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cd633caedf88a1bec633cef61b8b30ae422b8c8d91a012395831a09d398f0f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b9e8e1f9326660fe6d7c89af18c5f7dd1134e9f2e866da8ba9c795e13cb57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b9e8e1f9326660fe6d7c89af18c5f7dd1134e9f2e866da8ba9c795e13cb57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ef54ee9f22072e2b437f38f6d06f24e6f1da02555505bc65c488b5818c87a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ef54ee9f22072e2b437f38f6d06f24e6f1da02555505bc65c488b5818c87a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rkgd9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:13Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.441822 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"231ed674-1d19-4ee2-b29c-a1b7453ed531\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T12:56:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 12:56:01.437086 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 12:56:01.437205 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 12:56:01.438166 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972143202/tls.crt::/tmp/serving-cert-1972143202/tls.key\\\\\\\"\\\\nI1203 12:56:02.167657 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 12:56:02.174337 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 12:56:02.174360 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 12:56:02.174377 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 12:56:02.174382 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 12:56:02.178753 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 12:56:02.179168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 12:56:02.179184 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 12:56:02.179187 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 12:56:02.179190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 12:56:02.178775 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 12:56:02.181220 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:13Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.458576 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d51894dd-38f0-413a-adaf-22d196474bed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa98006a8736394300cdb9ef3331d2a23e1b3af2e2203f757c51ca9c700d26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7d24b5f1325cfb2d210cb9f311026ab370c91deecc5d0d951bcd4eda79816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9bb9d940d346f9895e24d98173668ea2d0e68fd3035e5b03475bbc936582958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7fa0d673f037c8edcc75ee30b1bbeea73da55d5459347dacc9251fffbb38be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4007e9ed33227938a5a3656fd74958aef599418fec4cd2048dfa7dd88924dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:13Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.470356 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b923e52e6284c8adca87e7d3d801a432a676e4389d7ab3c0cba584c5b7d96f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:13Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.480073 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d66d06e79ffb2ecd12acc0380e5e4839904147f9194e48277ca667df13a0cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T12:56:13Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.491518 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://535b7b05dbe9b47808f3d358a1f370b696a5d5bb4dd96b049ac2cbe3d18ea065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ab87b22153ba31fa79d2d210e695b274c2ddc878ad679c605aa8fb716534d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xggpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:13Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.496307 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 
12:56:13.496348 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.496359 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.496376 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.496387 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:13Z","lastTransitionTime":"2025-12-03T12:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.503691 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:13Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.514743 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:13Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.524514 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8lcrn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75e4c223-9952-4d1b-b83b-5c45cfc51432\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b2af5d43c7842f129b6521e5b3b1963f6060b3e0c552a771c7df6a2a9ef867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnr67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8lcrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:13Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.533696 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fszqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76bcf5c-6a62-4360-826d-ecac337c88ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d94370a14317ade02cb805783e09ffcd8c6ff07f35e9124e302028e6dfa3c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhtrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fszqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:13Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.545550 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-px97g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97196b6d-75cc-4de4-8805-f9ce3fbd4230\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c28bf1aff27ba408262b5fc561e084ff3813ce95a7c14b57fea24d409eb33bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v4j5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-px97g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:13Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.562064 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3a45156-295b-4093-80e7-2059f81ddbd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c79f84fc019d109bac446dd058b4bafb0b516a13fae47b1924e4f7690658ba8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be6405facfa7726cbe56c48184cba0123774e8e34d3ceb9e18daed6f35560c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d77d078572dbe7f11e91ece0d25e76bd17004167dabce267a2b568e798aae158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2843fb3fcbadb926b104a9f7c4084468b36293aebed6c692c29419a0457b62a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d94eca13821749d7e15a350c411a831d2887f4211ed092a4fb57e26e49fcd9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ff8c1f1602e57602cae3f46dd3d2bf8d7abad6bd0952f4347152678e8606d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c1ecd3792204c942f231154736fda8a831651f3cc15fd115a2ff56fc88fc0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dfc7520e7cb1e09eb10bf9615dbae11d7e73474f1b73eed481d18269b65aef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9nf52\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:13Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.598984 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.599062 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.599074 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.599150 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.599169 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:13Z","lastTransitionTime":"2025-12-03T12:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.701600 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.701647 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.701657 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.701672 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.701683 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:13Z","lastTransitionTime":"2025-12-03T12:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.804253 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.804325 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.804339 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.804356 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.804368 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:13Z","lastTransitionTime":"2025-12-03T12:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.908909 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.909173 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.909273 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.909410 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.909508 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:13Z","lastTransitionTime":"2025-12-03T12:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.943349 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.943389 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:56:13 crc kubenswrapper[4986]: I1203 12:56:13.943365 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:56:13 crc kubenswrapper[4986]: E1203 12:56:13.943475 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:56:13 crc kubenswrapper[4986]: E1203 12:56:13.944166 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:56:13 crc kubenswrapper[4986]: E1203 12:56:13.944316 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.011818 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.011846 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.011854 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.011866 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.011874 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:14Z","lastTransitionTime":"2025-12-03T12:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.114436 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.114480 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.114492 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.114508 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.114519 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:14Z","lastTransitionTime":"2025-12-03T12:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.163411 4986 generic.go:334] "Generic (PLEG): container finished" podID="a939a03f-0eec-49cb-9b23-40e359e427d5" containerID="35fecc2332fdab60719b864653301c5e8322b13ace401a0ff016ff3dc09cd3d7" exitCode=0 Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.163482 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rkgd9" event={"ID":"a939a03f-0eec-49cb-9b23-40e359e427d5","Type":"ContainerDied","Data":"35fecc2332fdab60719b864653301c5e8322b13ace401a0ff016ff3dc09cd3d7"} Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.163828 4986 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.180155 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:14Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.195787 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:14Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.206484 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8lcrn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75e4c223-9952-4d1b-b83b-5c45cfc51432\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b2af5d43c7842f129b6521e5b3b1963f6060b3e0c552a771c7df6a2a9ef867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnr67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8lcrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:14Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.216678 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.216718 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.216729 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.216745 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.216755 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:14Z","lastTransitionTime":"2025-12-03T12:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.218621 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fszqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76bcf5c-6a62-4360-826d-ecac337c88ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d94370a14317ade02cb805783e09ffcd8c6ff07f35e9124e302028e6dfa3c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhtrn\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fszqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:14Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.239561 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-px97g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97196b6d-75cc-4de4-8805-f9ce3fbd4230\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c28bf1aff27ba408262b5fc561e084ff3813ce95a7c14b57fea24d409eb33bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v4j5x\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-px97g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:14Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.260096 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3a45156-295b-4093-80e7-2059f81ddbd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c79f84fc019d109bac446dd058b4bafb0b516a13fae47b1924e4f7690658ba8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be6405facfa7726cbe56c48184cba0123774e8e34d3ceb9e18daed6f35560c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d77d078572dbe7f11e91ece0d25e76bd17004167dabce267a2b568e798aae158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2843fb3fcbadb926b104a9f7c4084468b36293aebed6c692c29419a0457b62a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d94eca13821749d7e15a350c411a831d2887f4211ed092a4fb57e26e49fcd9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ff8c1f1602e57602cae3f46dd3d2bf8d7abad6bd0952f4347152678e8606d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c1ecd3792204c942f231154736fda8a831651f3cc15fd115a2ff56fc88fc0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dfc7520e7cb1e09eb10bf9615dbae11d7e73474f1b73eed481d18269b65aef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9nf52\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:14Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.273197 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee22d4f3-8488-4497-a757-a6dd51e84539\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4fc247caf16eab425363e12219a76c40022f0daf0bc9c8b52f4eb8e3726f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026
b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86d62d991d6516ed42f7cad655018e1ebc85c8c1b100d624c1c2e5e8f616cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ba9819f4f1746f03a558617fc6ee14f5627c37c362d1198a303b6f1854c7bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79cba0b7394bf25be648502fc84a916d8f2fa0949edce20b5a748046ce9b6f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:14Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.287086 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:14Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.299612 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0d71a3ffdbe593936c83e92e92e286eaff092333bb3dd54cb5bc2825ca46fbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402aeaef03d8d3c684f0d8e7fe8a080bc02d761104e3c1095b3233aade2b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:14Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.313319 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rkgd9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a939a03f-0eec-49cb-9b23-40e359e427d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd633caedf88a1bec633cef61b8b30ae422b8c8d91a012395831a09d398f0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cd633caedf88a1bec633cef61b8b30ae422b8c8d91a012395831a09d398f0f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b9e8e1f9326660fe6d7c89af18c5f7dd1134e9f2e866da8ba9c795e13cb57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b9e8e1f9326660fe6d7c89af18c5f7dd1134e9f2e866da8ba9c795e13cb57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ef54ee9f22072e2b437f38f6d06f24e6f1da02555505bc65c488b5818c87a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ef54ee9f22072e2b437f38f6d06f24e6f1da02555505bc65c488b5818c87a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35fecc2332fdab60719b864653301c5e8322b13ace401a0ff016ff3dc09cd3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35fecc2332fdab60719b864653301c5e8322b13ace401a0ff016ff3dc09cd3d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rkgd9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:14Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.318732 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.318762 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.318772 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.318785 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.318795 4986 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:14Z","lastTransitionTime":"2025-12-03T12:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.328928 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231ed674-1d19-4ee2-b29c-a1b7453ed531\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T12:56:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 12:56:01.437086 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 12:56:01.437205 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 12:56:01.438166 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1972143202/tls.crt::/tmp/serving-cert-1972143202/tls.key\\\\\\\"\\\\nI1203 12:56:02.167657 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 12:56:02.174337 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 12:56:02.174360 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 12:56:02.174377 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 12:56:02.174382 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 12:56:02.178753 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 12:56:02.179168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 12:56:02.179184 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 12:56:02.179187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 12:56:02.179190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 12:56:02.178775 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 12:56:02.181220 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:14Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.357888 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d51894dd-38f0-413a-adaf-22d196474bed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa98006a8736394300cdb9ef3331d2a23e1b3af2e2203f757c51ca9c700d26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731
4731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7d24b5f1325cfb2d210cb9f311026ab370c91deecc5d0d951bcd4eda79816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9bb9d940d346f9895e24d98173668ea2d0e68fd3035e5b03475bbc936582958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7fa0d673f037c8edcc75ee30b1bbeea73da55d5459347dacc9251fffbb38be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4007e9ed33227938a5a3656fd74958aef599418fec4cd2048dfa7dd88924dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:14Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.370430 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b923e52e6284c8adca87e7d3d801a432a676e4389d7ab3c0cba584c5b7d96f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:14Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.380797 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d66d06e79ffb2ecd12acc0380e5e4839904147f9194e48277ca667df13a0cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:14Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.391631 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://535b7b05dbe9b47808f3d358a1f370b696a5d5bb4dd96b049ac2cbe3d18ea065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ab87b22153ba31fa79d2d210e695b274c2ddc878ad679c605aa8fb716534d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xggpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2025-12-03T12:56:14Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.421402 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.421431 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.421440 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.421453 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.421463 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:14Z","lastTransitionTime":"2025-12-03T12:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.523733 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.523771 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.523782 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.523798 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.523810 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:14Z","lastTransitionTime":"2025-12-03T12:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.626240 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.626276 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.626305 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.626321 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.626330 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:14Z","lastTransitionTime":"2025-12-03T12:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.728844 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.728881 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.728893 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.728909 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.728920 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:14Z","lastTransitionTime":"2025-12-03T12:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.832407 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.832472 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.832489 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.832511 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.832531 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:14Z","lastTransitionTime":"2025-12-03T12:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.935809 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.935888 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.935927 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.935961 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:14 crc kubenswrapper[4986]: I1203 12:56:14.935985 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:14Z","lastTransitionTime":"2025-12-03T12:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.038415 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.038822 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.039033 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.039185 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.039382 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:15Z","lastTransitionTime":"2025-12-03T12:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.141735 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.141769 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.141781 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.141796 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.141807 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:15Z","lastTransitionTime":"2025-12-03T12:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.170530 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rkgd9" event={"ID":"a939a03f-0eec-49cb-9b23-40e359e427d5","Type":"ContainerStarted","Data":"b854d526fa4ac2b270c7f27c1b35b1ff0fbc671e40a928649b578f926d6b51b4"} Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.172406 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9nf52_d3a45156-295b-4093-80e7-2059f81ddbd7/ovnkube-controller/0.log" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.174764 4986 generic.go:334] "Generic (PLEG): container finished" podID="d3a45156-295b-4093-80e7-2059f81ddbd7" containerID="7c1ecd3792204c942f231154736fda8a831651f3cc15fd115a2ff56fc88fc0ca" exitCode=1 Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.174856 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" event={"ID":"d3a45156-295b-4093-80e7-2059f81ddbd7","Type":"ContainerDied","Data":"7c1ecd3792204c942f231154736fda8a831651f3cc15fd115a2ff56fc88fc0ca"} Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.175468 4986 scope.go:117] "RemoveContainer" containerID="7c1ecd3792204c942f231154736fda8a831651f3cc15fd115a2ff56fc88fc0ca" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.188252 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"231ed674-1d19-4ee2-b29c-a1b7453ed531\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T12:56:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 12:56:01.437086 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 12:56:01.437205 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 12:56:01.438166 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972143202/tls.crt::/tmp/serving-cert-1972143202/tls.key\\\\\\\"\\\\nI1203 12:56:02.167657 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 12:56:02.174337 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 12:56:02.174360 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 12:56:02.174377 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 12:56:02.174382 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 12:56:02.178753 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 12:56:02.179168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 12:56:02.179184 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 12:56:02.179187 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 12:56:02.179190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 12:56:02.178775 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 12:56:02.181220 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:15Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.211345 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d51894dd-38f0-413a-adaf-22d196474bed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa98006a8736394300cdb9ef3331d2a23e1b3af2e2203f757c51ca9c700d26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7d24b5f1325cfb2d210cb9f311026ab370c91deecc5d0d951bcd4eda79816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9bb9d940d346f9895e24d98173668ea2d0e68fd3035e5b03475bbc936582958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7fa0d673f037c8edcc75ee30b1bbeea73da55d5459347dacc9251fffbb38be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4007e9ed33227938a5a3656fd74958aef599418fec4cd2048dfa7dd88924dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:15Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.224472 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b923e52e6284c8adca87e7d3d801a432a676e4389d7ab3c0cba584c5b7d96f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:15Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.235655 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d66d06e79ffb2ecd12acc0380e5e4839904147f9194e48277ca667df13a0cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T12:56:15Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.244536 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.244578 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.244590 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.244607 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.244621 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:15Z","lastTransitionTime":"2025-12-03T12:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.248152 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://535b7b05dbe9b47808f3d358a1f370b696a5d5bb4dd96b049ac2cbe3d18ea065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ab87b22153ba31fa79d2d210e695b274c2ddc878ad679c605aa8fb716534d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xggpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:15Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.260575 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:15Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.273700 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:15Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.284350 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8lcrn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75e4c223-9952-4d1b-b83b-5c45cfc51432\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b2af5d43c7842f129b6521e5b3b1963f6060b3e0c552a771c7df6a2a9ef867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnr67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8lcrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:15Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.298821 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fszqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76bcf5c-6a62-4360-826d-ecac337c88ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d94370a14317ade02cb805783e09ffcd8c6ff07f35e9124e302028e6dfa3c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhtrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fszqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:15Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.311657 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-px97g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97196b6d-75cc-4de4-8805-f9ce3fbd4230\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c28bf1aff27ba408262b5fc561e084ff3813ce95a7c14b57fea24d409eb33bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v4j5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-px97g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:15Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.322433 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2p8xn"] Dec 03 
12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.322849 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2p8xn" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.324603 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.325915 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.335811 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3a45156-295b-4093-80e7-2059f81ddbd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c79f84fc019d109bac446dd058b4bafb0b516a13fae47b1924e4f7690658ba8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be6405facfa7726cbe56c48184cba0123774e8e34d3ceb9e18daed6f35560c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d77d078572dbe7f11e91ece0d25e76bd17004167dabce267a2b568e798aae158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2843fb3fcbadb926b104a9f7c4084468b36293aebed6c692c29419a0457b62a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d94eca13821749d7e15a350c411a831d2887f4211ed092a4fb57e26e49fcd9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ff8c1f1602e57602cae3f46dd3d2bf8d7abad6bd0952f4347152678e8606d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c1ecd3792204c942f231154736fda8a831651f3cc15fd115a2ff56fc88fc0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dfc7520e7cb1e09eb10bf9615dbae11d7e73474f1b73eed481d18269b65aef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9nf52\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:15Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.348304 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.348353 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.348364 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.348380 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.348392 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:15Z","lastTransitionTime":"2025-12-03T12:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.351110 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee22d4f3-8488-4497-a757-a6dd51e84539\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4fc247caf16eab425363e12219a76c40022f0daf0bc9c8b52f4eb8e3726f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86d62d991d
6516ed42f7cad655018e1ebc85c8c1b100d624c1c2e5e8f616cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ba9819f4f1746f03a558617fc6ee14f5627c37c362d1198a303b6f1854c7bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79cba0b7394bf25be648502fc84a916d8f2fa0949edce20b5a748046ce9b6f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:15Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.364548 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:15Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.380466 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0d71a3ffdbe593936c83e92e92e286eaff092333bb3dd54cb5bc2825ca46fbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402aeaef03d8d3c684f0d8e7fe8a080bc02d761104e3c1095b3233aade2b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:15Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.397415 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rkgd9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a939a03f-0eec-49cb-9b23-40e359e427d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b854d526fa4ac2b270c7f27c1b35b1ff0fbc671e40a928649b578f926d6b51b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd633caedf88a1bec633cef61b8b30ae422b8c8d91a012395831a09d398f0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cd633caedf88a1bec633cef61b8b30ae422b8c8d91a012395831a09d398f0f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b9e
8e1f9326660fe6d7c89af18c5f7dd1134e9f2e866da8ba9c795e13cb57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b9e8e1f9326660fe6d7c89af18c5f7dd1134e9f2e866da8ba9c795e13cb57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ef54ee9f22072e2b437f38f6d06f24e6f1da02555505bc65c488b5818c87a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ef54ee9f22072e2b437f38f6d06f24e6f1da02555505bc65c488b5818c87a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:12Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35fecc2332fdab60719b864653301c5e8322b13ace401a0ff016ff3dc09cd3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35fecc2332fdab60719b864653301c5e8322b13ace401a0ff016ff3dc09cd3d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rkgd9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:15Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.411908 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d66d06e79ffb2ecd12acc0380e5e4839904147f9194e48277ca667df13a0cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:15Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.424527 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://535b7b05dbe9b47808f3d358a1f370b696a5d5bb4dd96b049ac2cbe3d18ea065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08a
af09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ab87b22153ba31fa79d2d210e695b274c2ddc878ad679c605aa8fb716534d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xggpj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:15Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.440549 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231ed674-1d19-4ee2-b29c-a1b7453ed531\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T12:56:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 12:56:01.437086 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 12:56:01.437205 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 12:56:01.438166 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1972143202/tls.crt::/tmp/serving-cert-1972143202/tls.key\\\\\\\"\\\\nI1203 12:56:02.167657 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 12:56:02.174337 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 12:56:02.174360 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 12:56:02.174377 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 12:56:02.174382 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 12:56:02.178753 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 12:56:02.179168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 12:56:02.179184 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 12:56:02.179187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 12:56:02.179190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 12:56:02.178775 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 12:56:02.181220 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:15Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.450360 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.450396 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.450406 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.450422 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.450433 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:15Z","lastTransitionTime":"2025-12-03T12:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.458573 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d51894dd-38f0-413a-adaf-22d196474bed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa98006a8736394300cdb9ef3331d2a23e1b3af2e2203f757c51ca9c700d26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7d24b5f1325cfb2d210cb9f311026ab370c91deecc5d0d951bcd4eda79816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9bb9d940d346f9895e24d98173668ea2d0e68fd3035e5b03475bbc936582958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7fa0d673f037c8edcc75ee30b1bbeea73da55d5459347dacc9251fffbb38be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4007e9ed33227938a5a3656fd74958aef599418fec4cd2048dfa7dd88924dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-12-03T12:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:15Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.470355 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/957171e6-7b64-4d92-b690-342e3251ed8f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-2p8xn\" (UID: \"957171e6-7b64-4d92-b690-342e3251ed8f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2p8xn" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.470400 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/957171e6-7b64-4d92-b690-342e3251ed8f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-2p8xn\" (UID: \"957171e6-7b64-4d92-b690-342e3251ed8f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2p8xn" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.470425 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbhn9\" (UniqueName: 
\"kubernetes.io/projected/957171e6-7b64-4d92-b690-342e3251ed8f-kube-api-access-fbhn9\") pod \"ovnkube-control-plane-749d76644c-2p8xn\" (UID: \"957171e6-7b64-4d92-b690-342e3251ed8f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2p8xn" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.470484 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/957171e6-7b64-4d92-b690-342e3251ed8f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-2p8xn\" (UID: \"957171e6-7b64-4d92-b690-342e3251ed8f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2p8xn" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.477530 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b923e52e6284c8adca87e7d3d801a432a676e4389d7ab3c0cba584c5b7d96f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95
b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:15Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.487441 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8lcrn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75e4c223-9952-4d1b-b83b-5c45cfc51432\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b2af5d43c7842f129b6521e5b3b1963f6060b3e0c552a771c7df6a2a9ef867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnr67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8lcrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:15Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.506814 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:15Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.519169 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:15Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.540477 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3a45156-295b-4093-80e7-2059f81ddbd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c79f84fc019d109bac446dd058b4bafb0b516a13fae47b1924e4f7690658ba8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be6405facfa7726cbe56c48184cba0123774e8e34d3ceb9e18daed6f35560c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d77d078572dbe7f11e91ece0d25e76bd17004167dabce267a2b568e798aae158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2843fb3fcbadb926b104a9f7c4084468b36293aebed6c692c29419a0457b62a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d94eca13821749d7e15a350c411a831d2887f4211ed092a4fb57e26e49fcd9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ff8c1f1602e57602cae3f46dd3d2bf8d7abad6bd0952f4347152678e8606d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c1ecd3792204c942f231154736fda8a831651f3cc15fd115a2ff56fc88fc0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c1ecd3792204c942f231154736fda8a831651f3cc15fd115a2ff56fc88fc0ca\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T12:56:14Z\\\",\\\"message\\\":\\\"go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 12:56:14.619391 6322 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 12:56:14.619499 6322 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 12:56:14.619513 6322 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 12:56:14.619531 6322 factory.go:656] Stopping watch factory\\\\nI1203 12:56:14.619547 6322 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 12:56:14.619554 6322 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 12:56:14.619678 6322 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 12:56:14.620055 6322 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 12:56:14.620184 6322 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 12:56:14.620382 6322 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 12:56:14.620561 6322 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dfc7520e7cb1e09eb10bf9615dbae11d7e73474f1b73eed481d18269b65aef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09757984f92a29fd76ddd2d
25fc00d680cc4883c4766080e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9nf52\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:15Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.550832 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2p8xn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"957171e6-7b64-4d92-b690-342e3251ed8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbhn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbhn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hos
tIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2p8xn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:15Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.552186 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.552215 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.552224 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.552237 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.552246 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:15Z","lastTransitionTime":"2025-12-03T12:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.559749 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fszqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76bcf5c-6a62-4360-826d-ecac337c88ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d94370a14317ade02cb805783e09ffcd8c6ff07f35e9124e302028e6dfa3c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhtrn\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fszqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:15Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.570896 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/957171e6-7b64-4d92-b690-342e3251ed8f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-2p8xn\" (UID: \"957171e6-7b64-4d92-b690-342e3251ed8f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2p8xn" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.570933 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/957171e6-7b64-4d92-b690-342e3251ed8f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-2p8xn\" (UID: \"957171e6-7b64-4d92-b690-342e3251ed8f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2p8xn" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.570950 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbhn9\" (UniqueName: \"kubernetes.io/projected/957171e6-7b64-4d92-b690-342e3251ed8f-kube-api-access-fbhn9\") pod \"ovnkube-control-plane-749d76644c-2p8xn\" (UID: \"957171e6-7b64-4d92-b690-342e3251ed8f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2p8xn" Dec 03 12:56:15 crc 
kubenswrapper[4986]: I1203 12:56:15.570977 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/957171e6-7b64-4d92-b690-342e3251ed8f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-2p8xn\" (UID: \"957171e6-7b64-4d92-b690-342e3251ed8f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2p8xn" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.571256 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-px97g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97196b6d-75cc-4de4-8805-f9ce3fbd4230\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c28bf1aff27ba408262b5fc561e084ff3813ce95a7c14b57fea24d409eb33bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"na
me\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v4j5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-px97g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:15Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.571519 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/957171e6-7b64-4d92-b690-342e3251ed8f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-2p8xn\" (UID: \"957171e6-7b64-4d92-b690-342e3251ed8f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2p8xn" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.571825 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/957171e6-7b64-4d92-b690-342e3251ed8f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-2p8xn\" (UID: \"957171e6-7b64-4d92-b690-342e3251ed8f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2p8xn" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.578412 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/957171e6-7b64-4d92-b690-342e3251ed8f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-2p8xn\" (UID: \"957171e6-7b64-4d92-b690-342e3251ed8f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2p8xn" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.585143 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rkgd9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a939a03f-0eec-49cb-9b23-40e359e427d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b854d526fa4ac2b270c7f27c1b35b1ff0fbc671e40a928649b578f926d6b51b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd633caedf88a1bec633cef61b8b30ae422b8c8d91a012395831a09d398f0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cd633caedf88a1bec633cef61b8b30ae422b8c8d91a012395831a09d398f0f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b9e
8e1f9326660fe6d7c89af18c5f7dd1134e9f2e866da8ba9c795e13cb57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b9e8e1f9326660fe6d7c89af18c5f7dd1134e9f2e866da8ba9c795e13cb57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ef54ee9f22072e2b437f38f6d06f24e6f1da02555505bc65c488b5818c87a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ef54ee9f22072e2b437f38f6d06f24e6f1da02555505bc65c488b5818c87a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:12Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35fecc2332fdab60719b864653301c5e8322b13ace401a0ff016ff3dc09cd3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35fecc2332fdab60719b864653301c5e8322b13ace401a0ff016ff3dc09cd3d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rkgd9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:15Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.587227 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbhn9\" (UniqueName: \"kubernetes.io/projected/957171e6-7b64-4d92-b690-342e3251ed8f-kube-api-access-fbhn9\") pod \"ovnkube-control-plane-749d76644c-2p8xn\" (UID: \"957171e6-7b64-4d92-b690-342e3251ed8f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2p8xn" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.598572 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee22d4f3-8488-4497-a757-a6dd51e84539\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4fc247caf16eab425363e12219a76c40022f0daf0bc9c8b52f4eb8e3726f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9d
a410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86d62d991d6516ed42f7cad655018e1ebc85c8c1b100d624c1c2e5e8f616cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ba9819f4f1746f03a558617fc6ee14f5627c37c362d1198a303b6f1854c7bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79cba0b7394bf25be648502fc84a916d8f2fa0949edce20b5a748046ce9b6f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:15Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.609525 4986 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:15Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.620658 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0d71a3ffdbe593936c83e92e92e286eaff092333bb3dd54cb5bc2825ca46fbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402aeaef03d8d3c684f0d8e7fe8a080bc02d761104e3c1095b3233aade2b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:15Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.634846 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2p8xn" Dec 03 12:56:15 crc kubenswrapper[4986]: W1203 12:56:15.647336 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod957171e6_7b64_4d92_b690_342e3251ed8f.slice/crio-68f7048d71385cd80c38c5d41a681dd7734febbfc798911a82eca76c2dcead9d WatchSource:0}: Error finding container 68f7048d71385cd80c38c5d41a681dd7734febbfc798911a82eca76c2dcead9d: Status 404 returned error can't find the container with id 68f7048d71385cd80c38c5d41a681dd7734febbfc798911a82eca76c2dcead9d Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.656930 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.656958 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.656968 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.656980 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.656989 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:15Z","lastTransitionTime":"2025-12-03T12:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.701981 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.711943 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8lcrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75e4c223-9952-4d1b-b83b-5c45cfc51432\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b2af5d43c7842f129b6521e5b3b1963f6060b3e0c552a771c7df6a2a9ef867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountP
ath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnr67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8lcrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:15Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.723912 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:15Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.734722 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:15Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.752048 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3a45156-295b-4093-80e7-2059f81ddbd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c79f84fc019d109bac446dd058b4bafb0b516a13fae47b1924e4f7690658ba8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be6405facfa7726cbe56c48184cba0123774e8e34d3ceb9e18daed6f35560c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d77d078572dbe7f11e91ece0d25e76bd17004167dabce267a2b568e798aae158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2843fb3fcbadb926b104a9f7c4084468b36293aebed6c692c29419a0457b62a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d94eca13821749d7e15a350c411a831d2887f4211ed092a4fb57e26e49fcd9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ff8c1f1602e57602cae3f46dd3d2bf8d7abad6bd0952f4347152678e8606d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c1ecd3792204c942f231154736fda8a831651f3cc15fd115a2ff56fc88fc0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c1ecd3792204c942f231154736fda8a831651f3cc15fd115a2ff56fc88fc0ca\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T12
:56:14Z\\\",\\\"message\\\":\\\"go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 12:56:14.619391 6322 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 12:56:14.619499 6322 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 12:56:14.619513 6322 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 12:56:14.619531 6322 factory.go:656] Stopping watch factory\\\\nI1203 12:56:14.619547 6322 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 12:56:14.619554 6322 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 12:56:14.619678 6322 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 12:56:14.620055 6322 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 12:56:14.620184 6322 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 12:56:14.620382 6322 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 12:56:14.620561 6322 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dfc7520e7cb1e09eb10bf9615dbae11d7e73474f1b73eed481d18269b65aef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09757984f92a29fd76ddd2d
25fc00d680cc4883c4766080e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9nf52\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:15Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.758798 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.758829 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.758838 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.758852 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.758862 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:15Z","lastTransitionTime":"2025-12-03T12:56:15Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.762071 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2p8xn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"957171e6-7b64-4d92-b690-342e3251ed8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbhn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbhn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2p8xn\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:15Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.777273 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fszqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76bcf5c-6a62-4360-826d-ecac337c88ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d94370a14317ade02cb805783e09ffcd8c6ff07f35e9124e302028e6dfa3c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
5-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhtrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fszqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:15Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.789220 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-px97g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97196b6d-75cc-4de4-8805-f9ce3fbd4230\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c28bf1aff27ba408262b5fc561e084ff3813ce95a7c14b57fea24d409eb33bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v4j5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-px97g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:15Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.802962 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rkgd9" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a939a03f-0eec-49cb-9b23-40e359e427d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b854d526fa4ac2b270c7f27c1b35b1ff0fbc671e40a928649b578f926d6b51b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6e
aae8d1329bb3774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:
56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd633caedf88a1bec633cef61b8b30ae422b8c8d91a012395831a09d398f0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cd633caedf88a1bec633cef61b8b30ae422b8c8d91a012395831a09d398f0f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://d1b9e8e1f9326660fe6d7c89af18c5f7dd1134e9f2e866da8ba9c795e13cb57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b9e8e1f9326660fe6d7c89af18c5f7dd1134e9f2e866da8ba9c795e13cb57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ef54ee9f22072e2b437f38f6d06f24e6f1da02555505bc65c488b5818c87a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ef54ee9f22072e2b437f38f6d06f24e6f1da02555505bc65c488b5818c87a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:12Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35fecc2332fdab60719b864653301c5e8322b13ace401a0ff016ff3dc09cd3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35fecc2332fdab60719b864653301c5e8322b13ace401a0ff016ff3dc09cd3d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rkgd9\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:15Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.813134 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee22d4f3-8488-4497-a757-a6dd51e84539\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4fc247caf16eab425363e12219a76c40022f0daf0bc9c8b52f4eb8e3726f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86d62d991d6516ed42f7cad655018e1ebc85c8c1b100d624c1c2e5e8f616cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ba9819f4f1746f03a558617fc6ee14f5627c37c362d1198a303b6f1854c7bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79cba0b7394bf25be648502fc84a916d8f2fa0949edce20b5a748046ce9b6f75\\\",
\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:15Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.822629 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:15Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.832789 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0d71a3ffdbe593936c83e92e92e286eaff092333bb3dd54cb5bc2825ca46fbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402aeaef03d8d3c684f0d8e7fe8a080bc02d761104e3c1095b3233aade2b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:15Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.842428 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d66d06e79ffb2ecd12acc0380e5e4839904147f9194e48277ca667df13a0cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T12:56:15Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.852582 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://535b7b05dbe9b47808f3d358a1f370b696a5d5bb4dd96b049ac2cbe3d18ea065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ab87b22153ba31fa79d2d210e695b274c2ddc878ad679c605aa8fb716534d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xggpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:15Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.861036 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 
12:56:15.861073 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.861084 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.861099 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.861111 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:15Z","lastTransitionTime":"2025-12-03T12:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.864332 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"231ed674-1d19-4ee2-b29c-a1b7453ed531\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T12:56:02Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1203 12:56:01.437086 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 12:56:01.437205 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 12:56:01.438166 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972143202/tls.crt::/tmp/serving-cert-1972143202/tls.key\\\\\\\"\\\\nI1203 12:56:02.167657 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 12:56:02.174337 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 12:56:02.174360 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 12:56:02.174377 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 12:56:02.174382 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 12:56:02.178753 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 12:56:02.179168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 12:56:02.179184 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 12:56:02.179187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 12:56:02.179190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 12:56:02.178775 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1203 12:56:02.181220 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458b
f21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:15Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.882467 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d51894dd-38f0-413a-adaf-22d196474bed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa98006a8736394300cdb9ef3331d2a23e1b3af2e2203f757c51ca9c700d26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7d24b5f1325cfb2d210cb9f311026ab370c91deecc5d0d951bcd4eda79816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9bb9d940d346f9895e24d98173668ea2d0e68fd3035e5b03475bbc936582958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7fa0d673f037c8edcc75ee30b1bbeea73da55d5459347dacc9251fffbb38be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4007e9ed33227938a5a3656fd74958aef599418fec4cd2048dfa7dd88924dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:15Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.897777 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b923e52e6284c8adca87e7d3d801a432a676e4389d7ab3c0cba584c5b7d96f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:15Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.942974 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.943005 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:56:15 crc kubenswrapper[4986]: E1203 12:56:15.943085 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:56:15 crc kubenswrapper[4986]: E1203 12:56:15.943171 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.943304 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:56:15 crc kubenswrapper[4986]: E1203 12:56:15.943381 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.963816 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.963846 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.963855 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.963869 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:15 crc kubenswrapper[4986]: I1203 12:56:15.963878 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:15Z","lastTransitionTime":"2025-12-03T12:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.056932 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-rl2mt"] Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.057846 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:56:16 crc kubenswrapper[4986]: E1203 12:56:16.057969 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl2mt" podUID="ea24f625-ded4-4e37-a23b-f96fe691b0dd" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.066124 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.066168 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.066176 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.066190 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.066200 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:16Z","lastTransitionTime":"2025-12-03T12:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.075653 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ea24f625-ded4-4e37-a23b-f96fe691b0dd-metrics-certs\") pod \"network-metrics-daemon-rl2mt\" (UID: \"ea24f625-ded4-4e37-a23b-f96fe691b0dd\") " pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.075892 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhr82\" (UniqueName: \"kubernetes.io/projected/ea24f625-ded4-4e37-a23b-f96fe691b0dd-kube-api-access-xhr82\") pod \"network-metrics-daemon-rl2mt\" (UID: \"ea24f625-ded4-4e37-a23b-f96fe691b0dd\") " pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.077700 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee22d4f3-8488-4497-a757-a6dd51e84539\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4fc247caf16eab425363e12219a76c40022f0daf0bc9c8b52f4eb8e3726f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86d62d991d6516ed42f7cad655018e1ebc85c8c1b100d624c1c2e5e8f616cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ba9819f4f1746f03a558617fc6ee14f5627c37c362d1198a303b6f1854c7bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79cba0b7394bf25be648502fc84a916d8f2fa0949edce20b5a748046ce9b6f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:16Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.089976 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:16Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.103547 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0d71a3ffdbe593936c83e92e92e286eaff092333bb3dd54cb5bc2825ca46fbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402aeaef03d8d3c684f0d8e7fe8a080bc02d761104e3c1095b3233aade2b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:16Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.126765 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rkgd9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a939a03f-0eec-49cb-9b23-40e359e427d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b854d526fa4ac2b270c7f27c1b35b1ff0fbc671e40a928649b578f926d6b51b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd633caedf88a1bec633cef61b8b30ae422b8c8d91a012395831a09d398f0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cd633caedf88a1bec633cef61b8b30ae422b8c8d91a012395831a09d398f0f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b9e
8e1f9326660fe6d7c89af18c5f7dd1134e9f2e866da8ba9c795e13cb57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b9e8e1f9326660fe6d7c89af18c5f7dd1134e9f2e866da8ba9c795e13cb57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ef54ee9f22072e2b437f38f6d06f24e6f1da02555505bc65c488b5818c87a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ef54ee9f22072e2b437f38f6d06f24e6f1da02555505bc65c488b5818c87a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:12Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35fecc2332fdab60719b864653301c5e8322b13ace401a0ff016ff3dc09cd3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35fecc2332fdab60719b864653301c5e8322b13ace401a0ff016ff3dc09cd3d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rkgd9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:16Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.138617 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rl2mt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea24f625-ded4-4e37-a23b-f96fe691b0dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhr82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhr82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rl2mt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:16Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:16 crc 
kubenswrapper[4986]: I1203 12:56:16.158045 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231ed674-1d19-4ee2-b29c-a1b7453ed531\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0ca216d3d112
1b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T12:56:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 12:56:01.437086 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 12:56:01.437205 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 12:56:01.438166 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972143202/tls.crt::/tmp/serving-cert-1972143202/tls.key\\\\\\\"\\\\nI1203 12:56:02.167657 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 12:56:02.174337 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 12:56:02.174360 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 12:56:02.174377 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 12:56:02.174382 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 12:56:02.178753 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 12:56:02.179168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 12:56:02.179184 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 12:56:02.179187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 
12:56:02.179190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 12:56:02.178775 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 12:56:02.181220 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:16Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.168255 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.168292 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.168301 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.168313 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.168322 4986 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:16Z","lastTransitionTime":"2025-12-03T12:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.177617 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ea24f625-ded4-4e37-a23b-f96fe691b0dd-metrics-certs\") pod \"network-metrics-daemon-rl2mt\" (UID: \"ea24f625-ded4-4e37-a23b-f96fe691b0dd\") " pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.177884 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhr82\" (UniqueName: \"kubernetes.io/projected/ea24f625-ded4-4e37-a23b-f96fe691b0dd-kube-api-access-xhr82\") pod \"network-metrics-daemon-rl2mt\" (UID: \"ea24f625-ded4-4e37-a23b-f96fe691b0dd\") " pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:56:16 crc kubenswrapper[4986]: E1203 12:56:16.177727 4986 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 12:56:16 crc kubenswrapper[4986]: E1203 12:56:16.178056 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea24f625-ded4-4e37-a23b-f96fe691b0dd-metrics-certs podName:ea24f625-ded4-4e37-a23b-f96fe691b0dd nodeName:}" failed. No retries permitted until 2025-12-03 12:56:16.678041175 +0000 UTC m=+36.144472366 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ea24f625-ded4-4e37-a23b-f96fe691b0dd-metrics-certs") pod "network-metrics-daemon-rl2mt" (UID: "ea24f625-ded4-4e37-a23b-f96fe691b0dd") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.179346 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2p8xn" event={"ID":"957171e6-7b64-4d92-b690-342e3251ed8f","Type":"ContainerStarted","Data":"68f7048d71385cd80c38c5d41a681dd7734febbfc798911a82eca76c2dcead9d"} Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.181699 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9nf52_d3a45156-295b-4093-80e7-2059f81ddbd7/ovnkube-controller/0.log" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.184690 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" event={"ID":"d3a45156-295b-4093-80e7-2059f81ddbd7","Type":"ContainerStarted","Data":"6dcd8afad237a7ca1c7f5e0d37bec9ab99fd61ac7be8b3604a8f5d0bb6a2181e"} Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.184879 4986 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.202321 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d51894dd-38f0-413a-adaf-22d196474bed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa98006a8736394300cdb9ef3331d2a23e1b3af2e2203f757c51ca9c700d26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7d24b5f1325cfb2d210cb9f311026ab370c91deecc5d0d951bcd4eda79816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9bb9d940d346f9895e24d98173668ea2d0e68fd3035e5b03475bbc936582958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7fa0d673f037c8edcc75ee30b1bbeea73da55d5459347dacc9251fffbb38be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4007e9ed33227938a5a3656fd74958aef599418fec4cd2048dfa7dd88924dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:16Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.213436 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhr82\" (UniqueName: \"kubernetes.io/projected/ea24f625-ded4-4e37-a23b-f96fe691b0dd-kube-api-access-xhr82\") pod \"network-metrics-daemon-rl2mt\" (UID: \"ea24f625-ded4-4e37-a23b-f96fe691b0dd\") " pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.230363 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b923e52e6284c8adca87e7d3d801a432a676e4389d7ab3c0cba584c5b7d96f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:16Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.242572 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d66d06e79ffb2ecd12acc0380e5e4839904147f9194e48277ca667df13a0cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:16Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.253401 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://535b7b05dbe9b47808f3d358a1f370b696a5d5bb4dd96b049ac2cbe3d18ea065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ab87b22153ba31fa79d2d210e695b274c2ddc878ad679c605aa8fb716534d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xggpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2025-12-03T12:56:16Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.264956 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:16Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.270093 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.270116 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.270123 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.270135 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.270144 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:16Z","lastTransitionTime":"2025-12-03T12:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.276268 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:16Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.286125 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8lcrn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75e4c223-9952-4d1b-b83b-5c45cfc51432\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b2af5d43c7842f129b6521e5b3b1963f6060b3e0c552a771c7df6a2a9ef867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnr67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8lcrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:16Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.295262 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fszqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76bcf5c-6a62-4360-826d-ecac337c88ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d94370a14317ade02cb805783e09ffcd8c6ff07f35e9124e302028e6dfa3c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhtrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fszqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:16Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.306008 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-px97g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97196b6d-75cc-4de4-8805-f9ce3fbd4230\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c28bf1aff27ba408262b5fc561e084ff3813ce95a7c14b57fea24d409eb33bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v4j5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-px97g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:16Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.321570 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3a45156-295b-4093-80e7-2059f81ddbd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c79f84fc019d109bac446dd058b4bafb0b516a13fae47b1924e4f7690658ba8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be6405facfa7726cbe56c48184cba0123774e8e34d3ceb9e18daed6f35560c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d77d078572dbe7f11e91ece0d25e76bd17004167dabce267a2b568e798aae158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2843fb3fcbadb926b104a9f7c4084468b36293aebed6c692c29419a0457b62a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d94eca13821749d7e15a350c411a831d2887f4211ed092a4fb57e26e49fcd9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ff8c1f1602e57602cae3f46dd3d2bf8d7abad6bd0952f4347152678e8606d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c1ecd3792204c942f231154736fda8a831651f3cc15fd115a2ff56fc88fc0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c1ecd3792204c942f231154736fda8a831651f3cc15fd115a2ff56fc88fc0ca\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T12:56:14Z\\\",\\\"message\\\":\\\"go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 12:56:14.619391 6322 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 12:56:14.619499 6322 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 12:56:14.619513 6322 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 12:56:14.619531 6322 factory.go:656] Stopping watch factory\\\\nI1203 12:56:14.619547 6322 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 12:56:14.619554 6322 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 12:56:14.619678 6322 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 12:56:14.620055 6322 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 12:56:14.620184 6322 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 12:56:14.620382 6322 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 12:56:14.620561 6322 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dfc7520e7cb1e09eb10bf9615dbae11d7e73474f1b73eed481d18269b65aef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09757984f92a29fd76ddd2d
25fc00d680cc4883c4766080e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9nf52\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:16Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.336206 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2p8xn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"957171e6-7b64-4d92-b690-342e3251ed8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbhn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbhn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hos
tIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2p8xn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:16Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.350010 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-px97g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97196b6d-75cc-4de4-8805-f9ce3fbd4230\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c28bf1aff27ba408262b5fc561e084ff3813ce95a7c14b57fea24d409eb33bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v4j5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-px97g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:16Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.373650 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.373694 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.373705 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.373724 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.373736 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:16Z","lastTransitionTime":"2025-12-03T12:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.382074 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3a45156-295b-4093-80e7-2059f81ddbd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c79f84fc019d109bac446dd058b4bafb0b516a13fae47b1924e4f7690658ba8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be6405facfa7726cbe56c48184cba0123774e8e34d3ceb9e18daed6f35560c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d77d078572dbe7f11e91ece0d25e76bd17004167dabce267a2b568e798aae158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2843fb3fcbadb926b104a9f7c4084468b36293aebed6c692c29419a0457b62a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d94eca13821749d7e15a350c411a831d2887f4211ed092a4fb57e26e49fcd9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ff8c1f1602e57602cae3f46dd3d2bf8d7abad6bd0952f4347152678e8606d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dcd8afad237a7ca1c7f5e0d37bec9ab99fd61ac7be8b3604a8f5d0bb6a2181e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c1ecd3792204c942f231154736fda8a831651f3cc15fd115a2ff56fc88fc0ca\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T12:56:14Z\\\",\\\"message\\\":\\\"go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 12:56:14.619391 6322 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 12:56:14.619499 6322 
handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 12:56:14.619513 6322 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 12:56:14.619531 6322 factory.go:656] Stopping watch factory\\\\nI1203 12:56:14.619547 6322 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 12:56:14.619554 6322 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 12:56:14.619678 6322 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 12:56:14.620055 6322 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 12:56:14.620184 6322 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 12:56:14.620382 6322 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 12:56:14.620561 6322 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnku
be-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dfc7520e7cb1e09eb10bf9615dbae11d7e73474f1b73eed481d18269b65aef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
ecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9nf52\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:16Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.395361 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2p8xn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"957171e6-7b64-4d92-b690-342e3251ed8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbhn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbhn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2p8xn\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:16Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.404648 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fszqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76bcf5c-6a62-4360-826d-ecac337c88ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d94370a14317ade02cb805783e09ffcd8c6ff07f35e9124e302028e6dfa3c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
5-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhtrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fszqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:16Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.421673 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0d71a3ffdbe593936c83e92e92e286eaff092333bb3dd54cb5bc2825ca46fbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402aeaef03d8d3c684f0d8e7fe8a080bc02d761104e3c1095b3233aade2b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:16Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.439863 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rkgd9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a939a03f-0eec-49cb-9b23-40e359e427d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b854d526fa4ac2b270c7f27c1b35b1ff0fbc671e40a928649b578f926d6b51b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd633caedf88a1bec633cef61b8b30ae422b8c8d91a012395831a09d398f0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cd633caedf88a1bec633cef61b8b30ae422b8c8d91a012395831a09d398f0f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b9e
8e1f9326660fe6d7c89af18c5f7dd1134e9f2e866da8ba9c795e13cb57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b9e8e1f9326660fe6d7c89af18c5f7dd1134e9f2e866da8ba9c795e13cb57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ef54ee9f22072e2b437f38f6d06f24e6f1da02555505bc65c488b5818c87a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ef54ee9f22072e2b437f38f6d06f24e6f1da02555505bc65c488b5818c87a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:12Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35fecc2332fdab60719b864653301c5e8322b13ace401a0ff016ff3dc09cd3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35fecc2332fdab60719b864653301c5e8322b13ace401a0ff016ff3dc09cd3d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rkgd9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:16Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.454440 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rl2mt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea24f625-ded4-4e37-a23b-f96fe691b0dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhr82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhr82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rl2mt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:16Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:16 crc 
kubenswrapper[4986]: I1203 12:56:16.469610 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee22d4f3-8488-4497-a757-a6dd51e84539\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4fc247caf16eab425363e12219a76c40022f0daf0bc9c8b52f4eb8e3726f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86d62d991d6516ed42f7cad655018e1ebc85c8c1b100d624c1c2e5e8f616cf4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ba9819f4f1746f03a558617fc6ee14f5627c37c362d1198a303b6f1854c7bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79cba0b7394bf25be648502fc84a916d8f2fa0949edce20b5a748046ce9b6f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:16Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.475395 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.475426 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.475435 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.475447 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.475455 4986 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:16Z","lastTransitionTime":"2025-12-03T12:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.482896 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:16Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.500226 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b923e52e6284c8adca87e7d3d801a432a676e4389d7ab3c0cba584c5b7d96f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:16Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.511547 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d66d06e79ffb2ecd12acc0380e5e4839904147f9194e48277ca667df13a0cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:16Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.526096 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://535b7b05dbe9b47808f3d358a1f370b696a5d5bb4dd96b049ac2cbe3d18ea065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ab87b22153ba31fa79d2d210e695b274c2ddc878ad679c605aa8fb716534d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xggpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2025-12-03T12:56:16Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.544503 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231ed674-1d19-4ee2-b29c-a1b7453ed531\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/l
og/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operato
r@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T12:56:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 12:56:01.437086 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 12:56:01.437205 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 12:56:01.438166 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972143202/tls.crt::/tmp/serving-cert-1972143202/tls.key\\\\\\\"\\\\nI1203 12:56:02.167657 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 12:56:02.174337 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 12:56:02.174360 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 12:56:02.174377 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 12:56:02.174382 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 12:56:02.178753 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 12:56:02.179168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 12:56:02.179184 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 12:56:02.179187 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 12:56:02.179190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 12:56:02.178775 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 12:56:02.181220 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:16Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.569749 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d51894dd-38f0-413a-adaf-22d196474bed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa98006a8736394300cdb9ef3331d2a23e1b3af2e2203f757c51ca9c700d26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7d24b5f1325cfb2d210cb9f311026ab370c91deecc5d0d951bcd4eda79816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9bb9d940d346f9895e24d98173668ea2d0e68fd3035e5b03475bbc936582958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7fa0d673f037c8edcc75ee30b1bbeea73da55d5459347dacc9251fffbb38be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4007e9ed33227938a5a3656fd74958aef599418fec4cd2048dfa7dd88924dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:16Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.577932 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.577981 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.577993 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.578010 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.578024 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:16Z","lastTransitionTime":"2025-12-03T12:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.584071 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:16Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.594713 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8lcrn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75e4c223-9952-4d1b-b83b-5c45cfc51432\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b2af5d43c7842f129b6521e5b3b1963f6060b3e0c552a771c7df6a2a9ef867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnr67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8lcrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:16Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.606644 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:16Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.680307 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.680363 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.680380 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.680406 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.680422 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:16Z","lastTransitionTime":"2025-12-03T12:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.682741 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ea24f625-ded4-4e37-a23b-f96fe691b0dd-metrics-certs\") pod \"network-metrics-daemon-rl2mt\" (UID: \"ea24f625-ded4-4e37-a23b-f96fe691b0dd\") " pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:56:16 crc kubenswrapper[4986]: E1203 12:56:16.682875 4986 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 12:56:16 crc kubenswrapper[4986]: E1203 12:56:16.682923 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea24f625-ded4-4e37-a23b-f96fe691b0dd-metrics-certs podName:ea24f625-ded4-4e37-a23b-f96fe691b0dd nodeName:}" failed. No retries permitted until 2025-12-03 12:56:17.682909432 +0000 UTC m=+37.149340623 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ea24f625-ded4-4e37-a23b-f96fe691b0dd-metrics-certs") pod "network-metrics-daemon-rl2mt" (UID: "ea24f625-ded4-4e37-a23b-f96fe691b0dd") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.783425 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.783512 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.783539 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.783570 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.783593 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:16Z","lastTransitionTime":"2025-12-03T12:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.886565 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.886625 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.886641 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.886665 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.886683 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:16Z","lastTransitionTime":"2025-12-03T12:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.989406 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.989447 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.989459 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.989476 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:16 crc kubenswrapper[4986]: I1203 12:56:16.989488 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:16Z","lastTransitionTime":"2025-12-03T12:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:17 crc kubenswrapper[4986]: I1203 12:56:17.092655 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:17 crc kubenswrapper[4986]: I1203 12:56:17.092733 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:17 crc kubenswrapper[4986]: I1203 12:56:17.092763 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:17 crc kubenswrapper[4986]: I1203 12:56:17.092791 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:17 crc kubenswrapper[4986]: I1203 12:56:17.092810 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:17Z","lastTransitionTime":"2025-12-03T12:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:17 crc kubenswrapper[4986]: I1203 12:56:17.188704 4986 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 12:56:17 crc kubenswrapper[4986]: I1203 12:56:17.195200 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:17 crc kubenswrapper[4986]: I1203 12:56:17.195260 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:17 crc kubenswrapper[4986]: I1203 12:56:17.195338 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:17 crc kubenswrapper[4986]: I1203 12:56:17.195373 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:17 crc kubenswrapper[4986]: I1203 12:56:17.195396 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:17Z","lastTransitionTime":"2025-12-03T12:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:17 crc kubenswrapper[4986]: I1203 12:56:17.298773 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:17 crc kubenswrapper[4986]: I1203 12:56:17.298839 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:17 crc kubenswrapper[4986]: I1203 12:56:17.298863 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:17 crc kubenswrapper[4986]: I1203 12:56:17.298896 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:17 crc kubenswrapper[4986]: I1203 12:56:17.298921 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:17Z","lastTransitionTime":"2025-12-03T12:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:17 crc kubenswrapper[4986]: I1203 12:56:17.401984 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:17 crc kubenswrapper[4986]: I1203 12:56:17.402060 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:17 crc kubenswrapper[4986]: I1203 12:56:17.402072 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:17 crc kubenswrapper[4986]: I1203 12:56:17.402099 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:17 crc kubenswrapper[4986]: I1203 12:56:17.402110 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:17Z","lastTransitionTime":"2025-12-03T12:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:17 crc kubenswrapper[4986]: I1203 12:56:17.504342 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:17 crc kubenswrapper[4986]: I1203 12:56:17.504397 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:17 crc kubenswrapper[4986]: I1203 12:56:17.504414 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:17 crc kubenswrapper[4986]: I1203 12:56:17.504437 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:17 crc kubenswrapper[4986]: I1203 12:56:17.504453 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:17Z","lastTransitionTime":"2025-12-03T12:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:17 crc kubenswrapper[4986]: I1203 12:56:17.606563 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:17 crc kubenswrapper[4986]: I1203 12:56:17.606620 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:17 crc kubenswrapper[4986]: I1203 12:56:17.606640 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:17 crc kubenswrapper[4986]: I1203 12:56:17.606661 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:17 crc kubenswrapper[4986]: I1203 12:56:17.606676 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:17Z","lastTransitionTime":"2025-12-03T12:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:17 crc kubenswrapper[4986]: I1203 12:56:17.693713 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:56:17 crc kubenswrapper[4986]: E1203 12:56:17.693887 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-03 12:56:33.693850758 +0000 UTC m=+53.160281979 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:56:17 crc kubenswrapper[4986]: I1203 12:56:17.694006 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:56:17 crc kubenswrapper[4986]: I1203 12:56:17.694219 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ea24f625-ded4-4e37-a23b-f96fe691b0dd-metrics-certs\") pod \"network-metrics-daemon-rl2mt\" (UID: \"ea24f625-ded4-4e37-a23b-f96fe691b0dd\") " pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:56:17 crc kubenswrapper[4986]: E1203 12:56:17.694242 4986 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 12:56:17 crc kubenswrapper[4986]: I1203 12:56:17.694320 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") 
" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:56:17 crc kubenswrapper[4986]: E1203 12:56:17.694367 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 12:56:33.69433912 +0000 UTC m=+53.160770351 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 12:56:17 crc kubenswrapper[4986]: E1203 12:56:17.694488 4986 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 12:56:17 crc kubenswrapper[4986]: E1203 12:56:17.694508 4986 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 12:56:17 crc kubenswrapper[4986]: E1203 12:56:17.694584 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 12:56:33.694561605 +0000 UTC m=+53.160992836 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 12:56:17 crc kubenswrapper[4986]: E1203 12:56:17.694629 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea24f625-ded4-4e37-a23b-f96fe691b0dd-metrics-certs podName:ea24f625-ded4-4e37-a23b-f96fe691b0dd nodeName:}" failed. No retries permitted until 2025-12-03 12:56:19.694598926 +0000 UTC m=+39.161030157 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ea24f625-ded4-4e37-a23b-f96fe691b0dd-metrics-certs") pod "network-metrics-daemon-rl2mt" (UID: "ea24f625-ded4-4e37-a23b-f96fe691b0dd") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 12:56:17 crc kubenswrapper[4986]: I1203 12:56:17.709793 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:17 crc kubenswrapper[4986]: I1203 12:56:17.709850 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:17 crc kubenswrapper[4986]: I1203 12:56:17.709864 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:17 crc kubenswrapper[4986]: I1203 12:56:17.709883 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:17 crc kubenswrapper[4986]: I1203 12:56:17.709897 4986 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:17Z","lastTransitionTime":"2025-12-03T12:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:17 crc kubenswrapper[4986]: I1203 12:56:17.795104 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:56:17 crc kubenswrapper[4986]: I1203 12:56:17.795176 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:56:17 crc kubenswrapper[4986]: E1203 12:56:17.795354 4986 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 12:56:17 crc kubenswrapper[4986]: E1203 12:56:17.795388 4986 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 12:56:17 crc kubenswrapper[4986]: E1203 12:56:17.795405 4986 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not 
registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 12:56:17 crc kubenswrapper[4986]: E1203 12:56:17.795462 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-03 12:56:33.795440988 +0000 UTC m=+53.261872189 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 12:56:17 crc kubenswrapper[4986]: E1203 12:56:17.795354 4986 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 12:56:17 crc kubenswrapper[4986]: E1203 12:56:17.795504 4986 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 12:56:17 crc kubenswrapper[4986]: E1203 12:56:17.795519 4986 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 12:56:17 crc kubenswrapper[4986]: E1203 12:56:17.795566 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 
nodeName:}" failed. No retries permitted until 2025-12-03 12:56:33.795552271 +0000 UTC m=+53.261983472 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 12:56:17 crc kubenswrapper[4986]: I1203 12:56:17.812258 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:17 crc kubenswrapper[4986]: I1203 12:56:17.812329 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:17 crc kubenswrapper[4986]: I1203 12:56:17.812341 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:17 crc kubenswrapper[4986]: I1203 12:56:17.812358 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:17 crc kubenswrapper[4986]: I1203 12:56:17.812371 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:17Z","lastTransitionTime":"2025-12-03T12:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:17 crc kubenswrapper[4986]: I1203 12:56:17.914552 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:17 crc kubenswrapper[4986]: I1203 12:56:17.914594 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:17 crc kubenswrapper[4986]: I1203 12:56:17.914604 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:17 crc kubenswrapper[4986]: I1203 12:56:17.914641 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:17 crc kubenswrapper[4986]: I1203 12:56:17.914653 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:17Z","lastTransitionTime":"2025-12-03T12:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:17 crc kubenswrapper[4986]: I1203 12:56:17.943362 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:56:17 crc kubenswrapper[4986]: I1203 12:56:17.943449 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:56:17 crc kubenswrapper[4986]: I1203 12:56:17.943508 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:56:17 crc kubenswrapper[4986]: E1203 12:56:17.943689 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:56:17 crc kubenswrapper[4986]: I1203 12:56:17.943716 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:56:17 crc kubenswrapper[4986]: E1203 12:56:17.943834 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:56:17 crc kubenswrapper[4986]: E1203 12:56:17.943925 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:56:17 crc kubenswrapper[4986]: E1203 12:56:17.944034 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl2mt" podUID="ea24f625-ded4-4e37-a23b-f96fe691b0dd" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.016956 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.016993 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.017004 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.017020 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.017031 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:18Z","lastTransitionTime":"2025-12-03T12:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.119343 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.119386 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.119401 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.119419 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.119434 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:18Z","lastTransitionTime":"2025-12-03T12:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.193760 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9nf52_d3a45156-295b-4093-80e7-2059f81ddbd7/ovnkube-controller/1.log" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.194503 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9nf52_d3a45156-295b-4093-80e7-2059f81ddbd7/ovnkube-controller/0.log" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.201944 4986 generic.go:334] "Generic (PLEG): container finished" podID="d3a45156-295b-4093-80e7-2059f81ddbd7" containerID="6dcd8afad237a7ca1c7f5e0d37bec9ab99fd61ac7be8b3604a8f5d0bb6a2181e" exitCode=1 Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.202028 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" event={"ID":"d3a45156-295b-4093-80e7-2059f81ddbd7","Type":"ContainerDied","Data":"6dcd8afad237a7ca1c7f5e0d37bec9ab99fd61ac7be8b3604a8f5d0bb6a2181e"} Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.202073 4986 scope.go:117] "RemoveContainer" containerID="7c1ecd3792204c942f231154736fda8a831651f3cc15fd115a2ff56fc88fc0ca" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.202979 4986 scope.go:117] "RemoveContainer" containerID="6dcd8afad237a7ca1c7f5e0d37bec9ab99fd61ac7be8b3604a8f5d0bb6a2181e" Dec 03 12:56:18 crc kubenswrapper[4986]: E1203 12:56:18.203220 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-9nf52_openshift-ovn-kubernetes(d3a45156-295b-4093-80e7-2059f81ddbd7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" podUID="d3a45156-295b-4093-80e7-2059f81ddbd7" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.204235 4986 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2p8xn" event={"ID":"957171e6-7b64-4d92-b690-342e3251ed8f","Type":"ContainerStarted","Data":"d417b35a888f61c40a58512bcff329faed85f3b4f8d1d29a7c6f2f43634a9090"} Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.216896 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:18Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.221616 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.221655 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.221666 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.221683 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.221706 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:18Z","lastTransitionTime":"2025-12-03T12:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.232332 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:18Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.242902 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8lcrn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75e4c223-9952-4d1b-b83b-5c45cfc51432\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b2af5d43c7842f129b6521e5b3b1963f6060b3e0c552a771c7df6a2a9ef867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnr67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8lcrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:18Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.251613 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fszqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76bcf5c-6a62-4360-826d-ecac337c88ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d94370a14317ade02cb805783e09ffcd8c6ff07f35e9124e302028e6dfa3c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhtrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fszqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:18Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.263832 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-px97g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97196b6d-75cc-4de4-8805-f9ce3fbd4230\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c28bf1aff27ba408262b5fc561e084ff3813ce95a7c14b57fea24d409eb33bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v4j5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-px97g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:18Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.284118 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3a45156-295b-4093-80e7-2059f81ddbd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c79f84fc019d109bac446dd058b4bafb0b516a13fae47b1924e4f7690658ba8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be6405facfa7726cbe56c48184cba0123774e8e34d3ceb9e18daed6f35560c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d77d078572dbe7f11e91ece0d25e76bd17004167dabce267a2b568e798aae158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2843fb3fcbadb926b104a9f7c4084468b36293aebed6c692c29419a0457b62a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d94eca13821749d7e15a350c411a831d2887f4211ed092a4fb57e26e49fcd9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ff8c1f1602e57602cae3f46dd3d2bf8d7abad6bd0952f4347152678e8606d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dcd8afad237a7ca1c7f5e0d37bec9ab99fd61ac7be8b3604a8f5d0bb6a2181e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c1ecd3792204c942f231154736fda8a831651f3cc15fd115a2ff56fc88fc0ca\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T12:56:14Z\\\",\\\"message\\\":\\\"go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 12:56:14.619391 6322 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 12:56:14.619499 6322 
handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 12:56:14.619513 6322 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 12:56:14.619531 6322 factory.go:656] Stopping watch factory\\\\nI1203 12:56:14.619547 6322 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 12:56:14.619554 6322 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 12:56:14.619678 6322 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 12:56:14.620055 6322 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 12:56:14.620184 6322 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 12:56:14.620382 6322 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 12:56:14.620561 6322 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dcd8afad237a7ca1c7f5e0d37bec9ab99fd61ac7be8b3604a8f5d0bb6a2181e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T12:56:17Z\\\",\\\"message\\\":\\\"a-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-service-ca-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.40\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1203 12:56:15.918790 6490 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error 
occurred:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\
":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dfc7520e7cb1e09eb10bf9615dbae11d7e73474f1b73eed481d18269b65aef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c47
66080e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9nf52\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:18Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.297221 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2p8xn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"957171e6-7b64-4d92-b690-342e3251ed8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbhn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbhn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2p8xn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:18Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.307477 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not 
be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:18Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.321797 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0d71a3ffdbe593936c83e92e92e286eaff092333bb3dd54cb5bc2825ca46fbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402aeaef03d8d3c684f0d8e7fe8a080bc02d761104e3c1095b3233aade2b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:18Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.324450 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.324499 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.324513 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.324530 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.324543 4986 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:18Z","lastTransitionTime":"2025-12-03T12:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.337763 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rkgd9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a939a03f-0eec-49cb-9b23-40e359e427d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b854d526fa4ac2b270c7f27c1b35b1ff0fbc671e40a928649b578f926d6b51b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd633caedf88a1bec633cef61b8b30ae422b8c8d91a012395831a09d398f0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cd633caedf88a1bec633cef61b8b30ae422b8c8d91a012395831a09d398f0f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b9e8e1f9326660fe6d7c89af18c5f7dd1134e9f2e866da8ba9c795e13cb57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b9e8e1f9326660fe6d7c89af18c5f7dd1134e9f2e866da8ba9c795e13cb57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ef54ee9f22072e2b437f38f6d06f24e6f1da02555505bc65c488b5818c87a0\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ef54ee9f22072e2b437f38f6d06f24e6f1da02555505bc65c488b5818c87a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35fecc2332fdab60719b864653301c5e8322b13ace401a0ff016ff3dc09cd3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35fecc2332fdab60719b864653301c5e8322b13ace401a0ff016ff3dc09cd3d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rkgd9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:18Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.352658 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rl2mt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea24f625-ded4-4e37-a23b-f96fe691b0dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhr82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhr82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rl2mt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:18Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.364422 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee22d4f3-8488-4497-a757-a6dd51e84539\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4fc247caf16eab425363e12219a76c40022f0daf0bc9c8b52f4eb8e3726f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86d62d991d6516ed42f7cad655018e1ebc85c8c1b100d624c1c2e5e8f616cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ba9819f4f1746f03a558617fc6ee14f5627c37c362d1198a303b6f1854c7bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79cba0b7394bf25be648502fc84a916d8f2fa0949edce20b5a748046ce9b6f75\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:18Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.393002 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d51894dd-38f0-413a-adaf-22d196474bed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa98006a8736394300cdb9ef3331d2a23e1b3af2e2203f757c51ca9c700d26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7d24b5f1325cfb2d210cb9f311026ab370c91deecc5d0d951bcd4eda79816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9bb9d940d346f9895e24d98173668ea2d0e68fd3035e5b03475bbc936582958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7fa0d673f037c8edcc75ee30b1bbeea73da55d5459347dacc9251fffbb38be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4007e9ed33227938a5a3656fd74958aef599418fec4cd2048dfa7dd88924dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:18Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.408061 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b923e52e6284c8adca87e7d3d801a432a676e4389d7ab3c0cba584c5b7d96f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:18Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.420264 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d66d06e79ffb2ecd12acc0380e5e4839904147f9194e48277ca667df13a0cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T12:56:18Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.427346 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.427434 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.427512 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.427552 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.427579 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:18Z","lastTransitionTime":"2025-12-03T12:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.436592 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://535b7b05dbe9b47808f3d358a1f370b696a5d5bb4dd96b049ac2cbe3d18ea065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ab87b22153ba31fa79d2d210e695b274c2ddc878ad679c605aa8fb716534d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xggpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:18Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.452252 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"231ed674-1d19-4ee2-b29c-a1b7453ed531\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T12:56:02Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1203 12:56:01.437086 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 12:56:01.437205 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 12:56:01.438166 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972143202/tls.crt::/tmp/serving-cert-1972143202/tls.key\\\\\\\"\\\\nI1203 12:56:02.167657 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 12:56:02.174337 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 12:56:02.174360 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 12:56:02.174377 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 12:56:02.174382 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 12:56:02.178753 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 12:56:02.179168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 12:56:02.179184 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 12:56:02.179187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 12:56:02.179190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 12:56:02.178775 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1203 12:56:02.181220 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458b
f21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:18Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.530981 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.531046 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.531066 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.531091 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.531112 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:18Z","lastTransitionTime":"2025-12-03T12:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.634018 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.634073 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.634088 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.634108 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.634123 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:18Z","lastTransitionTime":"2025-12-03T12:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.736735 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.736785 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.736800 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.736818 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.736831 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:18Z","lastTransitionTime":"2025-12-03T12:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.839521 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.839592 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.839614 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.839644 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.839666 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:18Z","lastTransitionTime":"2025-12-03T12:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.942271 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.942386 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.942403 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.942427 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:18 crc kubenswrapper[4986]: I1203 12:56:18.942445 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:18Z","lastTransitionTime":"2025-12-03T12:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:19 crc kubenswrapper[4986]: I1203 12:56:19.045133 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:19 crc kubenswrapper[4986]: I1203 12:56:19.045184 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:19 crc kubenswrapper[4986]: I1203 12:56:19.045195 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:19 crc kubenswrapper[4986]: I1203 12:56:19.045216 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:19 crc kubenswrapper[4986]: I1203 12:56:19.045235 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:19Z","lastTransitionTime":"2025-12-03T12:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:19 crc kubenswrapper[4986]: I1203 12:56:19.148098 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:19 crc kubenswrapper[4986]: I1203 12:56:19.148160 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:19 crc kubenswrapper[4986]: I1203 12:56:19.148177 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:19 crc kubenswrapper[4986]: I1203 12:56:19.148201 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:19 crc kubenswrapper[4986]: I1203 12:56:19.148219 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:19Z","lastTransitionTime":"2025-12-03T12:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:19 crc kubenswrapper[4986]: I1203 12:56:19.251447 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:19 crc kubenswrapper[4986]: I1203 12:56:19.251509 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:19 crc kubenswrapper[4986]: I1203 12:56:19.251523 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:19 crc kubenswrapper[4986]: I1203 12:56:19.251544 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:19 crc kubenswrapper[4986]: I1203 12:56:19.251559 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:19Z","lastTransitionTime":"2025-12-03T12:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:19 crc kubenswrapper[4986]: I1203 12:56:19.354752 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:19 crc kubenswrapper[4986]: I1203 12:56:19.354816 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:19 crc kubenswrapper[4986]: I1203 12:56:19.354839 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:19 crc kubenswrapper[4986]: I1203 12:56:19.354868 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:19 crc kubenswrapper[4986]: I1203 12:56:19.354891 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:19Z","lastTransitionTime":"2025-12-03T12:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:19 crc kubenswrapper[4986]: I1203 12:56:19.457730 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:19 crc kubenswrapper[4986]: I1203 12:56:19.457798 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:19 crc kubenswrapper[4986]: I1203 12:56:19.457825 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:19 crc kubenswrapper[4986]: I1203 12:56:19.457854 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:19 crc kubenswrapper[4986]: I1203 12:56:19.457876 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:19Z","lastTransitionTime":"2025-12-03T12:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:19 crc kubenswrapper[4986]: I1203 12:56:19.561397 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:19 crc kubenswrapper[4986]: I1203 12:56:19.561481 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:19 crc kubenswrapper[4986]: I1203 12:56:19.561505 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:19 crc kubenswrapper[4986]: I1203 12:56:19.561534 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:19 crc kubenswrapper[4986]: I1203 12:56:19.561556 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:19Z","lastTransitionTime":"2025-12-03T12:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:19 crc kubenswrapper[4986]: I1203 12:56:19.664335 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:19 crc kubenswrapper[4986]: I1203 12:56:19.664402 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:19 crc kubenswrapper[4986]: I1203 12:56:19.664424 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:19 crc kubenswrapper[4986]: I1203 12:56:19.664454 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:19 crc kubenswrapper[4986]: I1203 12:56:19.664476 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:19Z","lastTransitionTime":"2025-12-03T12:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:19 crc kubenswrapper[4986]: I1203 12:56:19.713087 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ea24f625-ded4-4e37-a23b-f96fe691b0dd-metrics-certs\") pod \"network-metrics-daemon-rl2mt\" (UID: \"ea24f625-ded4-4e37-a23b-f96fe691b0dd\") " pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:56:19 crc kubenswrapper[4986]: E1203 12:56:19.713259 4986 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 12:56:19 crc kubenswrapper[4986]: E1203 12:56:19.713361 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea24f625-ded4-4e37-a23b-f96fe691b0dd-metrics-certs podName:ea24f625-ded4-4e37-a23b-f96fe691b0dd nodeName:}" failed. No retries permitted until 2025-12-03 12:56:23.713342697 +0000 UTC m=+43.179773888 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ea24f625-ded4-4e37-a23b-f96fe691b0dd-metrics-certs") pod "network-metrics-daemon-rl2mt" (UID: "ea24f625-ded4-4e37-a23b-f96fe691b0dd") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 12:56:19 crc kubenswrapper[4986]: I1203 12:56:19.767782 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:19 crc kubenswrapper[4986]: I1203 12:56:19.767846 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:19 crc kubenswrapper[4986]: I1203 12:56:19.767880 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:19 crc kubenswrapper[4986]: I1203 12:56:19.767911 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:19 crc kubenswrapper[4986]: I1203 12:56:19.767932 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:19Z","lastTransitionTime":"2025-12-03T12:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:19 crc kubenswrapper[4986]: I1203 12:56:19.871023 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:19 crc kubenswrapper[4986]: I1203 12:56:19.871072 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:19 crc kubenswrapper[4986]: I1203 12:56:19.871083 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:19 crc kubenswrapper[4986]: I1203 12:56:19.871170 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:19 crc kubenswrapper[4986]: I1203 12:56:19.871185 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:19Z","lastTransitionTime":"2025-12-03T12:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:19 crc kubenswrapper[4986]: I1203 12:56:19.943471 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:56:19 crc kubenswrapper[4986]: I1203 12:56:19.943515 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:56:19 crc kubenswrapper[4986]: I1203 12:56:19.943556 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:56:19 crc kubenswrapper[4986]: I1203 12:56:19.943471 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:56:19 crc kubenswrapper[4986]: E1203 12:56:19.943685 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl2mt" podUID="ea24f625-ded4-4e37-a23b-f96fe691b0dd" Dec 03 12:56:19 crc kubenswrapper[4986]: E1203 12:56:19.943833 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:56:19 crc kubenswrapper[4986]: E1203 12:56:19.943975 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:56:19 crc kubenswrapper[4986]: E1203 12:56:19.944101 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:56:19 crc kubenswrapper[4986]: I1203 12:56:19.978352 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:19 crc kubenswrapper[4986]: I1203 12:56:19.978417 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:19 crc kubenswrapper[4986]: I1203 12:56:19.978436 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:19 crc kubenswrapper[4986]: I1203 12:56:19.978458 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:19 crc kubenswrapper[4986]: I1203 12:56:19.978475 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:19Z","lastTransitionTime":"2025-12-03T12:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:20 crc kubenswrapper[4986]: I1203 12:56:20.081869 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:20 crc kubenswrapper[4986]: I1203 12:56:20.081924 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:20 crc kubenswrapper[4986]: I1203 12:56:20.081943 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:20 crc kubenswrapper[4986]: I1203 12:56:20.081968 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:20 crc kubenswrapper[4986]: I1203 12:56:20.081985 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:20Z","lastTransitionTime":"2025-12-03T12:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:20 crc kubenswrapper[4986]: I1203 12:56:20.185014 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:20 crc kubenswrapper[4986]: I1203 12:56:20.185076 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:20 crc kubenswrapper[4986]: I1203 12:56:20.185089 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:20 crc kubenswrapper[4986]: I1203 12:56:20.185105 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:20 crc kubenswrapper[4986]: I1203 12:56:20.185115 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:20Z","lastTransitionTime":"2025-12-03T12:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:20 crc kubenswrapper[4986]: I1203 12:56:20.288172 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:20 crc kubenswrapper[4986]: I1203 12:56:20.288248 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:20 crc kubenswrapper[4986]: I1203 12:56:20.288327 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:20 crc kubenswrapper[4986]: I1203 12:56:20.288360 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:20 crc kubenswrapper[4986]: I1203 12:56:20.288381 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:20Z","lastTransitionTime":"2025-12-03T12:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:20 crc kubenswrapper[4986]: I1203 12:56:20.395420 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:20 crc kubenswrapper[4986]: I1203 12:56:20.395477 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:20 crc kubenswrapper[4986]: I1203 12:56:20.395494 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:20 crc kubenswrapper[4986]: I1203 12:56:20.395517 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:20 crc kubenswrapper[4986]: I1203 12:56:20.395535 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:20Z","lastTransitionTime":"2025-12-03T12:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:20 crc kubenswrapper[4986]: I1203 12:56:20.498809 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:20 crc kubenswrapper[4986]: I1203 12:56:20.498854 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:20 crc kubenswrapper[4986]: I1203 12:56:20.498867 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:20 crc kubenswrapper[4986]: I1203 12:56:20.498883 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:20 crc kubenswrapper[4986]: I1203 12:56:20.498893 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:20Z","lastTransitionTime":"2025-12-03T12:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:20 crc kubenswrapper[4986]: I1203 12:56:20.602365 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:20 crc kubenswrapper[4986]: I1203 12:56:20.602686 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:20 crc kubenswrapper[4986]: I1203 12:56:20.602825 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:20 crc kubenswrapper[4986]: I1203 12:56:20.602959 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:20 crc kubenswrapper[4986]: I1203 12:56:20.603072 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:20Z","lastTransitionTime":"2025-12-03T12:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:20 crc kubenswrapper[4986]: I1203 12:56:20.711423 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:20 crc kubenswrapper[4986]: I1203 12:56:20.711483 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:20 crc kubenswrapper[4986]: I1203 12:56:20.711505 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:20 crc kubenswrapper[4986]: I1203 12:56:20.711543 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:20 crc kubenswrapper[4986]: I1203 12:56:20.711564 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:20Z","lastTransitionTime":"2025-12-03T12:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:20 crc kubenswrapper[4986]: I1203 12:56:20.814382 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:20 crc kubenswrapper[4986]: I1203 12:56:20.814419 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:20 crc kubenswrapper[4986]: I1203 12:56:20.814429 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:20 crc kubenswrapper[4986]: I1203 12:56:20.814445 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:20 crc kubenswrapper[4986]: I1203 12:56:20.814456 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:20Z","lastTransitionTime":"2025-12-03T12:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:20 crc kubenswrapper[4986]: I1203 12:56:20.917444 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:20 crc kubenswrapper[4986]: I1203 12:56:20.917556 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:20 crc kubenswrapper[4986]: I1203 12:56:20.917582 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:20 crc kubenswrapper[4986]: I1203 12:56:20.917613 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:20 crc kubenswrapper[4986]: I1203 12:56:20.917637 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:20Z","lastTransitionTime":"2025-12-03T12:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:20 crc kubenswrapper[4986]: I1203 12:56:20.960796 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2p8xn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"957171e6-7b64-4d92-b690-342e3251ed8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbhn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbhn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2p8xn\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:20Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:20 crc kubenswrapper[4986]: I1203 12:56:20.975612 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fszqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76bcf5c-6a62-4360-826d-ecac337c88ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d94370a14317ade02cb805783e09ffcd8c6ff07f35e9124e302028e6dfa3c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
5-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhtrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fszqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:20Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.006065 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-px97g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97196b6d-75cc-4de4-8805-f9ce3fbd4230\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c28bf1aff27ba408262b5fc561e084ff3813ce95a7c14b57fea24d409eb33bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v4j5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-px97g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:20Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.020684 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:21 crc 
kubenswrapper[4986]: I1203 12:56:21.020734 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.020752 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.020776 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.020792 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:21Z","lastTransitionTime":"2025-12-03T12:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.028366 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3a45156-295b-4093-80e7-2059f81ddbd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c79f84fc019d109bac446dd058b4bafb0b516a13fae47b1924e4f7690658ba8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be6405facfa7726cbe56c48184cba0123774e8e34d3ceb9e18daed6f35560c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d77d078572dbe7f11e91ece0d25e76bd17004167dabce267a2b568e798aae158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2843fb3fcbadb926b104a9f7c4084468b36293aebed6c692c29419a0457b62a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d94eca13821749d7e15a350c411a831d2887f4211ed092a4fb57e26e49fcd9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ff8c1f1602e57602cae3f46dd3d2bf8d7abad6bd0952f4347152678e8606d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dcd8afad237a7ca1c7f5e0d37bec9ab99fd61ac7be8b3604a8f5d0bb6a2181e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c1ecd3792204c942f231154736fda8a831651f3cc15fd115a2ff56fc88fc0ca\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T12:56:14Z\\\",\\\"message\\\":\\\"go:311] Stopping reflector *v1.Pod (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI1203 12:56:14.619391 6322 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 12:56:14.619499 6322 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 12:56:14.619513 6322 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 12:56:14.619531 6322 factory.go:656] Stopping watch factory\\\\nI1203 12:56:14.619547 6322 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 12:56:14.619554 6322 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 12:56:14.619678 6322 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 12:56:14.620055 6322 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 12:56:14.620184 6322 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 12:56:14.620382 6322 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 12:56:14.620561 6322 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dcd8afad237a7ca1c7f5e0d37bec9ab99fd61ac7be8b3604a8f5d0bb6a2181e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T12:56:17Z\\\",\\\"message\\\":\\\"a-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, 
Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-service-ca-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.40\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1203 12:56:15.918790 6490 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error 
occurred:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\
":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dfc7520e7cb1e09eb10bf9615dbae11d7e73474f1b73eed481d18269b65aef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c47
66080e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9nf52\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:21Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.043425 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rl2mt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea24f625-ded4-4e37-a23b-f96fe691b0dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhr82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhr82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rl2mt\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:21Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.066263 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee22d4f3-8488-4497-a757-a6dd51e84539\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4fc247caf16eab425363e12219a76c40022f0daf0bc9c8b52f4eb8e3726f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86d62d991d6516ed42f7cad655018e1ebc85c8c1b100d624c1c2e5e8f616cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ba9819f4f1746f03a558617fc6ee14f5627c37c362d1198a303b6f1854c7bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79cba0b7394bf25be648502fc84a916d8f2fa0949edce20b5a748046ce9b6f75\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:21Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.088743 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:21Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.099344 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.100131 4986 scope.go:117] "RemoveContainer" containerID="6dcd8afad237a7ca1c7f5e0d37bec9ab99fd61ac7be8b3604a8f5d0bb6a2181e" Dec 03 12:56:21 crc kubenswrapper[4986]: E1203 12:56:21.100414 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-9nf52_openshift-ovn-kubernetes(d3a45156-295b-4093-80e7-2059f81ddbd7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" podUID="d3a45156-295b-4093-80e7-2059f81ddbd7" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.104445 4986 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0d71a3ffdbe593936c83e92e92e286eaff092333bb3dd54cb5bc2825ca46fbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402aeaef03d8d3c684f0d8e7fe8a080bc02d761104e3c1095b3233aade2b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:21Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.117301 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rkgd9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a939a03f-0eec-49cb-9b23-40e359e427d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b854d526fa4ac2b270c7f27c1b35b1ff0fbc671e40a928649b578f926d6b51b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd633caedf88a1bec633cef61b8b30ae422b8c8d91a012395831a09d398f0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cd633caedf88a1bec633cef61b8b30ae422b8c8d91a012395831a09d398f0f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b9e
8e1f9326660fe6d7c89af18c5f7dd1134e9f2e866da8ba9c795e13cb57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b9e8e1f9326660fe6d7c89af18c5f7dd1134e9f2e866da8ba9c795e13cb57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ef54ee9f22072e2b437f38f6d06f24e6f1da02555505bc65c488b5818c87a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ef54ee9f22072e2b437f38f6d06f24e6f1da02555505bc65c488b5818c87a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:12Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35fecc2332fdab60719b864653301c5e8322b13ace401a0ff016ff3dc09cd3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35fecc2332fdab60719b864653301c5e8322b13ace401a0ff016ff3dc09cd3d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rkgd9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:21Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.122874 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.122920 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.122933 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.122949 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.122961 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:21Z","lastTransitionTime":"2025-12-03T12:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.127437 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://535b7b05dbe9b47808f3d358a1f370b696a5d5bb4dd96b049ac2cbe3d18ea065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ab87b22153ba31fa79d2d210e695b274c2ddc878ad679c605aa8fb716534d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xggpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:21Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.138874 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"231ed674-1d19-4ee2-b29c-a1b7453ed531\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T12:56:02Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1203 12:56:01.437086 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 12:56:01.437205 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 12:56:01.438166 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972143202/tls.crt::/tmp/serving-cert-1972143202/tls.key\\\\\\\"\\\\nI1203 12:56:02.167657 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 12:56:02.174337 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 12:56:02.174360 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 12:56:02.174377 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 12:56:02.174382 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 12:56:02.178753 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 12:56:02.179168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 12:56:02.179184 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 12:56:02.179187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 12:56:02.179190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 12:56:02.178775 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1203 12:56:02.181220 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458b
f21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:21Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.161032 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d51894dd-38f0-413a-adaf-22d196474bed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa98006a8736394300cdb9ef3331d2a23e1b3af2e2203f757c51ca9c700d26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7d24b5f1325cfb2d210cb9f311026ab370c91deecc5d0d951bcd4eda79816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9bb9d940d346f9895e24d98173668ea2d0e68fd3035e5b03475bbc936582958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7fa0d673f037c8edcc75ee30b1bbeea73da55d5459347dacc9251fffbb38be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4007e9ed33227938a5a3656fd74958aef599418fec4cd2048dfa7dd88924dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:21Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.177498 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b923e52e6284c8adca87e7d3d801a432a676e4389d7ab3c0cba584c5b7d96f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:21Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.190340 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d66d06e79ffb2ecd12acc0380e5e4839904147f9194e48277ca667df13a0cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T12:56:21Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.202755 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:21Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.213802 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9nf52_d3a45156-295b-4093-80e7-2059f81ddbd7/ovnkube-controller/1.log" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.216479 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:21Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.218054 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2p8xn" event={"ID":"957171e6-7b64-4d92-b690-342e3251ed8f","Type":"ContainerStarted","Data":"27878a8de45ccf0e02719e2a3f9d1339d8910db5947d423a3e7400359cc96253"} Dec 03 12:56:21 crc 
kubenswrapper[4986]: I1203 12:56:21.225856 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.225902 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.225914 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.225929 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.225942 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:21Z","lastTransitionTime":"2025-12-03T12:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.228647 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8lcrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75e4c223-9952-4d1b-b83b-5c45cfc51432\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b2af5d43c7842f129b6521e5b3b1963f6060b3e0c552a771c7df6a2a9ef867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnr67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8lcrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:21Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.246736 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d51894dd-38f0-413a-adaf-22d196474bed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa98006a8736394300cdb9ef3331d2a23e1b3af2e2203f757c51ca9c700d26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7d24b5f1325cfb2d210cb9f311026ab370c91deecc5d0d951bcd4eda79816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9bb9d940d346f9895e24d98173668ea2d0e68fd3035e5b03475bbc936582958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1
847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7fa0d673f037c8edcc75ee30b1bbeea73da55d5459347dacc9251fffbb38be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4007e9ed33227938a5a3656fd74958aef599418fec4cd2048dfa7dd88924dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54fb6
e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:21Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.259365 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b923e52e6284c8adca87e7d3d801a432a676e4389d7ab3c0cba584c5b7d96f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:21Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.273907 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d66d06e79ffb2ecd12acc0380e5e4839904147f9194e48277ca667df13a0cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:21Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.285562 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://535b7b05dbe9b47808f3d358a1f370b696a5d5bb4dd96b049ac2cbe3d18ea065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ab87b22153ba31fa79d2d210e695b274c2ddc878ad679c605aa8fb716534d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xggpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2025-12-03T12:56:21Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.302851 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231ed674-1d19-4ee2-b29c-a1b7453ed531\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/l
og/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operato
r@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T12:56:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 12:56:01.437086 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 12:56:01.437205 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 12:56:01.438166 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972143202/tls.crt::/tmp/serving-cert-1972143202/tls.key\\\\\\\"\\\\nI1203 12:56:02.167657 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 12:56:02.174337 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 12:56:02.174360 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 12:56:02.174377 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 12:56:02.174382 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 12:56:02.178753 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 12:56:02.179168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 12:56:02.179184 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 12:56:02.179187 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 12:56:02.179190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 12:56:02.178775 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 12:56:02.181220 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:21Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.314026 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:21Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.325514 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:21Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.328002 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.328036 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.328044 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.328058 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.328068 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:21Z","lastTransitionTime":"2025-12-03T12:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.337381 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8lcrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75e4c223-9952-4d1b-b83b-5c45cfc51432\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b2af5d43c7842f129b6521e5b3b1963f6060b3e0c552a771c7df6a2a9ef867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/d
ocker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnr67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8lcrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:21Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.346787 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fszqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76bcf5c-6a62-4360-826d-ecac337c88ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d94370a14317ade02cb805783e09ffcd8c6ff07f35e9124e302028e6dfa3c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhtrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fszqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:21Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.357888 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-px97g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97196b6d-75cc-4de4-8805-f9ce3fbd4230\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c28bf1aff27ba408262b5fc561e084ff3813ce95a7c14b57fea24d409eb33bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v4j5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-px97g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:21Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.379132 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3a45156-295b-4093-80e7-2059f81ddbd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c79f84fc019d109bac446dd058b4bafb0b516a13fae47b1924e4f7690658ba8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be6405facfa7726cbe56c48184cba0123774e8e34d3ceb9e18daed6f35560c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d77d078572dbe7f11e91ece0d25e76bd17004167dabce267a2b568e798aae158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2843fb3fcbadb926b104a9f7c4084468b36293aebed6c692c29419a0457b62a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d94eca13821749d7e15a350c411a831d2887f4211ed092a4fb57e26e49fcd9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ff8c1f1602e57602cae3f46dd3d2bf8d7abad6bd0952f4347152678e8606d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dcd8afad237a7ca1c7f5e0d37bec9ab99fd61ac7be8b3604a8f5d0bb6a2181e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dcd8afad237a7ca1c7f5e0d37bec9ab99fd61ac7be8b3604a8f5d0bb6a2181e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T12:56:17Z\\\",\\\"message\\\":\\\"a-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: 
[]services.LB{services.LB{Name:\\\\\\\"Service_openshift-service-ca-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.40\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1203 12:56:15.918790 6490 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9nf52_openshift-ovn-kubernetes(d3a45156-295b-4093-80e7-2059f81ddbd7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dfc7520e7cb1e09eb10bf9615dbae11d7e73474f1b73eed481d18269b65aef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09
757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9nf52\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:21Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.392826 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2p8xn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"957171e6-7b64-4d92-b690-342e3251ed8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d417b35a888f61c40a58512bcff329faed85f3b4f8d1d29a7c6f2f43634a9090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbhn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27878a8de45ccf0e02719e2a3f9d1339d8910
db5947d423a3e7400359cc96253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbhn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2p8xn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:21Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.404329 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:21Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.417614 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0d71a3ffdbe593936c83e92e92e286eaff092333bb3dd54cb5bc2825ca46fbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402aeaef03d8d3c684f0d8e7fe8a080bc02d761104e3c1095b3233aade2b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:21Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.430342 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.430444 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.430468 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.430501 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.430520 4986 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:21Z","lastTransitionTime":"2025-12-03T12:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.434727 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rkgd9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a939a03f-0eec-49cb-9b23-40e359e427d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b854d526fa4ac2b270c7f27c1b35b1ff0fbc671e40a928649b578f926d6b51b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd633caedf88a1bec633cef61b8b30ae422b8c8d91a012395831a09d398f0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cd633caedf88a1bec633cef61b8b30ae422b8c8d91a012395831a09d398f0f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b9e8e1f9326660fe6d7c89af18c5f7dd1134e9f2e866da8ba9c795e13cb57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b9e8e1f9326660fe6d7c89af18c5f7dd1134e9f2e866da8ba9c795e13cb57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ef54ee9f22072e2b437f38f6d06f24e6f1da02555505bc65c488b5818c87a0\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ef54ee9f22072e2b437f38f6d06f24e6f1da02555505bc65c488b5818c87a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35fecc2332fdab60719b864653301c5e8322b13ace401a0ff016ff3dc09cd3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35fecc2332fdab60719b864653301c5e8322b13ace401a0ff016ff3dc09cd3d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rkgd9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:21Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.446236 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rl2mt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea24f625-ded4-4e37-a23b-f96fe691b0dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhr82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhr82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rl2mt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:21Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.458150 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee22d4f3-8488-4497-a757-a6dd51e84539\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4fc247caf16eab425363e12219a76c40022f0daf0bc9c8b52f4eb8e3726f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86d62d991d6516ed42f7cad655018e1ebc85c8c1b100d624c1c2e5e8f616cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ba9819f4f1746f03a558617fc6ee14f5627c37c362d1198a303b6f1854c7bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79cba0b7394bf25be648502fc84a916d8f2fa0949edce20b5a748046ce9b6f75\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:21Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.533060 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.533126 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.533145 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.533170 4986 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.533187 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:21Z","lastTransitionTime":"2025-12-03T12:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.635628 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.635683 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.635699 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.635722 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.635733 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:21Z","lastTransitionTime":"2025-12-03T12:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.746248 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.746304 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.746319 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.746337 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.746350 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:21Z","lastTransitionTime":"2025-12-03T12:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.848636 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.848688 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.848697 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.848711 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.848720 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:21Z","lastTransitionTime":"2025-12-03T12:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.942604 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:56:21 crc kubenswrapper[4986]: E1203 12:56:21.942781 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.943244 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:56:21 crc kubenswrapper[4986]: E1203 12:56:21.943356 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.943423 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:56:21 crc kubenswrapper[4986]: E1203 12:56:21.943495 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl2mt" podUID="ea24f625-ded4-4e37-a23b-f96fe691b0dd" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.943560 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:56:21 crc kubenswrapper[4986]: E1203 12:56:21.943632 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.950632 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.950675 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.950689 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.950704 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:21 crc kubenswrapper[4986]: I1203 12:56:21.950716 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:21Z","lastTransitionTime":"2025-12-03T12:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.006159 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.006199 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.006210 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.006225 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.006238 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:22Z","lastTransitionTime":"2025-12-03T12:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:22 crc kubenswrapper[4986]: E1203 12:56:22.031059 4986 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b28ccd4-3e47-4e26-a3b8-f87de96f7586\\\",\\\"systemUUID\\\":\\\"52e71ce5-258d-4951-aa26-5d4aac0725ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:22Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.036633 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.036693 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.036711 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.036735 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.036753 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:22Z","lastTransitionTime":"2025-12-03T12:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:22 crc kubenswrapper[4986]: E1203 12:56:22.055710 4986 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b28ccd4-3e47-4e26-a3b8-f87de96f7586\\\",\\\"systemUUID\\\":\\\"52e71ce5-258d-4951-aa26-5d4aac0725ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:22Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.059968 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.060031 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.060084 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.060121 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.060384 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:22Z","lastTransitionTime":"2025-12-03T12:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:22 crc kubenswrapper[4986]: E1203 12:56:22.073893 4986 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b28ccd4-3e47-4e26-a3b8-f87de96f7586\\\",\\\"systemUUID\\\":\\\"52e71ce5-258d-4951-aa26-5d4aac0725ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:22Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.076753 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.076807 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.076816 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.076829 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.076838 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:22Z","lastTransitionTime":"2025-12-03T12:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:22 crc kubenswrapper[4986]: E1203 12:56:22.087987 4986 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b28ccd4-3e47-4e26-a3b8-f87de96f7586\\\",\\\"systemUUID\\\":\\\"52e71ce5-258d-4951-aa26-5d4aac0725ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:22Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.091082 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.091135 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.091143 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.091158 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.091167 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:22Z","lastTransitionTime":"2025-12-03T12:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:22 crc kubenswrapper[4986]: E1203 12:56:22.101904 4986 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b28ccd4-3e47-4e26-a3b8-f87de96f7586\\\",\\\"systemUUID\\\":\\\"52e71ce5-258d-4951-aa26-5d4aac0725ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:22Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:22 crc kubenswrapper[4986]: E1203 12:56:22.102013 4986 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.103270 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.103326 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.103336 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.103354 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.103364 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:22Z","lastTransitionTime":"2025-12-03T12:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.205310 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.205357 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.205367 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.205384 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.205398 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:22Z","lastTransitionTime":"2025-12-03T12:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.308684 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.308734 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.308746 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.308762 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.308775 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:22Z","lastTransitionTime":"2025-12-03T12:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.412172 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.412231 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.412247 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.412268 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.412305 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:22Z","lastTransitionTime":"2025-12-03T12:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.514315 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.514357 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.514397 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.514411 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.514420 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:22Z","lastTransitionTime":"2025-12-03T12:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.616920 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.616955 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.616963 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.616976 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.616986 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:22Z","lastTransitionTime":"2025-12-03T12:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.718489 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.718526 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.718536 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.718550 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.718558 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:22Z","lastTransitionTime":"2025-12-03T12:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.820797 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.820829 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.820839 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.820854 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.820898 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:22Z","lastTransitionTime":"2025-12-03T12:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.923976 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.924015 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.924029 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.924050 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:22 crc kubenswrapper[4986]: I1203 12:56:22.924065 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:22Z","lastTransitionTime":"2025-12-03T12:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:23 crc kubenswrapper[4986]: I1203 12:56:23.026784 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:23 crc kubenswrapper[4986]: I1203 12:56:23.026820 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:23 crc kubenswrapper[4986]: I1203 12:56:23.026828 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:23 crc kubenswrapper[4986]: I1203 12:56:23.026841 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:23 crc kubenswrapper[4986]: I1203 12:56:23.026849 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:23Z","lastTransitionTime":"2025-12-03T12:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:23 crc kubenswrapper[4986]: I1203 12:56:23.129210 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:23 crc kubenswrapper[4986]: I1203 12:56:23.129261 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:23 crc kubenswrapper[4986]: I1203 12:56:23.129270 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:23 crc kubenswrapper[4986]: I1203 12:56:23.129311 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:23 crc kubenswrapper[4986]: I1203 12:56:23.129328 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:23Z","lastTransitionTime":"2025-12-03T12:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:23 crc kubenswrapper[4986]: I1203 12:56:23.232214 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:23 crc kubenswrapper[4986]: I1203 12:56:23.232254 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:23 crc kubenswrapper[4986]: I1203 12:56:23.232263 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:23 crc kubenswrapper[4986]: I1203 12:56:23.232292 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:23 crc kubenswrapper[4986]: I1203 12:56:23.232302 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:23Z","lastTransitionTime":"2025-12-03T12:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:23 crc kubenswrapper[4986]: I1203 12:56:23.334430 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:23 crc kubenswrapper[4986]: I1203 12:56:23.334804 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:23 crc kubenswrapper[4986]: I1203 12:56:23.334942 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:23 crc kubenswrapper[4986]: I1203 12:56:23.335090 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:23 crc kubenswrapper[4986]: I1203 12:56:23.335231 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:23Z","lastTransitionTime":"2025-12-03T12:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:23 crc kubenswrapper[4986]: I1203 12:56:23.438539 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:23 crc kubenswrapper[4986]: I1203 12:56:23.438622 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:23 crc kubenswrapper[4986]: I1203 12:56:23.438706 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:23 crc kubenswrapper[4986]: I1203 12:56:23.438740 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:23 crc kubenswrapper[4986]: I1203 12:56:23.438763 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:23Z","lastTransitionTime":"2025-12-03T12:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:23 crc kubenswrapper[4986]: I1203 12:56:23.541128 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:23 crc kubenswrapper[4986]: I1203 12:56:23.541173 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:23 crc kubenswrapper[4986]: I1203 12:56:23.541182 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:23 crc kubenswrapper[4986]: I1203 12:56:23.541198 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:23 crc kubenswrapper[4986]: I1203 12:56:23.541208 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:23Z","lastTransitionTime":"2025-12-03T12:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:23 crc kubenswrapper[4986]: I1203 12:56:23.643393 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:23 crc kubenswrapper[4986]: I1203 12:56:23.643454 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:23 crc kubenswrapper[4986]: I1203 12:56:23.643473 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:23 crc kubenswrapper[4986]: I1203 12:56:23.643496 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:23 crc kubenswrapper[4986]: I1203 12:56:23.643513 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:23Z","lastTransitionTime":"2025-12-03T12:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:23 crc kubenswrapper[4986]: I1203 12:56:23.745443 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:23 crc kubenswrapper[4986]: I1203 12:56:23.745518 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:23 crc kubenswrapper[4986]: I1203 12:56:23.745527 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:23 crc kubenswrapper[4986]: I1203 12:56:23.745539 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:23 crc kubenswrapper[4986]: I1203 12:56:23.745547 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:23Z","lastTransitionTime":"2025-12-03T12:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:23 crc kubenswrapper[4986]: I1203 12:56:23.767723 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ea24f625-ded4-4e37-a23b-f96fe691b0dd-metrics-certs\") pod \"network-metrics-daemon-rl2mt\" (UID: \"ea24f625-ded4-4e37-a23b-f96fe691b0dd\") " pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:56:23 crc kubenswrapper[4986]: E1203 12:56:23.767945 4986 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 12:56:23 crc kubenswrapper[4986]: E1203 12:56:23.768034 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea24f625-ded4-4e37-a23b-f96fe691b0dd-metrics-certs podName:ea24f625-ded4-4e37-a23b-f96fe691b0dd nodeName:}" failed. No retries permitted until 2025-12-03 12:56:31.768012254 +0000 UTC m=+51.234443475 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ea24f625-ded4-4e37-a23b-f96fe691b0dd-metrics-certs") pod "network-metrics-daemon-rl2mt" (UID: "ea24f625-ded4-4e37-a23b-f96fe691b0dd") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 12:56:23 crc kubenswrapper[4986]: I1203 12:56:23.847481 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:23 crc kubenswrapper[4986]: I1203 12:56:23.847536 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:23 crc kubenswrapper[4986]: I1203 12:56:23.847552 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:23 crc kubenswrapper[4986]: I1203 12:56:23.847574 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:23 crc kubenswrapper[4986]: I1203 12:56:23.847589 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:23Z","lastTransitionTime":"2025-12-03T12:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:23 crc kubenswrapper[4986]: I1203 12:56:23.942810 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:56:23 crc kubenswrapper[4986]: I1203 12:56:23.942832 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:56:23 crc kubenswrapper[4986]: I1203 12:56:23.942940 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:56:23 crc kubenswrapper[4986]: E1203 12:56:23.943161 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:56:23 crc kubenswrapper[4986]: I1203 12:56:23.943246 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:56:23 crc kubenswrapper[4986]: E1203 12:56:23.943484 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:56:23 crc kubenswrapper[4986]: E1203 12:56:23.944101 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rl2mt" podUID="ea24f625-ded4-4e37-a23b-f96fe691b0dd" Dec 03 12:56:23 crc kubenswrapper[4986]: E1203 12:56:23.944552 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:56:23 crc kubenswrapper[4986]: I1203 12:56:23.951142 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:23 crc kubenswrapper[4986]: I1203 12:56:23.951207 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:23 crc kubenswrapper[4986]: I1203 12:56:23.951221 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:23 crc kubenswrapper[4986]: I1203 12:56:23.951240 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:23 crc kubenswrapper[4986]: I1203 12:56:23.951251 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:23Z","lastTransitionTime":"2025-12-03T12:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:24 crc kubenswrapper[4986]: I1203 12:56:24.054846 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:24 crc kubenswrapper[4986]: I1203 12:56:24.054909 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:24 crc kubenswrapper[4986]: I1203 12:56:24.054926 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:24 crc kubenswrapper[4986]: I1203 12:56:24.054948 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:24 crc kubenswrapper[4986]: I1203 12:56:24.054967 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:24Z","lastTransitionTime":"2025-12-03T12:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:24 crc kubenswrapper[4986]: I1203 12:56:24.157253 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:24 crc kubenswrapper[4986]: I1203 12:56:24.157328 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:24 crc kubenswrapper[4986]: I1203 12:56:24.157342 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:24 crc kubenswrapper[4986]: I1203 12:56:24.157359 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:24 crc kubenswrapper[4986]: I1203 12:56:24.157371 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:24Z","lastTransitionTime":"2025-12-03T12:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:24 crc kubenswrapper[4986]: I1203 12:56:24.260541 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:24 crc kubenswrapper[4986]: I1203 12:56:24.260587 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:24 crc kubenswrapper[4986]: I1203 12:56:24.260624 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:24 crc kubenswrapper[4986]: I1203 12:56:24.260643 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:24 crc kubenswrapper[4986]: I1203 12:56:24.260654 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:24Z","lastTransitionTime":"2025-12-03T12:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:24 crc kubenswrapper[4986]: I1203 12:56:24.362844 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:24 crc kubenswrapper[4986]: I1203 12:56:24.362882 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:24 crc kubenswrapper[4986]: I1203 12:56:24.362890 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:24 crc kubenswrapper[4986]: I1203 12:56:24.362902 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:24 crc kubenswrapper[4986]: I1203 12:56:24.362911 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:24Z","lastTransitionTime":"2025-12-03T12:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:24 crc kubenswrapper[4986]: I1203 12:56:24.465099 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:24 crc kubenswrapper[4986]: I1203 12:56:24.465141 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:24 crc kubenswrapper[4986]: I1203 12:56:24.465151 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:24 crc kubenswrapper[4986]: I1203 12:56:24.465166 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:24 crc kubenswrapper[4986]: I1203 12:56:24.465177 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:24Z","lastTransitionTime":"2025-12-03T12:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:24 crc kubenswrapper[4986]: I1203 12:56:24.567638 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:24 crc kubenswrapper[4986]: I1203 12:56:24.567689 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:24 crc kubenswrapper[4986]: I1203 12:56:24.567700 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:24 crc kubenswrapper[4986]: I1203 12:56:24.567718 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:24 crc kubenswrapper[4986]: I1203 12:56:24.567731 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:24Z","lastTransitionTime":"2025-12-03T12:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:24 crc kubenswrapper[4986]: I1203 12:56:24.669965 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:24 crc kubenswrapper[4986]: I1203 12:56:24.670018 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:24 crc kubenswrapper[4986]: I1203 12:56:24.670030 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:24 crc kubenswrapper[4986]: I1203 12:56:24.670047 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:24 crc kubenswrapper[4986]: I1203 12:56:24.670058 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:24Z","lastTransitionTime":"2025-12-03T12:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:24 crc kubenswrapper[4986]: I1203 12:56:24.772493 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:24 crc kubenswrapper[4986]: I1203 12:56:24.772555 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:24 crc kubenswrapper[4986]: I1203 12:56:24.772573 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:24 crc kubenswrapper[4986]: I1203 12:56:24.772596 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:24 crc kubenswrapper[4986]: I1203 12:56:24.772613 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:24Z","lastTransitionTime":"2025-12-03T12:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:24 crc kubenswrapper[4986]: I1203 12:56:24.874718 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:24 crc kubenswrapper[4986]: I1203 12:56:24.874810 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:24 crc kubenswrapper[4986]: I1203 12:56:24.874845 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:24 crc kubenswrapper[4986]: I1203 12:56:24.874880 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:24 crc kubenswrapper[4986]: I1203 12:56:24.874902 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:24Z","lastTransitionTime":"2025-12-03T12:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 03 12:56:25 crc kubenswrapper[4986]: I1203 12:56:25.901972 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 12:56:25 crc kubenswrapper[4986]: I1203 12:56:25.902019 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 12:56:25 crc kubenswrapper[4986]: I1203 12:56:25.902030 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 12:56:25 crc kubenswrapper[4986]: I1203 12:56:25.902046 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 12:56:25 crc kubenswrapper[4986]: I1203 12:56:25.902057 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:25Z","lastTransitionTime":"2025-12-03T12:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 12:56:25 crc kubenswrapper[4986]: I1203 12:56:25.942486 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 03 12:56:25 crc kubenswrapper[4986]: I1203 12:56:25.942565 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 03 12:56:25 crc kubenswrapper[4986]: I1203 12:56:25.942595 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl2mt"
Dec 03 12:56:25 crc kubenswrapper[4986]: E1203 12:56:25.942692 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 03 12:56:25 crc kubenswrapper[4986]: I1203 12:56:25.942734 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 03 12:56:25 crc kubenswrapper[4986]: E1203 12:56:25.942864 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 03 12:56:25 crc kubenswrapper[4986]: E1203 12:56:25.942956 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 03 12:56:25 crc kubenswrapper[4986]: E1203 12:56:25.943068 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl2mt" podUID="ea24f625-ded4-4e37-a23b-f96fe691b0dd"
Dec 03 12:56:26 crc kubenswrapper[4986]: I1203 12:56:26.004051 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 12:56:26 crc kubenswrapper[4986]: I1203 12:56:26.004121 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 12:56:26 crc kubenswrapper[4986]: I1203 12:56:26.004134 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 12:56:26 crc kubenswrapper[4986]: I1203 12:56:26.004174 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 12:56:26 crc kubenswrapper[4986]: I1203 12:56:26.004186 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:26Z","lastTransitionTime":"2025-12-03T12:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Has your network provider started?"} Dec 03 12:56:27 crc kubenswrapper[4986]: I1203 12:56:27.859602 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:27 crc kubenswrapper[4986]: I1203 12:56:27.859654 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:27 crc kubenswrapper[4986]: I1203 12:56:27.859665 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:27 crc kubenswrapper[4986]: I1203 12:56:27.859680 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:27 crc kubenswrapper[4986]: I1203 12:56:27.859689 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:27Z","lastTransitionTime":"2025-12-03T12:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:27 crc kubenswrapper[4986]: I1203 12:56:27.943308 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:56:27 crc kubenswrapper[4986]: I1203 12:56:27.943383 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:56:27 crc kubenswrapper[4986]: E1203 12:56:27.943491 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:56:27 crc kubenswrapper[4986]: I1203 12:56:27.943512 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:56:27 crc kubenswrapper[4986]: I1203 12:56:27.943512 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:56:27 crc kubenswrapper[4986]: E1203 12:56:27.943682 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl2mt" podUID="ea24f625-ded4-4e37-a23b-f96fe691b0dd" Dec 03 12:56:27 crc kubenswrapper[4986]: E1203 12:56:27.943804 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:56:27 crc kubenswrapper[4986]: E1203 12:56:27.943888 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:56:27 crc kubenswrapper[4986]: I1203 12:56:27.962121 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:27 crc kubenswrapper[4986]: I1203 12:56:27.962159 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:27 crc kubenswrapper[4986]: I1203 12:56:27.962169 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:27 crc kubenswrapper[4986]: I1203 12:56:27.962185 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:27 crc kubenswrapper[4986]: I1203 12:56:27.962196 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:27Z","lastTransitionTime":"2025-12-03T12:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:28 crc kubenswrapper[4986]: I1203 12:56:28.065103 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:28 crc kubenswrapper[4986]: I1203 12:56:28.065150 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:28 crc kubenswrapper[4986]: I1203 12:56:28.065161 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:28 crc kubenswrapper[4986]: I1203 12:56:28.065179 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:28 crc kubenswrapper[4986]: I1203 12:56:28.065194 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:28Z","lastTransitionTime":"2025-12-03T12:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:28 crc kubenswrapper[4986]: I1203 12:56:28.167644 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:28 crc kubenswrapper[4986]: I1203 12:56:28.167701 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:28 crc kubenswrapper[4986]: I1203 12:56:28.167713 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:28 crc kubenswrapper[4986]: I1203 12:56:28.167728 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:28 crc kubenswrapper[4986]: I1203 12:56:28.167740 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:28Z","lastTransitionTime":"2025-12-03T12:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:28 crc kubenswrapper[4986]: I1203 12:56:28.270156 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:28 crc kubenswrapper[4986]: I1203 12:56:28.270185 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:28 crc kubenswrapper[4986]: I1203 12:56:28.270192 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:28 crc kubenswrapper[4986]: I1203 12:56:28.270206 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:28 crc kubenswrapper[4986]: I1203 12:56:28.270216 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:28Z","lastTransitionTime":"2025-12-03T12:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:28 crc kubenswrapper[4986]: I1203 12:56:28.373373 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:28 crc kubenswrapper[4986]: I1203 12:56:28.373439 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:28 crc kubenswrapper[4986]: I1203 12:56:28.373456 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:28 crc kubenswrapper[4986]: I1203 12:56:28.373482 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:28 crc kubenswrapper[4986]: I1203 12:56:28.373499 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:28Z","lastTransitionTime":"2025-12-03T12:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:28 crc kubenswrapper[4986]: I1203 12:56:28.475258 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:28 crc kubenswrapper[4986]: I1203 12:56:28.475321 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:28 crc kubenswrapper[4986]: I1203 12:56:28.475332 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:28 crc kubenswrapper[4986]: I1203 12:56:28.475345 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:28 crc kubenswrapper[4986]: I1203 12:56:28.475355 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:28Z","lastTransitionTime":"2025-12-03T12:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:28 crc kubenswrapper[4986]: I1203 12:56:28.579160 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:28 crc kubenswrapper[4986]: I1203 12:56:28.579222 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:28 crc kubenswrapper[4986]: I1203 12:56:28.579237 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:28 crc kubenswrapper[4986]: I1203 12:56:28.579258 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:28 crc kubenswrapper[4986]: I1203 12:56:28.579273 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:28Z","lastTransitionTime":"2025-12-03T12:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:28 crc kubenswrapper[4986]: I1203 12:56:28.682010 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:28 crc kubenswrapper[4986]: I1203 12:56:28.682041 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:28 crc kubenswrapper[4986]: I1203 12:56:28.682057 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:28 crc kubenswrapper[4986]: I1203 12:56:28.682072 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:28 crc kubenswrapper[4986]: I1203 12:56:28.682084 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:28Z","lastTransitionTime":"2025-12-03T12:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:28 crc kubenswrapper[4986]: I1203 12:56:28.785440 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:28 crc kubenswrapper[4986]: I1203 12:56:28.785491 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:28 crc kubenswrapper[4986]: I1203 12:56:28.785508 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:28 crc kubenswrapper[4986]: I1203 12:56:28.785531 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:28 crc kubenswrapper[4986]: I1203 12:56:28.785546 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:28Z","lastTransitionTime":"2025-12-03T12:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:28 crc kubenswrapper[4986]: I1203 12:56:28.890630 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:28 crc kubenswrapper[4986]: I1203 12:56:28.890697 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:28 crc kubenswrapper[4986]: I1203 12:56:28.890715 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:28 crc kubenswrapper[4986]: I1203 12:56:28.890739 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:28 crc kubenswrapper[4986]: I1203 12:56:28.890756 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:28Z","lastTransitionTime":"2025-12-03T12:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:28 crc kubenswrapper[4986]: I1203 12:56:28.992947 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:28 crc kubenswrapper[4986]: I1203 12:56:28.993016 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:28 crc kubenswrapper[4986]: I1203 12:56:28.993025 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:28 crc kubenswrapper[4986]: I1203 12:56:28.993042 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:28 crc kubenswrapper[4986]: I1203 12:56:28.993052 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:28Z","lastTransitionTime":"2025-12-03T12:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:29 crc kubenswrapper[4986]: I1203 12:56:29.095548 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:29 crc kubenswrapper[4986]: I1203 12:56:29.095592 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:29 crc kubenswrapper[4986]: I1203 12:56:29.095604 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:29 crc kubenswrapper[4986]: I1203 12:56:29.095621 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:29 crc kubenswrapper[4986]: I1203 12:56:29.095633 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:29Z","lastTransitionTime":"2025-12-03T12:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:29 crc kubenswrapper[4986]: I1203 12:56:29.198074 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:29 crc kubenswrapper[4986]: I1203 12:56:29.198179 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:29 crc kubenswrapper[4986]: I1203 12:56:29.198201 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:29 crc kubenswrapper[4986]: I1203 12:56:29.198223 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:29 crc kubenswrapper[4986]: I1203 12:56:29.198239 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:29Z","lastTransitionTime":"2025-12-03T12:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:29 crc kubenswrapper[4986]: I1203 12:56:29.300907 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:29 crc kubenswrapper[4986]: I1203 12:56:29.301180 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:29 crc kubenswrapper[4986]: I1203 12:56:29.301263 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:29 crc kubenswrapper[4986]: I1203 12:56:29.301374 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:29 crc kubenswrapper[4986]: I1203 12:56:29.301437 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:29Z","lastTransitionTime":"2025-12-03T12:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:29 crc kubenswrapper[4986]: I1203 12:56:29.403183 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:29 crc kubenswrapper[4986]: I1203 12:56:29.403220 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:29 crc kubenswrapper[4986]: I1203 12:56:29.403230 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:29 crc kubenswrapper[4986]: I1203 12:56:29.403244 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:29 crc kubenswrapper[4986]: I1203 12:56:29.403254 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:29Z","lastTransitionTime":"2025-12-03T12:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:29 crc kubenswrapper[4986]: I1203 12:56:29.506245 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:29 crc kubenswrapper[4986]: I1203 12:56:29.506311 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:29 crc kubenswrapper[4986]: I1203 12:56:29.506327 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:29 crc kubenswrapper[4986]: I1203 12:56:29.506347 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:29 crc kubenswrapper[4986]: I1203 12:56:29.506360 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:29Z","lastTransitionTime":"2025-12-03T12:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:29 crc kubenswrapper[4986]: I1203 12:56:29.608259 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:29 crc kubenswrapper[4986]: I1203 12:56:29.608333 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:29 crc kubenswrapper[4986]: I1203 12:56:29.608344 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:29 crc kubenswrapper[4986]: I1203 12:56:29.608361 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:29 crc kubenswrapper[4986]: I1203 12:56:29.608373 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:29Z","lastTransitionTime":"2025-12-03T12:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:29 crc kubenswrapper[4986]: I1203 12:56:29.713811 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:29 crc kubenswrapper[4986]: I1203 12:56:29.713852 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:29 crc kubenswrapper[4986]: I1203 12:56:29.713868 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:29 crc kubenswrapper[4986]: I1203 12:56:29.713890 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:29 crc kubenswrapper[4986]: I1203 12:56:29.713905 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:29Z","lastTransitionTime":"2025-12-03T12:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:29 crc kubenswrapper[4986]: I1203 12:56:29.816517 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:29 crc kubenswrapper[4986]: I1203 12:56:29.816587 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:29 crc kubenswrapper[4986]: I1203 12:56:29.816598 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:29 crc kubenswrapper[4986]: I1203 12:56:29.816612 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:29 crc kubenswrapper[4986]: I1203 12:56:29.816624 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:29Z","lastTransitionTime":"2025-12-03T12:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:29 crc kubenswrapper[4986]: I1203 12:56:29.919138 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:29 crc kubenswrapper[4986]: I1203 12:56:29.919199 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:29 crc kubenswrapper[4986]: I1203 12:56:29.919211 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:29 crc kubenswrapper[4986]: I1203 12:56:29.919226 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:29 crc kubenswrapper[4986]: I1203 12:56:29.919235 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:29Z","lastTransitionTime":"2025-12-03T12:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:29 crc kubenswrapper[4986]: I1203 12:56:29.942716 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:56:29 crc kubenswrapper[4986]: E1203 12:56:29.942885 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:56:29 crc kubenswrapper[4986]: I1203 12:56:29.942975 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:56:29 crc kubenswrapper[4986]: E1203 12:56:29.943051 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:56:29 crc kubenswrapper[4986]: I1203 12:56:29.943123 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:56:29 crc kubenswrapper[4986]: E1203 12:56:29.943216 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl2mt" podUID="ea24f625-ded4-4e37-a23b-f96fe691b0dd" Dec 03 12:56:29 crc kubenswrapper[4986]: I1203 12:56:29.943399 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:56:29 crc kubenswrapper[4986]: E1203 12:56:29.943573 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:56:30 crc kubenswrapper[4986]: I1203 12:56:30.021824 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:30 crc kubenswrapper[4986]: I1203 12:56:30.022396 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:30 crc kubenswrapper[4986]: I1203 12:56:30.022542 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:30 crc kubenswrapper[4986]: I1203 12:56:30.022618 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:30 crc kubenswrapper[4986]: I1203 12:56:30.022682 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:30Z","lastTransitionTime":"2025-12-03T12:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:30 crc kubenswrapper[4986]: I1203 12:56:30.125083 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:30 crc kubenswrapper[4986]: I1203 12:56:30.125155 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:30 crc kubenswrapper[4986]: I1203 12:56:30.125166 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:30 crc kubenswrapper[4986]: I1203 12:56:30.125181 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:30 crc kubenswrapper[4986]: I1203 12:56:30.125190 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:30Z","lastTransitionTime":"2025-12-03T12:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:30 crc kubenswrapper[4986]: I1203 12:56:30.228650 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:30 crc kubenswrapper[4986]: I1203 12:56:30.228731 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:30 crc kubenswrapper[4986]: I1203 12:56:30.228789 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:30 crc kubenswrapper[4986]: I1203 12:56:30.228821 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:30 crc kubenswrapper[4986]: I1203 12:56:30.228844 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:30Z","lastTransitionTime":"2025-12-03T12:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:30 crc kubenswrapper[4986]: I1203 12:56:30.331115 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:30 crc kubenswrapper[4986]: I1203 12:56:30.331162 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:30 crc kubenswrapper[4986]: I1203 12:56:30.331176 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:30 crc kubenswrapper[4986]: I1203 12:56:30.331193 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:30 crc kubenswrapper[4986]: I1203 12:56:30.331205 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:30Z","lastTransitionTime":"2025-12-03T12:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:30 crc kubenswrapper[4986]: I1203 12:56:30.435273 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:30 crc kubenswrapper[4986]: I1203 12:56:30.435382 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:30 crc kubenswrapper[4986]: I1203 12:56:30.435401 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:30 crc kubenswrapper[4986]: I1203 12:56:30.435430 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:30 crc kubenswrapper[4986]: I1203 12:56:30.435453 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:30Z","lastTransitionTime":"2025-12-03T12:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:30 crc kubenswrapper[4986]: I1203 12:56:30.537913 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:30 crc kubenswrapper[4986]: I1203 12:56:30.538031 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:30 crc kubenswrapper[4986]: I1203 12:56:30.538044 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:30 crc kubenswrapper[4986]: I1203 12:56:30.538066 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:30 crc kubenswrapper[4986]: I1203 12:56:30.538080 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:30Z","lastTransitionTime":"2025-12-03T12:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:30 crc kubenswrapper[4986]: I1203 12:56:30.641119 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:30 crc kubenswrapper[4986]: I1203 12:56:30.641195 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:30 crc kubenswrapper[4986]: I1203 12:56:30.641209 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:30 crc kubenswrapper[4986]: I1203 12:56:30.641234 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:30 crc kubenswrapper[4986]: I1203 12:56:30.641249 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:30Z","lastTransitionTime":"2025-12-03T12:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:30 crc kubenswrapper[4986]: I1203 12:56:30.744413 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:30 crc kubenswrapper[4986]: I1203 12:56:30.744463 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:30 crc kubenswrapper[4986]: I1203 12:56:30.744479 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:30 crc kubenswrapper[4986]: I1203 12:56:30.744499 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:30 crc kubenswrapper[4986]: I1203 12:56:30.744516 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:30Z","lastTransitionTime":"2025-12-03T12:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:30 crc kubenswrapper[4986]: I1203 12:56:30.847313 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:30 crc kubenswrapper[4986]: I1203 12:56:30.847363 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:30 crc kubenswrapper[4986]: I1203 12:56:30.847375 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:30 crc kubenswrapper[4986]: I1203 12:56:30.847395 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:30 crc kubenswrapper[4986]: I1203 12:56:30.847409 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:30Z","lastTransitionTime":"2025-12-03T12:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:30 crc kubenswrapper[4986]: I1203 12:56:30.949818 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:30 crc kubenswrapper[4986]: I1203 12:56:30.949870 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:30 crc kubenswrapper[4986]: I1203 12:56:30.949882 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:30 crc kubenswrapper[4986]: I1203 12:56:30.949902 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:30 crc kubenswrapper[4986]: I1203 12:56:30.949913 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:30Z","lastTransitionTime":"2025-12-03T12:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:30 crc kubenswrapper[4986]: I1203 12:56:30.964015 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee22d4f3-8488-4497-a757-a6dd51e84539\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4fc247caf16eab425363e12219a76c40022f0daf0bc9c8b52f4eb8e3726f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86d62d991d
6516ed42f7cad655018e1ebc85c8c1b100d624c1c2e5e8f616cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ba9819f4f1746f03a558617fc6ee14f5627c37c362d1198a303b6f1854c7bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79cba0b7394bf25be648502fc84a916d8f2fa0949edce20b5a748046ce9b6f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:30Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:30 crc kubenswrapper[4986]: I1203 12:56:30.979535 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:30Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.001430 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0d71a3ffdbe593936c83e92e92e286eaff092333bb3dd54cb5bc2825ca46fbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402aeaef03d8d3c684f0d8e7fe8a080bc02d761104e3c1095b3233aade2b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:30Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.017035 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rkgd9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a939a03f-0eec-49cb-9b23-40e359e427d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b854d526fa4ac2b270c7f27c1b35b1ff0fbc671e40a928649b578f926d6b51b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd633caedf88a1bec633cef61b8b30ae422b8c8d91a012395831a09d398f0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cd633caedf88a1bec633cef61b8b30ae422b8c8d91a012395831a09d398f0f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b9e
8e1f9326660fe6d7c89af18c5f7dd1134e9f2e866da8ba9c795e13cb57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b9e8e1f9326660fe6d7c89af18c5f7dd1134e9f2e866da8ba9c795e13cb57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ef54ee9f22072e2b437f38f6d06f24e6f1da02555505bc65c488b5818c87a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ef54ee9f22072e2b437f38f6d06f24e6f1da02555505bc65c488b5818c87a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:12Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35fecc2332fdab60719b864653301c5e8322b13ace401a0ff016ff3dc09cd3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35fecc2332fdab60719b864653301c5e8322b13ace401a0ff016ff3dc09cd3d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rkgd9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:31Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.028571 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rl2mt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea24f625-ded4-4e37-a23b-f96fe691b0dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhr82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhr82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rl2mt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:31Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:31 crc 
kubenswrapper[4986]: I1203 12:56:31.042829 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231ed674-1d19-4ee2-b29c-a1b7453ed531\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0ca216d3d112
1b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T12:56:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 12:56:01.437086 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 12:56:01.437205 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 12:56:01.438166 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972143202/tls.crt::/tmp/serving-cert-1972143202/tls.key\\\\\\\"\\\\nI1203 12:56:02.167657 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 12:56:02.174337 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 12:56:02.174360 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 12:56:02.174377 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 12:56:02.174382 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 12:56:02.178753 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 12:56:02.179168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 12:56:02.179184 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 12:56:02.179187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 
12:56:02.179190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 12:56:02.178775 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 12:56:02.181220 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:31Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.052259 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.052374 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.052395 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.052423 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.052442 4986 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:31Z","lastTransitionTime":"2025-12-03T12:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.063780 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d51894dd-38f0-413a-adaf-22d196474bed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa98006a8736394300cdb9ef3331d2a23e1b3af2e2203f757c51ca9c700d26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7d24b5f1325cfb2d210cb9f311026ab370c91deecc5d0d951bcd4eda79816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9bb9d940d346f9895e24d98173668ea2d0e68fd3035e5b03475bbc936582958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7fa0d673f037c8edcc75ee30b1bbeea73da55d5459347dacc9251fffbb38be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4007e9ed33227938a5a3656fd74958aef599418fec4cd2048dfa7dd88924dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:31Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.078869 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b923e52e6284c8adca87e7d3d801a432a676e4389d7ab3c0cba584c5b7d96f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:31Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.093099 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d66d06e79ffb2ecd12acc0380e5e4839904147f9194e48277ca667df13a0cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:31Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.096945 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.104869 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://535b7b05dbe9b47808f3d358a1f370b696a5d5bb4dd96b049ac2cbe3d18ea065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d
793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ab87b22153ba31fa79d2d210e695b274c2ddc878ad679c605aa8fb716534d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xggpj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:31Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.106788 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.119048 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:31Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.130323 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:31Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.141137 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8lcrn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75e4c223-9952-4d1b-b83b-5c45cfc51432\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b2af5d43c7842f129b6521e5b3b1963f6060b3e0c552a771c7df6a2a9ef867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnr67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8lcrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:31Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.151451 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fszqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76bcf5c-6a62-4360-826d-ecac337c88ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d94370a14317ade02cb805783e09ffcd8c6ff07f35e9124e302028e6dfa3c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhtrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fszqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:31Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.154821 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.154858 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.154873 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.154890 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.154902 4986 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:31Z","lastTransitionTime":"2025-12-03T12:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.165596 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-px97g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97196b6d-75cc-4de4-8805-f9ce3fbd4230\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c28bf1aff27ba408262b5fc561e084ff3813ce95a7c14b57fea24d409eb33bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v4j5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:
56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-px97g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:31Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.182916 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3a45156-295b-4093-80e7-2059f81ddbd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c79f84fc019d109bac446dd058b4bafb0b516a13fae47b1924e4f7690658ba8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be6405facfa7726cbe56c48184cba0123774e8e34d3ceb9e18daed6f35560c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d77d078572dbe7f11e91ece0d25e76bd17004167dabce267a2b568e798aae158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2843fb3fcbadb926b104a9f7c4084468b36293aebed6c692c29419a0457b62a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d94eca13821749d7e15a350c411a831d2887f4211ed092a4fb57e26e49fcd9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ff8c1f1602e57602cae3f46dd3d2bf8d7abad6bd0952f4347152678e8606d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dcd8afad237a7ca1c7f5e0d37bec9ab99fd61ac7be8b3604a8f5d0bb6a2181e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dcd8afad237a7ca1c7f5e0d37bec9ab99fd61ac7be8b3604a8f5d0bb6a2181e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T12:56:17Z\\\",\\\"message\\\":\\\"a-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: 
[]services.LB{services.LB{Name:\\\\\\\"Service_openshift-service-ca-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.40\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1203 12:56:15.918790 6490 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9nf52_openshift-ovn-kubernetes(d3a45156-295b-4093-80e7-2059f81ddbd7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dfc7520e7cb1e09eb10bf9615dbae11d7e73474f1b73eed481d18269b65aef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09
757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9nf52\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:31Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.192840 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2p8xn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"957171e6-7b64-4d92-b690-342e3251ed8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d417b35a888f61c40a58512bcff329faed85f3b4f8d1d29a7c6f2f43634a9090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbhn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27878a8de45ccf0e02719e2a3f9d1339d8910
db5947d423a3e7400359cc96253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbhn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2p8xn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:31Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.205721 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"231ed674-1d19-4ee2-b29c-a1b7453ed531\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T12:56:02Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1203 12:56:01.437086 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 12:56:01.437205 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 12:56:01.438166 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972143202/tls.crt::/tmp/serving-cert-1972143202/tls.key\\\\\\\"\\\\nI1203 12:56:02.167657 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 12:56:02.174337 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 12:56:02.174360 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 12:56:02.174377 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 12:56:02.174382 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 12:56:02.178753 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 12:56:02.179168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 12:56:02.179184 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 12:56:02.179187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 12:56:02.179190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 12:56:02.178775 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1203 12:56:02.181220 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458b
f21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:31Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.227511 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d51894dd-38f0-413a-adaf-22d196474bed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa98006a8736394300cdb9ef3331d2a23e1b3af2e2203f757c51ca9c700d26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7d24b5f1325cfb2d210cb9f311026ab370c91deecc5d0d951bcd4eda79816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9bb9d940d346f9895e24d98173668ea2d0e68fd3035e5b03475bbc936582958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7fa0d673f037c8edcc75ee30b1bbeea73da55d5459347dacc9251fffbb38be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4007e9ed33227938a5a3656fd74958aef599418fec4cd2048dfa7dd88924dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:31Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.246007 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b923e52e6284c8adca87e7d3d801a432a676e4389d7ab3c0cba584c5b7d96f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:31Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.257262 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.257334 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.257346 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.257361 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.257370 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:31Z","lastTransitionTime":"2025-12-03T12:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.259410 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d66d06e79ffb2ecd12acc0380e5e4839904147f9194e48277ca667df13a0cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:31Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.272108 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://535b7b05dbe9b47808f3d358a1f370b696a5d5bb4dd96b049ac2cbe3d18ea065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12
962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ab87b22153ba31fa79d2d210e695b274c2ddc878ad679c605aa8fb716534d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xggpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:31Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.286041 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1aad898-ad11-4c01-95c2-8a4d293aed99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77c695e6dd4e9c2fe64a41b90d59aebe2a6a54a1985702749b0a68eed6551f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"
name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0d8158a257a7d5b17302a2d6f4c81326b8c24487c18132807f91366cdb3457b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16164ea50f4f29916a5878eee07e8debc4268fae086f9086aa4c73c4d03dd082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://865442eb9652b14d8c0642927e9f59c60344dfcffb5bd2d0cc0ec572ede971c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b
335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865442eb9652b14d8c0642927e9f59c60344dfcffb5bd2d0cc0ec572ede971c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:31Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.305055 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:31Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.318498 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:31Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.330488 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8lcrn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75e4c223-9952-4d1b-b83b-5c45cfc51432\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b2af5d43c7842f129b6521e5b3b1963f6060b3e0c552a771c7df6a2a9ef867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnr67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8lcrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:31Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.351338 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fszqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76bcf5c-6a62-4360-826d-ecac337c88ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d94370a14317ade02cb805783e09ffcd8c6ff07f35e9124e302028e6dfa3c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhtrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fszqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:31Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.360054 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.360105 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.360117 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.360137 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.360149 4986 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:31Z","lastTransitionTime":"2025-12-03T12:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.366456 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-px97g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97196b6d-75cc-4de4-8805-f9ce3fbd4230\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c28bf1aff27ba408262b5fc561e084ff3813ce95a7c14b57fea24d409eb33bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v4j5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:
56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-px97g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:31Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.385653 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3a45156-295b-4093-80e7-2059f81ddbd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c79f84fc019d109bac446dd058b4bafb0b516a13fae47b1924e4f7690658ba8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be6405facfa7726cbe56c48184cba0123774e8e34d3ceb9e18daed6f35560c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d77d078572dbe7f11e91ece0d25e76bd17004167dabce267a2b568e798aae158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2843fb3fcbadb926b104a9f7c4084468b36293aebed6c692c29419a0457b62a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d94eca13821749d7e15a350c411a831d2887f4211ed092a4fb57e26e49fcd9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ff8c1f1602e57602cae3f46dd3d2bf8d7abad6bd0952f4347152678e8606d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dcd8afad237a7ca1c7f5e0d37bec9ab99fd61ac7be8b3604a8f5d0bb6a2181e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dcd8afad237a7ca1c7f5e0d37bec9ab99fd61ac7be8b3604a8f5d0bb6a2181e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T12:56:17Z\\\",\\\"message\\\":\\\"a-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: 
[]services.LB{services.LB{Name:\\\\\\\"Service_openshift-service-ca-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.40\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1203 12:56:15.918790 6490 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9nf52_openshift-ovn-kubernetes(d3a45156-295b-4093-80e7-2059f81ddbd7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dfc7520e7cb1e09eb10bf9615dbae11d7e73474f1b73eed481d18269b65aef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09
757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9nf52\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:31Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.399484 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2p8xn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"957171e6-7b64-4d92-b690-342e3251ed8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d417b35a888f61c40a58512bcff329faed85f3b4f8d1d29a7c6f2f43634a9090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbhn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27878a8de45ccf0e02719e2a3f9d1339d8910
db5947d423a3e7400359cc96253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbhn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2p8xn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:31Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.411240 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee22d4f3-8488-4497-a757-a6dd51e84539\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4fc247caf16eab425363e12219a76c40022f0daf0bc9c8b52f4eb8e3726f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86d62d991d6516ed42f7cad655018e1ebc85c8c1b100d624c1c2e5e8f616cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ba9819f4f1746f03a558617fc6ee14f5627c37c362d1198a303b6f1854c7bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79cba0b7394bf25be648502fc84a916d8f2fa0949edce20b5a748046ce9b6f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:31Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.424693 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:31Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.441477 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0d71a3ffdbe593936c83e92e92e286eaff092333bb3dd54cb5bc2825ca46fbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402aeaef03d8d3c684f0d8e7fe8a080bc02d761104e3c1095b3233aade2b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:31Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.460729 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rkgd9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a939a03f-0eec-49cb-9b23-40e359e427d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b854d526fa4ac2b270c7f27c1b35b1ff0fbc671e40a928649b578f926d6b51b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd633caedf88a1bec633cef61b8b30ae422b8c8d91a012395831a09d398f0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cd633caedf88a1bec633cef61b8b30ae422b8c8d91a012395831a09d398f0f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b9e
8e1f9326660fe6d7c89af18c5f7dd1134e9f2e866da8ba9c795e13cb57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b9e8e1f9326660fe6d7c89af18c5f7dd1134e9f2e866da8ba9c795e13cb57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ef54ee9f22072e2b437f38f6d06f24e6f1da02555505bc65c488b5818c87a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ef54ee9f22072e2b437f38f6d06f24e6f1da02555505bc65c488b5818c87a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:12Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35fecc2332fdab60719b864653301c5e8322b13ace401a0ff016ff3dc09cd3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35fecc2332fdab60719b864653301c5e8322b13ace401a0ff016ff3dc09cd3d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rkgd9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:31Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.462746 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.462782 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.462793 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.462808 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.462818 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:31Z","lastTransitionTime":"2025-12-03T12:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.472312 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rl2mt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea24f625-ded4-4e37-a23b-f96fe691b0dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhr82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhr82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rl2mt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:31Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:31 crc 
kubenswrapper[4986]: I1203 12:56:31.572389 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.572426 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.572480 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.572500 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.572511 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:31Z","lastTransitionTime":"2025-12-03T12:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.675197 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.675249 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.675262 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.675299 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.675311 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:31Z","lastTransitionTime":"2025-12-03T12:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.777136 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.777409 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.777520 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.777663 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.777772 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:31Z","lastTransitionTime":"2025-12-03T12:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.851262 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ea24f625-ded4-4e37-a23b-f96fe691b0dd-metrics-certs\") pod \"network-metrics-daemon-rl2mt\" (UID: \"ea24f625-ded4-4e37-a23b-f96fe691b0dd\") " pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:56:31 crc kubenswrapper[4986]: E1203 12:56:31.851417 4986 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 12:56:31 crc kubenswrapper[4986]: E1203 12:56:31.851465 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea24f625-ded4-4e37-a23b-f96fe691b0dd-metrics-certs podName:ea24f625-ded4-4e37-a23b-f96fe691b0dd nodeName:}" failed. No retries permitted until 2025-12-03 12:56:47.851450352 +0000 UTC m=+67.317881543 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ea24f625-ded4-4e37-a23b-f96fe691b0dd-metrics-certs") pod "network-metrics-daemon-rl2mt" (UID: "ea24f625-ded4-4e37-a23b-f96fe691b0dd") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.879971 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.880009 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.880020 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.880035 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.880047 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:31Z","lastTransitionTime":"2025-12-03T12:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.943201 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.943252 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:56:31 crc kubenswrapper[4986]: E1203 12:56:31.943364 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl2mt" podUID="ea24f625-ded4-4e37-a23b-f96fe691b0dd" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.943422 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.943442 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:56:31 crc kubenswrapper[4986]: E1203 12:56:31.943530 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:56:31 crc kubenswrapper[4986]: E1203 12:56:31.943632 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:56:31 crc kubenswrapper[4986]: E1203 12:56:31.943719 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.944355 4986 scope.go:117] "RemoveContainer" containerID="6dcd8afad237a7ca1c7f5e0d37bec9ab99fd61ac7be8b3604a8f5d0bb6a2181e" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.982401 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.982442 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.982451 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.982467 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:31 crc kubenswrapper[4986]: I1203 12:56:31.982478 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:31Z","lastTransitionTime":"2025-12-03T12:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.085204 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.085246 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.085256 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.085274 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.085308 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:32Z","lastTransitionTime":"2025-12-03T12:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.188164 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.188202 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.188211 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.188226 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.188238 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:32Z","lastTransitionTime":"2025-12-03T12:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.290581 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.290638 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.290651 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.290668 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.290681 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:32Z","lastTransitionTime":"2025-12-03T12:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.300985 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.301223 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.301498 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.301673 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.301921 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:32Z","lastTransitionTime":"2025-12-03T12:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:32 crc kubenswrapper[4986]: E1203 12:56:32.315016 4986 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b28ccd4-3e47-4e26-a3b8-f87de96f7586\\\",\\\"systemUUID\\\":\\\"52e71ce5-258d-4951-aa26-5d4aac0725ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:32Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.318686 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.318852 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.318918 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.318979 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.319047 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:32Z","lastTransitionTime":"2025-12-03T12:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:32 crc kubenswrapper[4986]: E1203 12:56:32.332876 4986 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b28ccd4-3e47-4e26-a3b8-f87de96f7586\\\",\\\"systemUUID\\\":\\\"52e71ce5-258d-4951-aa26-5d4aac0725ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:32Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.336961 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.337232 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.337348 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.337419 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.337513 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:32Z","lastTransitionTime":"2025-12-03T12:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:32 crc kubenswrapper[4986]: E1203 12:56:32.349419 4986 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b28ccd4-3e47-4e26-a3b8-f87de96f7586\\\",\\\"systemUUID\\\":\\\"52e71ce5-258d-4951-aa26-5d4aac0725ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:32Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.353592 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.353645 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.353661 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.353685 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.353702 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:32Z","lastTransitionTime":"2025-12-03T12:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:32 crc kubenswrapper[4986]: E1203 12:56:32.367685 4986 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b28ccd4-3e47-4e26-a3b8-f87de96f7586\\\",\\\"systemUUID\\\":\\\"52e71ce5-258d-4951-aa26-5d4aac0725ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:32Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.371472 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.371505 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.371517 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.371534 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.371546 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:32Z","lastTransitionTime":"2025-12-03T12:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:32 crc kubenswrapper[4986]: E1203 12:56:32.384319 4986 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b28ccd4-3e47-4e26-a3b8-f87de96f7586\\\",\\\"systemUUID\\\":\\\"52e71ce5-258d-4951-aa26-5d4aac0725ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:32Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:32 crc kubenswrapper[4986]: E1203 12:56:32.384475 4986 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.393008 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.393044 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.393052 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.393067 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.393076 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:32Z","lastTransitionTime":"2025-12-03T12:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.496613 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.496661 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.496684 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.496706 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.496724 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:32Z","lastTransitionTime":"2025-12-03T12:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.598925 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.599216 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.599224 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.599237 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.599245 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:32Z","lastTransitionTime":"2025-12-03T12:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.702788 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.702839 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.702853 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.702872 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.702890 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:32Z","lastTransitionTime":"2025-12-03T12:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.805274 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.805338 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.805351 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.805369 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.805380 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:32Z","lastTransitionTime":"2025-12-03T12:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.907833 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.907881 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.907894 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.907911 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:32 crc kubenswrapper[4986]: I1203 12:56:32.907923 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:32Z","lastTransitionTime":"2025-12-03T12:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.011080 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.011131 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.011142 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.011155 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.011164 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:33Z","lastTransitionTime":"2025-12-03T12:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.113924 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.113958 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.113965 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.113978 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.113986 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:33Z","lastTransitionTime":"2025-12-03T12:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.216639 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.216670 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.216678 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.216695 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.216706 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:33Z","lastTransitionTime":"2025-12-03T12:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.256380 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9nf52_d3a45156-295b-4093-80e7-2059f81ddbd7/ovnkube-controller/1.log" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.259839 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" event={"ID":"d3a45156-295b-4093-80e7-2059f81ddbd7","Type":"ContainerStarted","Data":"8b002220fe79bb266ddd4f28ab0b9f29bb0ea1e8d204f558d023242437694625"} Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.260327 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.272997 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b923e52e6284c8adca87e7d3d801a432a676e4389d7ab3c0cba584c5b7d96f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa388
11c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:33Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.283415 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d66d06e79ffb2ecd12acc0380e5e4839904147f9194e48277ca667df13a0cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T12:56:33Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.294816 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://535b7b05dbe9b47808f3d358a1f370b696a5d5bb4dd96b049ac2cbe3d18ea065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ab87b22153ba31fa79d2d210e695b274c2ddc878ad679c605aa8fb716534d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xggpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:33Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.308650 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"231ed674-1d19-4ee2-b29c-a1b7453ed531\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T12:56:02Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1203 12:56:01.437086 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 12:56:01.437205 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 12:56:01.438166 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972143202/tls.crt::/tmp/serving-cert-1972143202/tls.key\\\\\\\"\\\\nI1203 12:56:02.167657 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 12:56:02.174337 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 12:56:02.174360 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 12:56:02.174377 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 12:56:02.174382 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 12:56:02.178753 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 12:56:02.179168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 12:56:02.179184 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 12:56:02.179187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 12:56:02.179190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 12:56:02.178775 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1203 12:56:02.181220 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458b
f21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:33Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.318544 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.318571 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.318579 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.318592 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.318600 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:33Z","lastTransitionTime":"2025-12-03T12:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.339718 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d51894dd-38f0-413a-adaf-22d196474bed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa98006a8736394300cdb9ef3331d2a23e1b3af2e2203f757c51ca9c700d26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7d24b5f1325cfb2d210cb9f311026ab370c91deecc5d0d951bcd4eda79816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9bb9d940d346f9895e24d98173668ea2d0e68fd3035e5b03475bbc936582958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7fa0d673f037c8edcc75ee30b1bbeea73da55d5459347dacc9251fffbb38be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4007e9ed33227938a5a3656fd74958aef599418fec4cd2048dfa7dd88924dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-12-03T12:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:33Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.352572 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:33Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.363820 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8lcrn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75e4c223-9952-4d1b-b83b-5c45cfc51432\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b2af5d43c7842f129b6521e5b3b1963f6060b3e0c552a771c7df6a2a9ef867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnr67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8lcrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:33Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.375753 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1aad898-ad11-4c01-95c2-8a4d293aed99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77c695e6dd4e9c2fe64a41b90d59aebe2a6a54a1985702749b0a68eed6551f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259
7126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0d8158a257a7d5b17302a2d6f4c81326b8c24487c18132807f91366cdb3457b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16164ea50f4f29916a5878eee07e8debc4268fae086f9086aa4c73c4d03dd082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://865442eb9652b14d8c0642927e9f59c60344dfcffb5bd2d0cc0ec572ede971c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865442eb9652b14d8c0642927e9f59c60344dfcffb5bd2d0cc0ec572ede971c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:33Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.386959 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:33Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.399322 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-px97g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97196b6d-75cc-4de4-8805-f9ce3fbd4230\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c28bf1aff27ba408262b5fc561e084ff3813ce95a7c14b57fea24d409eb33bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v4j5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-px97g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:33Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.419032 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3a45156-295b-4093-80e7-2059f81ddbd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c79f84fc019d109bac446dd058b4bafb0b516a13fae47b1924e4f7690658ba8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be6405facfa7726cbe56c48184cba0123774e8e34d3ceb9e18daed6f35560c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d77d078572dbe7f11e91ece0d25e76bd17004167dabce267a2b568e798aae158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2843fb3fcbadb926b104a9f7c4084468b36293aebed6c692c29419a0457b62a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d94eca13821749d7e15a350c411a831d2887f4211ed092a4fb57e26e49fcd9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ff8c1f1602e57602cae3f46dd3d2bf8d7abad6bd0952f4347152678e8606d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b002220fe79bb266ddd4f28ab0b9f29bb0ea1e8d204f558d023242437694625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dcd8afad237a7ca1c7f5e0d37bec9ab99fd61ac7be8b3604a8f5d0bb6a2181e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T12:56:17Z\\\",\\\"message\\\":\\\"a-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: 
[]services.LB{services.LB{Name:\\\\\\\"Service_openshift-service-ca-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.40\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1203 12:56:15.918790 6490 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error 
occurred:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"
name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dfc7520e7cb1e09eb10bf9615dbae11d7e73474f1b73eed481d18269b65aef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9nf52\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:33Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.420258 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.420301 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.420311 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.420323 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.420332 4986 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:33Z","lastTransitionTime":"2025-12-03T12:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.431145 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2p8xn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"957171e6-7b64-4d92-b690-342e3251ed8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d417b35a888f61c40a58512bcff329faed85f3b4f8d1d29a7c6f2f43634a9090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbhn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27878a8de45ccf0e02719e2a3f9d1339d8910db5947d423a3e7400359cc96253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbhn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2p8xn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:33Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.444190 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fszqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76bcf5c-6a62-4360-826d-ecac337c88ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d94370a14317ade02cb805783e09ffcd8c6ff07f35e9124e302028e6dfa3c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\
\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhtrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fszqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:33Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.456078 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0d71a3ffdbe593936c83e92e92e286eaff092333bb3dd54cb5bc2825ca46fbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402aeaef03d8d3c684f0d8e7fe8a080bc02d761104e3c1095b3233aade2b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:33Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.478582 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rkgd9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a939a03f-0eec-49cb-9b23-40e359e427d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b854d526fa4ac2b270c7f27c1b35b1ff0fbc671e40a928649b578f926d6b51b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd633caedf88a1bec633cef61b8b30ae422b8c8d91a012395831a09d398f0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cd633caedf88a1bec633cef61b8b30ae422b8c8d91a012395831a09d398f0f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b9e
8e1f9326660fe6d7c89af18c5f7dd1134e9f2e866da8ba9c795e13cb57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b9e8e1f9326660fe6d7c89af18c5f7dd1134e9f2e866da8ba9c795e13cb57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ef54ee9f22072e2b437f38f6d06f24e6f1da02555505bc65c488b5818c87a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ef54ee9f22072e2b437f38f6d06f24e6f1da02555505bc65c488b5818c87a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:12Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35fecc2332fdab60719b864653301c5e8322b13ace401a0ff016ff3dc09cd3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35fecc2332fdab60719b864653301c5e8322b13ace401a0ff016ff3dc09cd3d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rkgd9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:33Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.494805 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rl2mt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea24f625-ded4-4e37-a23b-f96fe691b0dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhr82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhr82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rl2mt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:33Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:33 crc 
kubenswrapper[4986]: I1203 12:56:33.507649 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee22d4f3-8488-4497-a757-a6dd51e84539\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4fc247caf16eab425363e12219a76c40022f0daf0bc9c8b52f4eb8e3726f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86d62d991d6516ed42f7cad655018e1ebc85c8c1b100d624c1c2e5e8f616cf4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ba9819f4f1746f03a558617fc6ee14f5627c37c362d1198a303b6f1854c7bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79cba0b7394bf25be648502fc84a916d8f2fa0949edce20b5a748046ce9b6f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:33Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.522937 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:33Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.523075 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 
12:56:33.523127 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.523142 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.523164 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.523179 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:33Z","lastTransitionTime":"2025-12-03T12:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.625810 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.625873 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.625893 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.625919 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.625942 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:33Z","lastTransitionTime":"2025-12-03T12:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.728806 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.728907 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.728926 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.728948 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.728965 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:33Z","lastTransitionTime":"2025-12-03T12:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.769743 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.769838 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.769896 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:56:33 crc kubenswrapper[4986]: E1203 12:56:33.770008 4986 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 12:56:33 crc kubenswrapper[4986]: E1203 12:56:33.770025 4986 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 12:56:33 crc kubenswrapper[4986]: E1203 12:56:33.770055 4986 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:57:05.770020137 +0000 UTC m=+85.236451358 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:56:33 crc kubenswrapper[4986]: E1203 12:56:33.770119 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 12:57:05.770104849 +0000 UTC m=+85.236536070 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 12:56:33 crc kubenswrapper[4986]: E1203 12:56:33.770153 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 12:57:05.77014113 +0000 UTC m=+85.236572371 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.832441 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.832512 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.832536 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.832566 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.832587 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:33Z","lastTransitionTime":"2025-12-03T12:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.871019 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.871126 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:56:33 crc kubenswrapper[4986]: E1203 12:56:33.871161 4986 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 12:56:33 crc kubenswrapper[4986]: E1203 12:56:33.871186 4986 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 12:56:33 crc kubenswrapper[4986]: E1203 12:56:33.871199 4986 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 12:56:33 crc kubenswrapper[4986]: E1203 12:56:33.871222 4986 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 12:56:33 crc 
kubenswrapper[4986]: E1203 12:56:33.871237 4986 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 12:56:33 crc kubenswrapper[4986]: E1203 12:56:33.871247 4986 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 12:56:33 crc kubenswrapper[4986]: E1203 12:56:33.871252 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-03 12:57:05.871234469 +0000 UTC m=+85.337665660 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 12:56:33 crc kubenswrapper[4986]: E1203 12:56:33.871311 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-03 12:57:05.87126788 +0000 UTC m=+85.337699071 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.934937 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.934970 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.934982 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.934997 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.935008 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:33Z","lastTransitionTime":"2025-12-03T12:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.942729 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:56:33 crc kubenswrapper[4986]: E1203 12:56:33.942842 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.942993 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:56:33 crc kubenswrapper[4986]: E1203 12:56:33.943044 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.943308 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:56:33 crc kubenswrapper[4986]: E1203 12:56:33.943363 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rl2mt" podUID="ea24f625-ded4-4e37-a23b-f96fe691b0dd" Dec 03 12:56:33 crc kubenswrapper[4986]: I1203 12:56:33.943394 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:56:33 crc kubenswrapper[4986]: E1203 12:56:33.943432 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.037760 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.037809 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.037825 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.037848 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.037864 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:34Z","lastTransitionTime":"2025-12-03T12:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.140367 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.140404 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.140416 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.140436 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.140448 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:34Z","lastTransitionTime":"2025-12-03T12:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.243017 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.243047 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.243056 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.243069 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.243078 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:34Z","lastTransitionTime":"2025-12-03T12:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.264166 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9nf52_d3a45156-295b-4093-80e7-2059f81ddbd7/ovnkube-controller/2.log" Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.264744 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9nf52_d3a45156-295b-4093-80e7-2059f81ddbd7/ovnkube-controller/1.log" Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.267394 4986 generic.go:334] "Generic (PLEG): container finished" podID="d3a45156-295b-4093-80e7-2059f81ddbd7" containerID="8b002220fe79bb266ddd4f28ab0b9f29bb0ea1e8d204f558d023242437694625" exitCode=1 Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.267465 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" event={"ID":"d3a45156-295b-4093-80e7-2059f81ddbd7","Type":"ContainerDied","Data":"8b002220fe79bb266ddd4f28ab0b9f29bb0ea1e8d204f558d023242437694625"} Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.267512 4986 scope.go:117] "RemoveContainer" containerID="6dcd8afad237a7ca1c7f5e0d37bec9ab99fd61ac7be8b3604a8f5d0bb6a2181e" Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.268672 4986 scope.go:117] "RemoveContainer" containerID="8b002220fe79bb266ddd4f28ab0b9f29bb0ea1e8d204f558d023242437694625" Dec 03 12:56:34 crc kubenswrapper[4986]: E1203 12:56:34.268921 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-9nf52_openshift-ovn-kubernetes(d3a45156-295b-4093-80e7-2059f81ddbd7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" podUID="d3a45156-295b-4093-80e7-2059f81ddbd7" Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.286742 4986 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b923e52e6284c8adca87e7d3d801a432a676e4389d7ab3c0cba584c5b7d96f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:34Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.300930 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d66d06e79ffb2ecd12acc0380e5e4839904147f9194e48277ca667df13a0cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.i
o/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:34Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.317756 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://535b7b05dbe9b47808f3d358a1f370b696a5d5bb4dd96b049ac2cbe3d18ea065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ab87b22153ba31fa79d2d210e695b274c2ddc878ad679c605aa8fb716534d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xggpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:34Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.340444 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231ed674-1d19-4ee2-b29c-a1b7453ed531\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f894
5c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T12:56:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 12:56:01.437086 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 12:56:01.437205 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 12:56:01.438166 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972143202/tls.crt::/tmp/serving-cert-1972143202/tls.key\\\\\\\"\\\\nI1203 12:56:02.167657 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 12:56:02.174337 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 12:56:02.174360 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 12:56:02.174377 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 12:56:02.174382 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 12:56:02.178753 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 12:56:02.179168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 12:56:02.179184 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 12:56:02.179187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 12:56:02.179190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 12:56:02.178775 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 12:56:02.181220 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34
720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:34Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.345835 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.345901 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.345914 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.345931 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.345943 4986 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:34Z","lastTransitionTime":"2025-12-03T12:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.360876 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d51894dd-38f0-413a-adaf-22d196474bed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa98006a8736394300cdb9ef3331d2a23e1b3af2e2203f757c51ca9c700d26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7d24b5f1325cfb2d210cb9f311026ab370c91deecc5d0d951bcd4eda79816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9bb9d940d346f9895e24d98173668ea2d0e68fd3035e5b03475bbc936582958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-
dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7fa0d673f037c8edcc75ee30b1bbeea73da55d5459347dacc9251fffbb38be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4007e9ed33227938a5a3656fd74958aef599418fec4cd2048dfa7dd88924dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92
aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6
7314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:34Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.375656 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:34Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.390027 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8lcrn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75e4c223-9952-4d1b-b83b-5c45cfc51432\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b2af5d43c7842f129b6521e5b3b1963f6060b3e0c552a771c7df6a2a9ef867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnr67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8lcrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:34Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.405919 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1aad898-ad11-4c01-95c2-8a4d293aed99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77c695e6dd4e9c2fe64a41b90d59aebe2a6a54a1985702749b0a68eed6551f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259
7126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0d8158a257a7d5b17302a2d6f4c81326b8c24487c18132807f91366cdb3457b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16164ea50f4f29916a5878eee07e8debc4268fae086f9086aa4c73c4d03dd082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://865442eb9652b14d8c0642927e9f59c60344dfcffb5bd2d0cc0ec572ede971c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865442eb9652b14d8c0642927e9f59c60344dfcffb5bd2d0cc0ec572ede971c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:34Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.422374 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:34Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.437960 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-px97g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97196b6d-75cc-4de4-8805-f9ce3fbd4230\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c28bf1aff27ba408262b5fc561e084ff3813ce95a7c14b57fea24d409eb33bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v4j5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-px97g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:34Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.448172 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:34 crc 
kubenswrapper[4986]: I1203 12:56:34.448451 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.448521 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.448588 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.448650 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:34Z","lastTransitionTime":"2025-12-03T12:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.454848 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3a45156-295b-4093-80e7-2059f81ddbd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c79f84fc019d109bac446dd058b4bafb0b516a13fae47b1924e4f7690658ba8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be6405facfa7726cbe56c48184cba0123774e8e34d3ceb9e18daed6f35560c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d77d078572dbe7f11e91ece0d25e76bd17004167dabce267a2b568e798aae158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2843fb3fcbadb926b104a9f7c4084468b36293aebed6c692c29419a0457b62a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d94eca13821749d7e15a350c411a831d2887f4211ed092a4fb57e26e49fcd9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ff8c1f1602e57602cae3f46dd3d2bf8d7abad6bd0952f4347152678e8606d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b002220fe79bb266ddd4f28ab0b9f29bb0ea1e8d204f558d023242437694625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dcd8afad237a7ca1c7f5e0d37bec9ab99fd61ac7be8b3604a8f5d0bb6a2181e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T12:56:17Z\\\",\\\"message\\\":\\\"a-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, 
AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-service-ca-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.40\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1203 12:56:15.918790 6490 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b002220fe79bb266ddd4f28ab0b9f29bb0ea1e8d204f558d023242437694625\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T12:56:33Z\\\",\\\"message\\\":\\\"go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 12:56:33.207496 6692 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 12:56:33.207580 6692 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 12:56:33.207656 6692 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 12:56:33.208012 6692 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 12:56:33.208030 6692 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 12:56:33.208054 6692 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 12:56:33.208069 6692 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 12:56:33.208132 6692 factory.go:656] Stopping watch factory\\\\nI1203 12:56:33.208149 6692 ovnkube.go:599] Stopped ovnkube\\\\nI1203 12:56:33.208174 6692 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 12:56:33.208187 6692 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 12:56:33.208197 6692 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1203 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dfc7520e7cb1e09eb10bf9615dbae11d7e73474f1b73eed481d18269b65aef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\\
\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9nf52\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:34Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.465855 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2p8xn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"957171e6-7b64-4d92-b690-342e3251ed8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d417b35a888f61c40a58512bcff329faed85f3b4f8d1d29a7c6f2f43634a9090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbhn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27878a8de45ccf0e02719e2a3f9d1339d8910
db5947d423a3e7400359cc96253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbhn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2p8xn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:34Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.475363 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fszqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76bcf5c-6a62-4360-826d-ecac337c88ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d94370a14317ade02cb805783e09ffcd8c6ff07f35e9124e302028e6dfa3c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhtrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fszqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:34Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.486780 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0d71a3ffdbe593936c83e92e92e286eaff092333bb3dd54cb5bc2825ca46fbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402aeaef03d8d3c684f0d8e7fe8a080bc02d761104e3c1095b3233aade2b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:34Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.502499 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rkgd9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a939a03f-0eec-49cb-9b23-40e359e427d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b854d526fa4ac2b270c7f27c1b35b1ff0fbc671e40a928649b578f926d6b51b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd633caedf88a1bec633cef61b8b30ae422b8c8d91a012395831a09d398f0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cd633caedf88a1bec633cef61b8b30ae422b8c8d91a012395831a09d398f0f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b9e
8e1f9326660fe6d7c89af18c5f7dd1134e9f2e866da8ba9c795e13cb57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b9e8e1f9326660fe6d7c89af18c5f7dd1134e9f2e866da8ba9c795e13cb57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ef54ee9f22072e2b437f38f6d06f24e6f1da02555505bc65c488b5818c87a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ef54ee9f22072e2b437f38f6d06f24e6f1da02555505bc65c488b5818c87a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:12Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35fecc2332fdab60719b864653301c5e8322b13ace401a0ff016ff3dc09cd3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35fecc2332fdab60719b864653301c5e8322b13ace401a0ff016ff3dc09cd3d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rkgd9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:34Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.513755 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rl2mt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea24f625-ded4-4e37-a23b-f96fe691b0dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhr82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhr82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rl2mt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:34Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:34 crc 
kubenswrapper[4986]: I1203 12:56:34.526031 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee22d4f3-8488-4497-a757-a6dd51e84539\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4fc247caf16eab425363e12219a76c40022f0daf0bc9c8b52f4eb8e3726f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86d62d991d6516ed42f7cad655018e1ebc85c8c1b100d624c1c2e5e8f616cf4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ba9819f4f1746f03a558617fc6ee14f5627c37c362d1198a303b6f1854c7bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79cba0b7394bf25be648502fc84a916d8f2fa0949edce20b5a748046ce9b6f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:34Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.539513 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:34Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.550998 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 
12:56:34.551066 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.551077 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.551090 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.551098 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:34Z","lastTransitionTime":"2025-12-03T12:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.653926 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.653978 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.653989 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.654008 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.654023 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:34Z","lastTransitionTime":"2025-12-03T12:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.757215 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.757375 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.757396 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.757423 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.757442 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:34Z","lastTransitionTime":"2025-12-03T12:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.860638 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.860680 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.860690 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.860703 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.860713 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:34Z","lastTransitionTime":"2025-12-03T12:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.963045 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.963101 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.963118 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.963138 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:34 crc kubenswrapper[4986]: I1203 12:56:34.963153 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:34Z","lastTransitionTime":"2025-12-03T12:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.065893 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.065948 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.065963 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.065982 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.065994 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:35Z","lastTransitionTime":"2025-12-03T12:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.168940 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.168999 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.169013 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.169027 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.169038 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:35Z","lastTransitionTime":"2025-12-03T12:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.270819 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.270863 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.270875 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.270893 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.270905 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:35Z","lastTransitionTime":"2025-12-03T12:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.272493 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9nf52_d3a45156-295b-4093-80e7-2059f81ddbd7/ovnkube-controller/2.log" Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.279704 4986 scope.go:117] "RemoveContainer" containerID="8b002220fe79bb266ddd4f28ab0b9f29bb0ea1e8d204f558d023242437694625" Dec 03 12:56:35 crc kubenswrapper[4986]: E1203 12:56:35.279935 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-9nf52_openshift-ovn-kubernetes(d3a45156-295b-4093-80e7-2059f81ddbd7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" podUID="d3a45156-295b-4093-80e7-2059f81ddbd7" Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.290513 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rl2mt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea24f625-ded4-4e37-a23b-f96fe691b0dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhr82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhr82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rl2mt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:35Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.304353 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee22d4f3-8488-4497-a757-a6dd51e84539\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4fc247caf16eab425363e12219a76c40022f0daf0bc9c8b52f4eb8e3726f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86d62d991d6516ed42f7cad655018e1ebc85c8c1b100d624c1c2e5e8f616cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ba9819f4f1746f03a558617fc6ee14f5627c37c362d1198a303b6f1854c7bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79cba0b7394bf25be648502fc84a916d8f2fa0949edce20b5a748046ce9b6f75\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:35Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.315621 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:35Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.329954 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0d71a3ffdbe593936c83e92e92e286eaff092333bb3dd54cb5bc2825ca46fbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402aeaef03d8d3c684f0d8e7fe8a080bc02d761104e3c1095b3233aade2b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:35Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.346631 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rkgd9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a939a03f-0eec-49cb-9b23-40e359e427d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b854d526fa4ac2b270c7f27c1b35b1ff0fbc671e40a928649b578f926d6b51b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd633caedf88a1bec633cef61b8b30ae422b8c8d91a012395831a09d398f0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cd633caedf88a1bec633cef61b8b30ae422b8c8d91a012395831a09d398f0f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b9e
8e1f9326660fe6d7c89af18c5f7dd1134e9f2e866da8ba9c795e13cb57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b9e8e1f9326660fe6d7c89af18c5f7dd1134e9f2e866da8ba9c795e13cb57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ef54ee9f22072e2b437f38f6d06f24e6f1da02555505bc65c488b5818c87a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ef54ee9f22072e2b437f38f6d06f24e6f1da02555505bc65c488b5818c87a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:12Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35fecc2332fdab60719b864653301c5e8322b13ace401a0ff016ff3dc09cd3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35fecc2332fdab60719b864653301c5e8322b13ace401a0ff016ff3dc09cd3d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rkgd9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:35Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.358685 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://535b7b05dbe9b47808f3d358a1f370b696a5d5bb4dd96b049ac2cbe3d18ea065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ab87b22153ba31fa79d2d210e695b274c2ddc878ad679c605aa8fb716534d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xggpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:35Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:35 crc kubenswrapper[4986]: 
I1203 12:56:35.372828 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.373167 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.373245 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.373351 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.372814 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231ed674-1d19-4ee2-b29c-a1b7453ed531\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T12:56:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 12:56:01.437086 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 12:56:01.437205 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 12:56:01.438166 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972143202/tls.crt::/tmp/serving-cert-1972143202/tls.key\\\\\\\"\\\\nI1203 12:56:02.167657 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 12:56:02.174337 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 12:56:02.174360 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 12:56:02.174377 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 12:56:02.174382 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 12:56:02.178753 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 12:56:02.179168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 12:56:02.179184 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 12:56:02.179187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 12:56:02.179190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 12:56:02.178775 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 12:56:02.181220 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:35Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.373427 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:35Z","lastTransitionTime":"2025-12-03T12:56:35Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.391563 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d51894dd-38f0-413a-adaf-22d196474bed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa98006a8736394300cdb9ef3331d2a23e1b3af2e2203f757c51ca9c700d26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-r
esources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7d24b5f1325cfb2d210cb9f311026ab370c91deecc5d0d951bcd4eda79816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9bb9d940d346f9895e24d98173668ea2d0e68fd3035e5b03475bbc936582958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7fa0d673f037c8edcc75ee30b1bbeea73da55d5459347dacc9251fffbb38be5\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4007e9ed33227938a5a3656fd74958aef599418fec4cd2048dfa7dd88924dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e
6079170f694346e9d6c3df8f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:35Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.403516 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b923e52e6284c8adca87e7d3d801a432a676e4389d7ab3c0cba584c5b7d96f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:35Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.414721 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d66d06e79ffb2ecd12acc0380e5e4839904147f9194e48277ca667df13a0cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:35Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.425318 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1aad898-ad11-4c01-95c2-8a4d293aed99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77c695e6dd4e9c2fe64a41b90d59aebe2a6a54a1985702749b0a68eed6551f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0d8158a257a7d5b17302a2d6f4c81326b8c24487c18132807f91366cdb3457b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16164ea50f4f29916a5878eee07e8debc4268fae086f9086aa4c73c4d03dd082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://865442eb9652b14d8c0642927e9f59c60344dfcffb5bd2d0cc0ec572ede971c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865442eb9652b14d8c0642927e9f59c60344dfcffb5bd2d0cc0ec572ede971c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:35Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.435750 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:35Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.450123 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:35Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.464964 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8lcrn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75e4c223-9952-4d1b-b83b-5c45cfc51432\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b2af5d43c7842f129b6521e5b3b1963f6060b3e0c552a771c7df6a2a9ef867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnr67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8lcrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:35Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.475492 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.475888 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.476007 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.476115 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.476231 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:35Z","lastTransitionTime":"2025-12-03T12:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.477392 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2p8xn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"957171e6-7b64-4d92-b690-342e3251ed8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d417b35a888f61c40a58512bcff329faed85f3b4f8d1d29a7c6f2f43634a9090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbhn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27878a8de45ccf0e02719e2a3f9d1339d8910db5947d423a3e7400359cc96253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbhn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2p8xn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:35Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.489060 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fszqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76bcf5c-6a62-4360-826d-ecac337c88ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d94370a14317ade02cb805783e09ffcd8c6ff07f35e9124e302028e6dfa3c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhtrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fszqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:35Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.500686 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-px97g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97196b6d-75cc-4de4-8805-f9ce3fbd4230\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c28bf1aff27ba408262b5fc561e084ff3813ce95a7c14b57fea24d409eb33bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v4j5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-px97g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:35Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.520363 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3a45156-295b-4093-80e7-2059f81ddbd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c79f84fc019d109bac446dd058b4bafb0b516a13fae47b1924e4f7690658ba8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be6405facfa7726cbe56c48184cba0123774e8e34d3ceb9e18daed6f35560c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d77d078572dbe7f11e91ece0d25e76bd17004167dabce267a2b568e798aae158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2843fb3fcbadb926b104a9f7c4084468b36293aebed6c692c29419a0457b62a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d94eca13821749d7e15a350c411a831d2887f4211ed092a4fb57e26e49fcd9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ff8c1f1602e57602cae3f46dd3d2bf8d7abad6bd0952f4347152678e8606d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b002220fe79bb266ddd4f28ab0b9f29bb0ea1e8d204f558d023242437694625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b002220fe79bb266ddd4f28ab0b9f29bb0ea1e8d204f558d023242437694625\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T12:56:33Z\\\",\\\"message\\\":\\\"go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 12:56:33.207496 6692 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 
12:56:33.207580 6692 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 12:56:33.207656 6692 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 12:56:33.208012 6692 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 12:56:33.208030 6692 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 12:56:33.208054 6692 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 12:56:33.208069 6692 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 12:56:33.208132 6692 factory.go:656] Stopping watch factory\\\\nI1203 12:56:33.208149 6692 ovnkube.go:599] Stopped ovnkube\\\\nI1203 12:56:33.208174 6692 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 12:56:33.208187 6692 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 12:56:33.208197 6692 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1203 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9nf52_openshift-ovn-kubernetes(d3a45156-295b-4093-80e7-2059f81ddbd7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dfc7520e7cb1e09eb10bf9615dbae11d7e73474f1b73eed481d18269b65aef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09
757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9nf52\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:35Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.578210 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.578257 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.578316 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.578335 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.578346 4986 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:35Z","lastTransitionTime":"2025-12-03T12:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.680265 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.680393 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.680414 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.680435 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.680450 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:35Z","lastTransitionTime":"2025-12-03T12:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.782619 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.782677 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.782693 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.782717 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.782782 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:35Z","lastTransitionTime":"2025-12-03T12:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.885201 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.885234 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.885249 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.885264 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.885273 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:35Z","lastTransitionTime":"2025-12-03T12:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.987523 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.987552 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.987559 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.987571 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:35 crc kubenswrapper[4986]: I1203 12:56:35.987580 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:35Z","lastTransitionTime":"2025-12-03T12:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:36 crc kubenswrapper[4986]: I1203 12:56:36.090335 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:36 crc kubenswrapper[4986]: I1203 12:56:36.090372 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:36 crc kubenswrapper[4986]: I1203 12:56:36.090383 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:36 crc kubenswrapper[4986]: I1203 12:56:36.090399 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:36 crc kubenswrapper[4986]: I1203 12:56:36.090410 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:36Z","lastTransitionTime":"2025-12-03T12:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:36 crc kubenswrapper[4986]: I1203 12:56:36.193511 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:36 crc kubenswrapper[4986]: I1203 12:56:36.193578 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:36 crc kubenswrapper[4986]: I1203 12:56:36.193598 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:36 crc kubenswrapper[4986]: I1203 12:56:36.193626 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:36 crc kubenswrapper[4986]: I1203 12:56:36.193646 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:36Z","lastTransitionTime":"2025-12-03T12:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:36 crc kubenswrapper[4986]: I1203 12:56:36.296130 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:36 crc kubenswrapper[4986]: I1203 12:56:36.296183 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:36 crc kubenswrapper[4986]: I1203 12:56:36.296202 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:36 crc kubenswrapper[4986]: I1203 12:56:36.296224 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:36 crc kubenswrapper[4986]: I1203 12:56:36.296240 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:36Z","lastTransitionTime":"2025-12-03T12:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:36 crc kubenswrapper[4986]: I1203 12:56:36.398119 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:36 crc kubenswrapper[4986]: I1203 12:56:36.398166 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:36 crc kubenswrapper[4986]: I1203 12:56:36.398181 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:36 crc kubenswrapper[4986]: I1203 12:56:36.398198 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:36 crc kubenswrapper[4986]: I1203 12:56:36.398211 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:36Z","lastTransitionTime":"2025-12-03T12:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:36 crc kubenswrapper[4986]: I1203 12:56:36.500662 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:36 crc kubenswrapper[4986]: I1203 12:56:36.500691 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:36 crc kubenswrapper[4986]: I1203 12:56:36.500698 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:36 crc kubenswrapper[4986]: I1203 12:56:36.500710 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:36 crc kubenswrapper[4986]: I1203 12:56:36.500718 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:36Z","lastTransitionTime":"2025-12-03T12:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:36 crc kubenswrapper[4986]: I1203 12:56:36.545339 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:56:36 crc kubenswrapper[4986]: E1203 12:56:36.545520 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:56:36 crc kubenswrapper[4986]: I1203 12:56:36.545699 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:56:36 crc kubenswrapper[4986]: I1203 12:56:36.545828 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:56:36 crc kubenswrapper[4986]: E1203 12:56:36.545834 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl2mt" podUID="ea24f625-ded4-4e37-a23b-f96fe691b0dd" Dec 03 12:56:36 crc kubenswrapper[4986]: I1203 12:56:36.545862 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:56:36 crc kubenswrapper[4986]: E1203 12:56:36.546086 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:56:36 crc kubenswrapper[4986]: E1203 12:56:36.546192 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:56:36 crc kubenswrapper[4986]: I1203 12:56:36.603175 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:36 crc kubenswrapper[4986]: I1203 12:56:36.603211 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:36 crc kubenswrapper[4986]: I1203 12:56:36.603230 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:36 crc kubenswrapper[4986]: I1203 12:56:36.603251 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:36 crc kubenswrapper[4986]: I1203 12:56:36.603260 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:36Z","lastTransitionTime":"2025-12-03T12:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:36 crc kubenswrapper[4986]: I1203 12:56:36.705782 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:36 crc kubenswrapper[4986]: I1203 12:56:36.705818 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:36 crc kubenswrapper[4986]: I1203 12:56:36.705841 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:36 crc kubenswrapper[4986]: I1203 12:56:36.705859 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:36 crc kubenswrapper[4986]: I1203 12:56:36.705891 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:36Z","lastTransitionTime":"2025-12-03T12:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:36 crc kubenswrapper[4986]: I1203 12:56:36.807989 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:36 crc kubenswrapper[4986]: I1203 12:56:36.808046 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:36 crc kubenswrapper[4986]: I1203 12:56:36.808063 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:36 crc kubenswrapper[4986]: I1203 12:56:36.808088 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:36 crc kubenswrapper[4986]: I1203 12:56:36.808105 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:36Z","lastTransitionTime":"2025-12-03T12:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:36 crc kubenswrapper[4986]: I1203 12:56:36.911018 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:36 crc kubenswrapper[4986]: I1203 12:56:36.911299 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:36 crc kubenswrapper[4986]: I1203 12:56:36.911378 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:36 crc kubenswrapper[4986]: I1203 12:56:36.911453 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:36 crc kubenswrapper[4986]: I1203 12:56:36.911528 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:36Z","lastTransitionTime":"2025-12-03T12:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:37 crc kubenswrapper[4986]: I1203 12:56:37.013452 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:37 crc kubenswrapper[4986]: I1203 12:56:37.013724 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:37 crc kubenswrapper[4986]: I1203 12:56:37.013800 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:37 crc kubenswrapper[4986]: I1203 12:56:37.013884 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:37 crc kubenswrapper[4986]: I1203 12:56:37.013963 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:37Z","lastTransitionTime":"2025-12-03T12:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:37 crc kubenswrapper[4986]: I1203 12:56:37.116814 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:37 crc kubenswrapper[4986]: I1203 12:56:37.116851 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:37 crc kubenswrapper[4986]: I1203 12:56:37.116861 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:37 crc kubenswrapper[4986]: I1203 12:56:37.116877 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:37 crc kubenswrapper[4986]: I1203 12:56:37.116890 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:37Z","lastTransitionTime":"2025-12-03T12:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:37 crc kubenswrapper[4986]: I1203 12:56:37.220128 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:37 crc kubenswrapper[4986]: I1203 12:56:37.220179 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:37 crc kubenswrapper[4986]: I1203 12:56:37.220191 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:37 crc kubenswrapper[4986]: I1203 12:56:37.220209 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:37 crc kubenswrapper[4986]: I1203 12:56:37.220221 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:37Z","lastTransitionTime":"2025-12-03T12:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:37 crc kubenswrapper[4986]: I1203 12:56:37.323172 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:37 crc kubenswrapper[4986]: I1203 12:56:37.323234 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:37 crc kubenswrapper[4986]: I1203 12:56:37.323245 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:37 crc kubenswrapper[4986]: I1203 12:56:37.323267 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:37 crc kubenswrapper[4986]: I1203 12:56:37.323278 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:37Z","lastTransitionTime":"2025-12-03T12:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:37 crc kubenswrapper[4986]: I1203 12:56:37.426508 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:37 crc kubenswrapper[4986]: I1203 12:56:37.426551 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:37 crc kubenswrapper[4986]: I1203 12:56:37.426562 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:37 crc kubenswrapper[4986]: I1203 12:56:37.426617 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:37 crc kubenswrapper[4986]: I1203 12:56:37.426639 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:37Z","lastTransitionTime":"2025-12-03T12:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:37 crc kubenswrapper[4986]: I1203 12:56:37.529239 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:37 crc kubenswrapper[4986]: I1203 12:56:37.529301 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:37 crc kubenswrapper[4986]: I1203 12:56:37.529313 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:37 crc kubenswrapper[4986]: I1203 12:56:37.529328 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:37 crc kubenswrapper[4986]: I1203 12:56:37.529340 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:37Z","lastTransitionTime":"2025-12-03T12:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:37 crc kubenswrapper[4986]: I1203 12:56:37.632773 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:37 crc kubenswrapper[4986]: I1203 12:56:37.632833 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:37 crc kubenswrapper[4986]: I1203 12:56:37.632847 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:37 crc kubenswrapper[4986]: I1203 12:56:37.632867 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:37 crc kubenswrapper[4986]: I1203 12:56:37.632877 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:37Z","lastTransitionTime":"2025-12-03T12:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:37 crc kubenswrapper[4986]: I1203 12:56:37.734763 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:37 crc kubenswrapper[4986]: I1203 12:56:37.734800 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:37 crc kubenswrapper[4986]: I1203 12:56:37.734809 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:37 crc kubenswrapper[4986]: I1203 12:56:37.734822 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:37 crc kubenswrapper[4986]: I1203 12:56:37.734832 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:37Z","lastTransitionTime":"2025-12-03T12:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:37 crc kubenswrapper[4986]: I1203 12:56:37.837121 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:37 crc kubenswrapper[4986]: I1203 12:56:37.837162 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:37 crc kubenswrapper[4986]: I1203 12:56:37.837171 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:37 crc kubenswrapper[4986]: I1203 12:56:37.837188 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:37 crc kubenswrapper[4986]: I1203 12:56:37.837199 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:37Z","lastTransitionTime":"2025-12-03T12:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:37 crc kubenswrapper[4986]: I1203 12:56:37.940732 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:37 crc kubenswrapper[4986]: I1203 12:56:37.940860 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:37 crc kubenswrapper[4986]: I1203 12:56:37.940884 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:37 crc kubenswrapper[4986]: I1203 12:56:37.940921 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:37 crc kubenswrapper[4986]: I1203 12:56:37.940944 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:37Z","lastTransitionTime":"2025-12-03T12:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:37 crc kubenswrapper[4986]: I1203 12:56:37.942930 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:56:37 crc kubenswrapper[4986]: I1203 12:56:37.942984 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:56:37 crc kubenswrapper[4986]: I1203 12:56:37.942981 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:56:37 crc kubenswrapper[4986]: I1203 12:56:37.943053 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:56:37 crc kubenswrapper[4986]: E1203 12:56:37.943215 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl2mt" podUID="ea24f625-ded4-4e37-a23b-f96fe691b0dd" Dec 03 12:56:37 crc kubenswrapper[4986]: E1203 12:56:37.943370 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:56:37 crc kubenswrapper[4986]: E1203 12:56:37.943482 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:56:37 crc kubenswrapper[4986]: E1203 12:56:37.943590 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:56:38 crc kubenswrapper[4986]: I1203 12:56:38.043659 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:38 crc kubenswrapper[4986]: I1203 12:56:38.043744 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:38 crc kubenswrapper[4986]: I1203 12:56:38.043775 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:38 crc kubenswrapper[4986]: I1203 12:56:38.043803 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:38 crc kubenswrapper[4986]: I1203 12:56:38.043824 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:38Z","lastTransitionTime":"2025-12-03T12:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:38 crc kubenswrapper[4986]: I1203 12:56:38.146514 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:38 crc kubenswrapper[4986]: I1203 12:56:38.146574 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:38 crc kubenswrapper[4986]: I1203 12:56:38.146596 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:38 crc kubenswrapper[4986]: I1203 12:56:38.146625 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:38 crc kubenswrapper[4986]: I1203 12:56:38.146646 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:38Z","lastTransitionTime":"2025-12-03T12:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:38 crc kubenswrapper[4986]: I1203 12:56:38.249752 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:38 crc kubenswrapper[4986]: I1203 12:56:38.249793 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:38 crc kubenswrapper[4986]: I1203 12:56:38.249809 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:38 crc kubenswrapper[4986]: I1203 12:56:38.249832 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:38 crc kubenswrapper[4986]: I1203 12:56:38.249883 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:38Z","lastTransitionTime":"2025-12-03T12:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:38 crc kubenswrapper[4986]: I1203 12:56:38.352956 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:38 crc kubenswrapper[4986]: I1203 12:56:38.352996 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:38 crc kubenswrapper[4986]: I1203 12:56:38.353029 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:38 crc kubenswrapper[4986]: I1203 12:56:38.353044 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:38 crc kubenswrapper[4986]: I1203 12:56:38.353053 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:38Z","lastTransitionTime":"2025-12-03T12:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:38 crc kubenswrapper[4986]: I1203 12:56:38.455746 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:38 crc kubenswrapper[4986]: I1203 12:56:38.455783 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:38 crc kubenswrapper[4986]: I1203 12:56:38.455790 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:38 crc kubenswrapper[4986]: I1203 12:56:38.456005 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:38 crc kubenswrapper[4986]: I1203 12:56:38.456018 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:38Z","lastTransitionTime":"2025-12-03T12:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:38 crc kubenswrapper[4986]: I1203 12:56:38.558629 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:38 crc kubenswrapper[4986]: I1203 12:56:38.558667 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:38 crc kubenswrapper[4986]: I1203 12:56:38.558697 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:38 crc kubenswrapper[4986]: I1203 12:56:38.558712 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:38 crc kubenswrapper[4986]: I1203 12:56:38.558722 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:38Z","lastTransitionTime":"2025-12-03T12:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:38 crc kubenswrapper[4986]: I1203 12:56:38.661049 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:38 crc kubenswrapper[4986]: I1203 12:56:38.661082 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:38 crc kubenswrapper[4986]: I1203 12:56:38.661090 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:38 crc kubenswrapper[4986]: I1203 12:56:38.661102 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:38 crc kubenswrapper[4986]: I1203 12:56:38.661111 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:38Z","lastTransitionTime":"2025-12-03T12:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:38 crc kubenswrapper[4986]: I1203 12:56:38.763604 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:38 crc kubenswrapper[4986]: I1203 12:56:38.763644 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:38 crc kubenswrapper[4986]: I1203 12:56:38.763653 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:38 crc kubenswrapper[4986]: I1203 12:56:38.763668 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:38 crc kubenswrapper[4986]: I1203 12:56:38.763678 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:38Z","lastTransitionTime":"2025-12-03T12:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:38 crc kubenswrapper[4986]: I1203 12:56:38.866886 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:38 crc kubenswrapper[4986]: I1203 12:56:38.866940 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:38 crc kubenswrapper[4986]: I1203 12:56:38.866951 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:38 crc kubenswrapper[4986]: I1203 12:56:38.866968 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:38 crc kubenswrapper[4986]: I1203 12:56:38.866978 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:38Z","lastTransitionTime":"2025-12-03T12:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:38 crc kubenswrapper[4986]: I1203 12:56:38.969319 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:38 crc kubenswrapper[4986]: I1203 12:56:38.969417 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:38 crc kubenswrapper[4986]: I1203 12:56:38.969438 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:38 crc kubenswrapper[4986]: I1203 12:56:38.969461 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:38 crc kubenswrapper[4986]: I1203 12:56:38.969480 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:38Z","lastTransitionTime":"2025-12-03T12:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:39 crc kubenswrapper[4986]: I1203 12:56:39.071187 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:39 crc kubenswrapper[4986]: I1203 12:56:39.071228 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:39 crc kubenswrapper[4986]: I1203 12:56:39.071241 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:39 crc kubenswrapper[4986]: I1203 12:56:39.071255 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:39 crc kubenswrapper[4986]: I1203 12:56:39.071267 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:39Z","lastTransitionTime":"2025-12-03T12:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:39 crc kubenswrapper[4986]: I1203 12:56:39.174416 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:39 crc kubenswrapper[4986]: I1203 12:56:39.174472 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:39 crc kubenswrapper[4986]: I1203 12:56:39.174481 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:39 crc kubenswrapper[4986]: I1203 12:56:39.174496 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:39 crc kubenswrapper[4986]: I1203 12:56:39.174505 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:39Z","lastTransitionTime":"2025-12-03T12:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:39 crc kubenswrapper[4986]: I1203 12:56:39.279936 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:39 crc kubenswrapper[4986]: I1203 12:56:39.280074 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:39 crc kubenswrapper[4986]: I1203 12:56:39.280091 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:39 crc kubenswrapper[4986]: I1203 12:56:39.280118 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:39 crc kubenswrapper[4986]: I1203 12:56:39.280130 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:39Z","lastTransitionTime":"2025-12-03T12:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:39 crc kubenswrapper[4986]: I1203 12:56:39.383885 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:39 crc kubenswrapper[4986]: I1203 12:56:39.383924 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:39 crc kubenswrapper[4986]: I1203 12:56:39.383933 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:39 crc kubenswrapper[4986]: I1203 12:56:39.383947 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:39 crc kubenswrapper[4986]: I1203 12:56:39.383958 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:39Z","lastTransitionTime":"2025-12-03T12:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:39 crc kubenswrapper[4986]: I1203 12:56:39.486392 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:39 crc kubenswrapper[4986]: I1203 12:56:39.486681 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:39 crc kubenswrapper[4986]: I1203 12:56:39.486773 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:39 crc kubenswrapper[4986]: I1203 12:56:39.486875 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:39 crc kubenswrapper[4986]: I1203 12:56:39.487006 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:39Z","lastTransitionTime":"2025-12-03T12:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:39 crc kubenswrapper[4986]: I1203 12:56:39.589721 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:39 crc kubenswrapper[4986]: I1203 12:56:39.590013 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:39 crc kubenswrapper[4986]: I1203 12:56:39.590131 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:39 crc kubenswrapper[4986]: I1203 12:56:39.590241 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:39 crc kubenswrapper[4986]: I1203 12:56:39.590369 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:39Z","lastTransitionTime":"2025-12-03T12:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:39 crc kubenswrapper[4986]: I1203 12:56:39.692172 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:39 crc kubenswrapper[4986]: I1203 12:56:39.692208 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:39 crc kubenswrapper[4986]: I1203 12:56:39.692217 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:39 crc kubenswrapper[4986]: I1203 12:56:39.692235 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:39 crc kubenswrapper[4986]: I1203 12:56:39.692245 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:39Z","lastTransitionTime":"2025-12-03T12:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:39 crc kubenswrapper[4986]: I1203 12:56:39.794787 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:39 crc kubenswrapper[4986]: I1203 12:56:39.794850 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:39 crc kubenswrapper[4986]: I1203 12:56:39.794861 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:39 crc kubenswrapper[4986]: I1203 12:56:39.794878 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:39 crc kubenswrapper[4986]: I1203 12:56:39.794888 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:39Z","lastTransitionTime":"2025-12-03T12:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:39 crc kubenswrapper[4986]: I1203 12:56:39.897851 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:39 crc kubenswrapper[4986]: I1203 12:56:39.898206 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:39 crc kubenswrapper[4986]: I1203 12:56:39.898339 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:39 crc kubenswrapper[4986]: I1203 12:56:39.898449 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:39 crc kubenswrapper[4986]: I1203 12:56:39.898545 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:39Z","lastTransitionTime":"2025-12-03T12:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:39 crc kubenswrapper[4986]: I1203 12:56:39.942748 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:56:39 crc kubenswrapper[4986]: E1203 12:56:39.943124 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:56:39 crc kubenswrapper[4986]: I1203 12:56:39.942811 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:56:39 crc kubenswrapper[4986]: I1203 12:56:39.942816 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:56:39 crc kubenswrapper[4986]: E1203 12:56:39.943632 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl2mt" podUID="ea24f625-ded4-4e37-a23b-f96fe691b0dd" Dec 03 12:56:39 crc kubenswrapper[4986]: E1203 12:56:39.943521 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:56:39 crc kubenswrapper[4986]: I1203 12:56:39.942787 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:56:39 crc kubenswrapper[4986]: E1203 12:56:39.943702 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:56:40 crc kubenswrapper[4986]: I1203 12:56:40.001383 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:40 crc kubenswrapper[4986]: I1203 12:56:40.001858 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:40 crc kubenswrapper[4986]: I1203 12:56:40.002098 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:40 crc kubenswrapper[4986]: I1203 12:56:40.002257 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:40 crc kubenswrapper[4986]: I1203 12:56:40.002486 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:40Z","lastTransitionTime":"2025-12-03T12:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:40 crc kubenswrapper[4986]: I1203 12:56:40.104747 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:40 crc kubenswrapper[4986]: I1203 12:56:40.104779 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:40 crc kubenswrapper[4986]: I1203 12:56:40.104787 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:40 crc kubenswrapper[4986]: I1203 12:56:40.104799 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:40 crc kubenswrapper[4986]: I1203 12:56:40.104808 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:40Z","lastTransitionTime":"2025-12-03T12:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:40 crc kubenswrapper[4986]: I1203 12:56:40.207098 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:40 crc kubenswrapper[4986]: I1203 12:56:40.207367 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:40 crc kubenswrapper[4986]: I1203 12:56:40.207450 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:40 crc kubenswrapper[4986]: I1203 12:56:40.207512 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:40 crc kubenswrapper[4986]: I1203 12:56:40.207581 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:40Z","lastTransitionTime":"2025-12-03T12:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:40 crc kubenswrapper[4986]: I1203 12:56:40.310117 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:40 crc kubenswrapper[4986]: I1203 12:56:40.310182 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:40 crc kubenswrapper[4986]: I1203 12:56:40.310201 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:40 crc kubenswrapper[4986]: I1203 12:56:40.310237 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:40 crc kubenswrapper[4986]: I1203 12:56:40.310272 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:40Z","lastTransitionTime":"2025-12-03T12:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:40 crc kubenswrapper[4986]: I1203 12:56:40.412736 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:40 crc kubenswrapper[4986]: I1203 12:56:40.412783 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:40 crc kubenswrapper[4986]: I1203 12:56:40.412795 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:40 crc kubenswrapper[4986]: I1203 12:56:40.412811 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:40 crc kubenswrapper[4986]: I1203 12:56:40.412822 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:40Z","lastTransitionTime":"2025-12-03T12:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:40 crc kubenswrapper[4986]: I1203 12:56:40.516417 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:40 crc kubenswrapper[4986]: I1203 12:56:40.516469 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:40 crc kubenswrapper[4986]: I1203 12:56:40.516481 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:40 crc kubenswrapper[4986]: I1203 12:56:40.516498 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:40 crc kubenswrapper[4986]: I1203 12:56:40.516509 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:40Z","lastTransitionTime":"2025-12-03T12:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:40 crc kubenswrapper[4986]: I1203 12:56:40.618915 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:40 crc kubenswrapper[4986]: I1203 12:56:40.618967 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:40 crc kubenswrapper[4986]: I1203 12:56:40.618981 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:40 crc kubenswrapper[4986]: I1203 12:56:40.618998 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:40 crc kubenswrapper[4986]: I1203 12:56:40.619010 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:40Z","lastTransitionTime":"2025-12-03T12:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:40 crc kubenswrapper[4986]: I1203 12:56:40.721757 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:40 crc kubenswrapper[4986]: I1203 12:56:40.721800 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:40 crc kubenswrapper[4986]: I1203 12:56:40.721811 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:40 crc kubenswrapper[4986]: I1203 12:56:40.721827 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:40 crc kubenswrapper[4986]: I1203 12:56:40.721841 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:40Z","lastTransitionTime":"2025-12-03T12:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:40 crc kubenswrapper[4986]: I1203 12:56:40.824546 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:40 crc kubenswrapper[4986]: I1203 12:56:40.824607 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:40 crc kubenswrapper[4986]: I1203 12:56:40.824624 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:40 crc kubenswrapper[4986]: I1203 12:56:40.824648 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:40 crc kubenswrapper[4986]: I1203 12:56:40.824667 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:40Z","lastTransitionTime":"2025-12-03T12:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:40 crc kubenswrapper[4986]: I1203 12:56:40.927261 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:40 crc kubenswrapper[4986]: I1203 12:56:40.927345 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:40 crc kubenswrapper[4986]: I1203 12:56:40.927363 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:40 crc kubenswrapper[4986]: I1203 12:56:40.927386 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:40 crc kubenswrapper[4986]: I1203 12:56:40.927403 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:40Z","lastTransitionTime":"2025-12-03T12:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:40 crc kubenswrapper[4986]: I1203 12:56:40.961330 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fszqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76bcf5c-6a62-4360-826d-ecac337c88ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d94370a14317ade02cb805783e09ffcd8c6ff07f35e9124e302028e6dfa3c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhtrn\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fszqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:40Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:40 crc kubenswrapper[4986]: I1203 12:56:40.979083 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-px97g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97196b6d-75cc-4de4-8805-f9ce3fbd4230\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c28bf1aff27ba408262b5fc561e084ff3813ce95a7c14b57fea24d409eb33bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v4j5x\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-px97g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:40Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:40 crc kubenswrapper[4986]: I1203 12:56:40.998597 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3a45156-295b-4093-80e7-2059f81ddbd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c79f84fc019d109bac446dd058b4bafb0b516a13fae47b1924e4f7690658ba8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be6405facfa7726cbe56c48184cba0123774e8e34d3ceb9e18daed6f35560c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d77d078572dbe7f11e91ece0d25e76bd17004167dabce267a2b568e798aae158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2843fb3fcbadb926b104a9f7c4084468b36293aebed6c692c29419a0457b62a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d94eca13821749d7e15a350c411a831d2887f4211ed092a4fb57e26e49fcd9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ff8c1f1602e57602cae3f46dd3d2bf8d7abad6bd0952f4347152678e8606d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b002220fe79bb266ddd4f28ab0b9f29bb0ea1e8d204f558d023242437694625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b002220fe79bb266ddd4f28ab0b9f29bb0ea1e8d204f558d023242437694625\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T12:56:33Z\\\",\\\"message\\\":\\\"go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 12:56:33.207496 6692 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 
12:56:33.207580 6692 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 12:56:33.207656 6692 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 12:56:33.208012 6692 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 12:56:33.208030 6692 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 12:56:33.208054 6692 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 12:56:33.208069 6692 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 12:56:33.208132 6692 factory.go:656] Stopping watch factory\\\\nI1203 12:56:33.208149 6692 ovnkube.go:599] Stopped ovnkube\\\\nI1203 12:56:33.208174 6692 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 12:56:33.208187 6692 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 12:56:33.208197 6692 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1203 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9nf52_openshift-ovn-kubernetes(d3a45156-295b-4093-80e7-2059f81ddbd7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dfc7520e7cb1e09eb10bf9615dbae11d7e73474f1b73eed481d18269b65aef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09
757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9nf52\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:40Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.014956 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2p8xn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"957171e6-7b64-4d92-b690-342e3251ed8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d417b35a888f61c40a58512bcff329faed85f3b4f8d1d29a7c6f2f43634a9090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbhn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27878a8de45ccf0e02719e2a3f9d1339d8910
db5947d423a3e7400359cc96253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbhn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2p8xn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:41Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.027830 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee22d4f3-8488-4497-a757-a6dd51e84539\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4fc247caf16eab425363e12219a76c40022f0daf0bc9c8b52f4eb8e3726f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86d62d991d6516ed42f7cad655018e1ebc85c8c1b100d624c1c2e5e8f616cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ba9819f4f1746f03a558617fc6ee14f5627c37c362d1198a303b6f1854c7bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79cba0b7394bf25be648502fc84a916d8f2fa0949edce20b5a748046ce9b6f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:41Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.029394 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.029474 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.029486 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.029503 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.029520 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:41Z","lastTransitionTime":"2025-12-03T12:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.040811 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:41Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.055025 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0d71a3ffdbe593936c83e92e92e286eaff092333bb3dd54cb5bc2825ca46fbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402aeaef03d8d3c684f0d8e7fe8a080bc02d761104e3c1095b3233aade2b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:41Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.071790 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rkgd9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a939a03f-0eec-49cb-9b23-40e359e427d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b854d526fa4ac2b270c7f27c1b35b1ff0fbc671e40a928649b578f926d6b51b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd633caedf88a1bec633cef61b8b30ae422b8c8d91a012395831a09d398f0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cd633caedf88a1bec633cef61b8b30ae422b8c8d91a012395831a09d398f0f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b9e
8e1f9326660fe6d7c89af18c5f7dd1134e9f2e866da8ba9c795e13cb57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b9e8e1f9326660fe6d7c89af18c5f7dd1134e9f2e866da8ba9c795e13cb57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ef54ee9f22072e2b437f38f6d06f24e6f1da02555505bc65c488b5818c87a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ef54ee9f22072e2b437f38f6d06f24e6f1da02555505bc65c488b5818c87a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:12Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35fecc2332fdab60719b864653301c5e8322b13ace401a0ff016ff3dc09cd3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35fecc2332fdab60719b864653301c5e8322b13ace401a0ff016ff3dc09cd3d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rkgd9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:41Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.083264 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rl2mt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea24f625-ded4-4e37-a23b-f96fe691b0dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhr82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhr82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rl2mt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:41Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:41 crc 
kubenswrapper[4986]: I1203 12:56:41.098217 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231ed674-1d19-4ee2-b29c-a1b7453ed531\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0ca216d3d112
1b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T12:56:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 12:56:01.437086 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 12:56:01.437205 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 12:56:01.438166 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972143202/tls.crt::/tmp/serving-cert-1972143202/tls.key\\\\\\\"\\\\nI1203 12:56:02.167657 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 12:56:02.174337 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 12:56:02.174360 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 12:56:02.174377 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 12:56:02.174382 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 12:56:02.178753 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 12:56:02.179168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 12:56:02.179184 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 12:56:02.179187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 
12:56:02.179190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 12:56:02.178775 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 12:56:02.181220 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:41Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.117736 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d51894dd-38f0-413a-adaf-22d196474bed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa98006a8736394300cdb9ef3331d2a23e1b3af2e2203f757c51ca9c700d26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7d24b5f1325cfb2d210cb9f311026ab370c91deecc5d0d951bcd4eda79816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9bb9d940d346f9895e24d98173668ea2d0e68fd3035e5b03475bbc936582958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7fa0d673f037c8edcc75ee30b1bbeea73da55d5459347dacc9251fffbb38be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4007e9ed33227938a5a3656fd74958aef599418fec4cd2048dfa7dd88924dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:41Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.131466 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b923e52e6284c8adca87e7d3d801a432a676e4389d7ab3c0cba584c5b7d96f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:41Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.131822 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.131856 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.131865 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.131882 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.131892 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:41Z","lastTransitionTime":"2025-12-03T12:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.150854 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d66d06e79ffb2ecd12acc0380e5e4839904147f9194e48277ca667df13a0cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:41Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.166895 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://535b7b05dbe9b47808f3d358a1f370b696a5d5bb4dd96b049ac2cbe3d18ea065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12
962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ab87b22153ba31fa79d2d210e695b274c2ddc878ad679c605aa8fb716534d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xggpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:41Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.182926 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1aad898-ad11-4c01-95c2-8a4d293aed99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77c695e6dd4e9c2fe64a41b90d59aebe2a6a54a1985702749b0a68eed6551f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"
name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0d8158a257a7d5b17302a2d6f4c81326b8c24487c18132807f91366cdb3457b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16164ea50f4f29916a5878eee07e8debc4268fae086f9086aa4c73c4d03dd082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://865442eb9652b14d8c0642927e9f59c60344dfcffb5bd2d0cc0ec572ede971c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b
335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865442eb9652b14d8c0642927e9f59c60344dfcffb5bd2d0cc0ec572ede971c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:41Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.199261 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:41Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.218403 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:41Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.233005 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8lcrn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75e4c223-9952-4d1b-b83b-5c45cfc51432\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b2af5d43c7842f129b6521e5b3b1963f6060b3e0c552a771c7df6a2a9ef867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnr67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8lcrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:41Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.234668 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.234711 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.234720 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.234736 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.234746 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:41Z","lastTransitionTime":"2025-12-03T12:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.337962 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.338021 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.338039 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.338065 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.338085 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:41Z","lastTransitionTime":"2025-12-03T12:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.440817 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.440895 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.440907 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.440925 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.440937 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:41Z","lastTransitionTime":"2025-12-03T12:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.543871 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.544186 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.544203 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.544223 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.544237 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:41Z","lastTransitionTime":"2025-12-03T12:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.646671 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.646724 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.646760 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.646777 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.646788 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:41Z","lastTransitionTime":"2025-12-03T12:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.748832 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.748877 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.748888 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.748902 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.748914 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:41Z","lastTransitionTime":"2025-12-03T12:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.851917 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.851980 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.851994 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.852016 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.852032 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:41Z","lastTransitionTime":"2025-12-03T12:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.942631 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.942631 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.942652 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.942764 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:56:41 crc kubenswrapper[4986]: E1203 12:56:41.942869 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:56:41 crc kubenswrapper[4986]: E1203 12:56:41.942949 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:56:41 crc kubenswrapper[4986]: E1203 12:56:41.943085 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:56:41 crc kubenswrapper[4986]: E1203 12:56:41.943128 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rl2mt" podUID="ea24f625-ded4-4e37-a23b-f96fe691b0dd" Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.954437 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.954492 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.954513 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.954534 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:41 crc kubenswrapper[4986]: I1203 12:56:41.954549 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:41Z","lastTransitionTime":"2025-12-03T12:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.056938 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.056969 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.056977 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.056990 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.057000 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:42Z","lastTransitionTime":"2025-12-03T12:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.159196 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.159252 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.159268 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.159307 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.159320 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:42Z","lastTransitionTime":"2025-12-03T12:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.262077 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.262127 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.262139 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.262156 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.262169 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:42Z","lastTransitionTime":"2025-12-03T12:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.365180 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.365225 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.365236 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.365254 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.365271 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:42Z","lastTransitionTime":"2025-12-03T12:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.438823 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.438904 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.438927 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.438960 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.438976 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:42Z","lastTransitionTime":"2025-12-03T12:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:42 crc kubenswrapper[4986]: E1203 12:56:42.460226 4986 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b28ccd4-3e47-4e26-a3b8-f87de96f7586\\\",\\\"systemUUID\\\":\\\"52e71ce5-258d-4951-aa26-5d4aac0725ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:42Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.465403 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.465460 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.465480 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.465502 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.465517 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:42Z","lastTransitionTime":"2025-12-03T12:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:42 crc kubenswrapper[4986]: E1203 12:56:42.478970 4986 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b28ccd4-3e47-4e26-a3b8-f87de96f7586\\\",\\\"systemUUID\\\":\\\"52e71ce5-258d-4951-aa26-5d4aac0725ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:42Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.482134 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.482176 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.482189 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.482205 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.482218 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:42Z","lastTransitionTime":"2025-12-03T12:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:42 crc kubenswrapper[4986]: E1203 12:56:42.493150 4986 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b28ccd4-3e47-4e26-a3b8-f87de96f7586\\\",\\\"systemUUID\\\":\\\"52e71ce5-258d-4951-aa26-5d4aac0725ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:42Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.496615 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.496675 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.496684 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.496697 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.496705 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:42Z","lastTransitionTime":"2025-12-03T12:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:42 crc kubenswrapper[4986]: E1203 12:56:42.507749 4986 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b28ccd4-3e47-4e26-a3b8-f87de96f7586\\\",\\\"systemUUID\\\":\\\"52e71ce5-258d-4951-aa26-5d4aac0725ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:42Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.511035 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.511072 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.511079 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.511093 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.511103 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:42Z","lastTransitionTime":"2025-12-03T12:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:42 crc kubenswrapper[4986]: E1203 12:56:42.522564 4986 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b28ccd4-3e47-4e26-a3b8-f87de96f7586\\\",\\\"systemUUID\\\":\\\"52e71ce5-258d-4951-aa26-5d4aac0725ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:42Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:42 crc kubenswrapper[4986]: E1203 12:56:42.522683 4986 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.524366 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.524402 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.524413 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.524430 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.524440 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:42Z","lastTransitionTime":"2025-12-03T12:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.627594 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.627640 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.627655 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.627676 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.627687 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:42Z","lastTransitionTime":"2025-12-03T12:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.730054 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.730092 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.730104 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.730122 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.730133 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:42Z","lastTransitionTime":"2025-12-03T12:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.833092 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.833161 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.833173 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.833190 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.833202 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:42Z","lastTransitionTime":"2025-12-03T12:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.935718 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.935767 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.935779 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.935795 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:42 crc kubenswrapper[4986]: I1203 12:56:42.935808 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:42Z","lastTransitionTime":"2025-12-03T12:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:43 crc kubenswrapper[4986]: I1203 12:56:43.038738 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:43 crc kubenswrapper[4986]: I1203 12:56:43.038782 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:43 crc kubenswrapper[4986]: I1203 12:56:43.038792 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:43 crc kubenswrapper[4986]: I1203 12:56:43.038808 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:43 crc kubenswrapper[4986]: I1203 12:56:43.038819 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:43Z","lastTransitionTime":"2025-12-03T12:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:43 crc kubenswrapper[4986]: I1203 12:56:43.140966 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:43 crc kubenswrapper[4986]: I1203 12:56:43.141006 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:43 crc kubenswrapper[4986]: I1203 12:56:43.141016 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:43 crc kubenswrapper[4986]: I1203 12:56:43.141030 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:43 crc kubenswrapper[4986]: I1203 12:56:43.141040 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:43Z","lastTransitionTime":"2025-12-03T12:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:43 crc kubenswrapper[4986]: I1203 12:56:43.243344 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:43 crc kubenswrapper[4986]: I1203 12:56:43.243393 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:43 crc kubenswrapper[4986]: I1203 12:56:43.243404 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:43 crc kubenswrapper[4986]: I1203 12:56:43.243423 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:43 crc kubenswrapper[4986]: I1203 12:56:43.243436 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:43Z","lastTransitionTime":"2025-12-03T12:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:43 crc kubenswrapper[4986]: I1203 12:56:43.346228 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:43 crc kubenswrapper[4986]: I1203 12:56:43.346584 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:43 crc kubenswrapper[4986]: I1203 12:56:43.346684 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:43 crc kubenswrapper[4986]: I1203 12:56:43.346767 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:43 crc kubenswrapper[4986]: I1203 12:56:43.346855 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:43Z","lastTransitionTime":"2025-12-03T12:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:43 crc kubenswrapper[4986]: I1203 12:56:43.449137 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:43 crc kubenswrapper[4986]: I1203 12:56:43.449173 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:43 crc kubenswrapper[4986]: I1203 12:56:43.449183 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:43 crc kubenswrapper[4986]: I1203 12:56:43.449197 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:43 crc kubenswrapper[4986]: I1203 12:56:43.449208 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:43Z","lastTransitionTime":"2025-12-03T12:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:43 crc kubenswrapper[4986]: I1203 12:56:43.552038 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:43 crc kubenswrapper[4986]: I1203 12:56:43.552078 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:43 crc kubenswrapper[4986]: I1203 12:56:43.552089 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:43 crc kubenswrapper[4986]: I1203 12:56:43.552106 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:43 crc kubenswrapper[4986]: I1203 12:56:43.552117 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:43Z","lastTransitionTime":"2025-12-03T12:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:43 crc kubenswrapper[4986]: I1203 12:56:43.655140 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:43 crc kubenswrapper[4986]: I1203 12:56:43.655179 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:43 crc kubenswrapper[4986]: I1203 12:56:43.655191 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:43 crc kubenswrapper[4986]: I1203 12:56:43.655209 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:43 crc kubenswrapper[4986]: I1203 12:56:43.655220 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:43Z","lastTransitionTime":"2025-12-03T12:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:43 crc kubenswrapper[4986]: I1203 12:56:43.757954 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:43 crc kubenswrapper[4986]: I1203 12:56:43.757987 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:43 crc kubenswrapper[4986]: I1203 12:56:43.757997 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:43 crc kubenswrapper[4986]: I1203 12:56:43.758013 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:43 crc kubenswrapper[4986]: I1203 12:56:43.758023 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:43Z","lastTransitionTime":"2025-12-03T12:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:43 crc kubenswrapper[4986]: I1203 12:56:43.860648 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:43 crc kubenswrapper[4986]: I1203 12:56:43.860981 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:43 crc kubenswrapper[4986]: I1203 12:56:43.861241 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:43 crc kubenswrapper[4986]: I1203 12:56:43.861598 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:43 crc kubenswrapper[4986]: I1203 12:56:43.861789 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:43Z","lastTransitionTime":"2025-12-03T12:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:43 crc kubenswrapper[4986]: I1203 12:56:43.943372 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:56:43 crc kubenswrapper[4986]: I1203 12:56:43.943429 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:56:43 crc kubenswrapper[4986]: I1203 12:56:43.943429 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:56:43 crc kubenswrapper[4986]: I1203 12:56:43.943374 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:56:43 crc kubenswrapper[4986]: E1203 12:56:43.943528 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:56:43 crc kubenswrapper[4986]: E1203 12:56:43.943625 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:56:43 crc kubenswrapper[4986]: E1203 12:56:43.943704 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl2mt" podUID="ea24f625-ded4-4e37-a23b-f96fe691b0dd" Dec 03 12:56:43 crc kubenswrapper[4986]: E1203 12:56:43.943754 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:56:43 crc kubenswrapper[4986]: I1203 12:56:43.964385 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:43 crc kubenswrapper[4986]: I1203 12:56:43.964421 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:43 crc kubenswrapper[4986]: I1203 12:56:43.964430 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:43 crc kubenswrapper[4986]: I1203 12:56:43.964446 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:43 crc kubenswrapper[4986]: I1203 12:56:43.964456 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:43Z","lastTransitionTime":"2025-12-03T12:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:44 crc kubenswrapper[4986]: I1203 12:56:44.066453 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:44 crc kubenswrapper[4986]: I1203 12:56:44.066495 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:44 crc kubenswrapper[4986]: I1203 12:56:44.066506 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:44 crc kubenswrapper[4986]: I1203 12:56:44.066522 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:44 crc kubenswrapper[4986]: I1203 12:56:44.066535 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:44Z","lastTransitionTime":"2025-12-03T12:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:44 crc kubenswrapper[4986]: I1203 12:56:44.169235 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:44 crc kubenswrapper[4986]: I1203 12:56:44.169271 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:44 crc kubenswrapper[4986]: I1203 12:56:44.169299 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:44 crc kubenswrapper[4986]: I1203 12:56:44.169319 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:44 crc kubenswrapper[4986]: I1203 12:56:44.169332 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:44Z","lastTransitionTime":"2025-12-03T12:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:44 crc kubenswrapper[4986]: I1203 12:56:44.272109 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:44 crc kubenswrapper[4986]: I1203 12:56:44.272199 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:44 crc kubenswrapper[4986]: I1203 12:56:44.272219 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:44 crc kubenswrapper[4986]: I1203 12:56:44.272236 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:44 crc kubenswrapper[4986]: I1203 12:56:44.272247 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:44Z","lastTransitionTime":"2025-12-03T12:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:44 crc kubenswrapper[4986]: I1203 12:56:44.375034 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:44 crc kubenswrapper[4986]: I1203 12:56:44.375099 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:44 crc kubenswrapper[4986]: I1203 12:56:44.375117 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:44 crc kubenswrapper[4986]: I1203 12:56:44.375136 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:44 crc kubenswrapper[4986]: I1203 12:56:44.375147 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:44Z","lastTransitionTime":"2025-12-03T12:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:44 crc kubenswrapper[4986]: I1203 12:56:44.477133 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:44 crc kubenswrapper[4986]: I1203 12:56:44.477177 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:44 crc kubenswrapper[4986]: I1203 12:56:44.477197 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:44 crc kubenswrapper[4986]: I1203 12:56:44.477223 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:44 crc kubenswrapper[4986]: I1203 12:56:44.477236 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:44Z","lastTransitionTime":"2025-12-03T12:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:44 crc kubenswrapper[4986]: I1203 12:56:44.579304 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:44 crc kubenswrapper[4986]: I1203 12:56:44.579353 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:44 crc kubenswrapper[4986]: I1203 12:56:44.579365 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:44 crc kubenswrapper[4986]: I1203 12:56:44.579384 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:44 crc kubenswrapper[4986]: I1203 12:56:44.579396 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:44Z","lastTransitionTime":"2025-12-03T12:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:44 crc kubenswrapper[4986]: I1203 12:56:44.682629 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:44 crc kubenswrapper[4986]: I1203 12:56:44.682675 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:44 crc kubenswrapper[4986]: I1203 12:56:44.682687 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:44 crc kubenswrapper[4986]: I1203 12:56:44.682704 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:44 crc kubenswrapper[4986]: I1203 12:56:44.682714 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:44Z","lastTransitionTime":"2025-12-03T12:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:44 crc kubenswrapper[4986]: I1203 12:56:44.785514 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:44 crc kubenswrapper[4986]: I1203 12:56:44.785584 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:44 crc kubenswrapper[4986]: I1203 12:56:44.785595 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:44 crc kubenswrapper[4986]: I1203 12:56:44.785633 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:44 crc kubenswrapper[4986]: I1203 12:56:44.785648 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:44Z","lastTransitionTime":"2025-12-03T12:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:44 crc kubenswrapper[4986]: I1203 12:56:44.888406 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:44 crc kubenswrapper[4986]: I1203 12:56:44.888509 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:44 crc kubenswrapper[4986]: I1203 12:56:44.888527 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:44 crc kubenswrapper[4986]: I1203 12:56:44.888543 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:44 crc kubenswrapper[4986]: I1203 12:56:44.888555 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:44Z","lastTransitionTime":"2025-12-03T12:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:44 crc kubenswrapper[4986]: I1203 12:56:44.990882 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:44 crc kubenswrapper[4986]: I1203 12:56:44.990943 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:44 crc kubenswrapper[4986]: I1203 12:56:44.990958 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:44 crc kubenswrapper[4986]: I1203 12:56:44.990978 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:44 crc kubenswrapper[4986]: I1203 12:56:44.990995 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:44Z","lastTransitionTime":"2025-12-03T12:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:45 crc kubenswrapper[4986]: I1203 12:56:45.093737 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:45 crc kubenswrapper[4986]: I1203 12:56:45.093783 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:45 crc kubenswrapper[4986]: I1203 12:56:45.093794 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:45 crc kubenswrapper[4986]: I1203 12:56:45.093811 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:45 crc kubenswrapper[4986]: I1203 12:56:45.093821 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:45Z","lastTransitionTime":"2025-12-03T12:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:45 crc kubenswrapper[4986]: I1203 12:56:45.196469 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:45 crc kubenswrapper[4986]: I1203 12:56:45.196520 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:45 crc kubenswrapper[4986]: I1203 12:56:45.196531 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:45 crc kubenswrapper[4986]: I1203 12:56:45.196549 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:45 crc kubenswrapper[4986]: I1203 12:56:45.196561 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:45Z","lastTransitionTime":"2025-12-03T12:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:45 crc kubenswrapper[4986]: I1203 12:56:45.298587 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:45 crc kubenswrapper[4986]: I1203 12:56:45.298646 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:45 crc kubenswrapper[4986]: I1203 12:56:45.298657 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:45 crc kubenswrapper[4986]: I1203 12:56:45.298677 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:45 crc kubenswrapper[4986]: I1203 12:56:45.298688 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:45Z","lastTransitionTime":"2025-12-03T12:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:45 crc kubenswrapper[4986]: I1203 12:56:45.400780 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:45 crc kubenswrapper[4986]: I1203 12:56:45.400816 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:45 crc kubenswrapper[4986]: I1203 12:56:45.400825 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:45 crc kubenswrapper[4986]: I1203 12:56:45.400839 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:45 crc kubenswrapper[4986]: I1203 12:56:45.400847 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:45Z","lastTransitionTime":"2025-12-03T12:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:45 crc kubenswrapper[4986]: I1203 12:56:45.502904 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:45 crc kubenswrapper[4986]: I1203 12:56:45.502934 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:45 crc kubenswrapper[4986]: I1203 12:56:45.502944 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:45 crc kubenswrapper[4986]: I1203 12:56:45.502956 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:45 crc kubenswrapper[4986]: I1203 12:56:45.502966 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:45Z","lastTransitionTime":"2025-12-03T12:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:45 crc kubenswrapper[4986]: I1203 12:56:45.605533 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:45 crc kubenswrapper[4986]: I1203 12:56:45.605597 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:45 crc kubenswrapper[4986]: I1203 12:56:45.605614 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:45 crc kubenswrapper[4986]: I1203 12:56:45.605637 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:45 crc kubenswrapper[4986]: I1203 12:56:45.605654 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:45Z","lastTransitionTime":"2025-12-03T12:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:45 crc kubenswrapper[4986]: I1203 12:56:45.707591 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:45 crc kubenswrapper[4986]: I1203 12:56:45.707640 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:45 crc kubenswrapper[4986]: I1203 12:56:45.707652 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:45 crc kubenswrapper[4986]: I1203 12:56:45.707675 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:45 crc kubenswrapper[4986]: I1203 12:56:45.707689 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:45Z","lastTransitionTime":"2025-12-03T12:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:45 crc kubenswrapper[4986]: I1203 12:56:45.810753 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:45 crc kubenswrapper[4986]: I1203 12:56:45.810797 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:45 crc kubenswrapper[4986]: I1203 12:56:45.810808 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:45 crc kubenswrapper[4986]: I1203 12:56:45.810824 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:45 crc kubenswrapper[4986]: I1203 12:56:45.810836 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:45Z","lastTransitionTime":"2025-12-03T12:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:45 crc kubenswrapper[4986]: I1203 12:56:45.913491 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:45 crc kubenswrapper[4986]: I1203 12:56:45.913555 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:45 crc kubenswrapper[4986]: I1203 12:56:45.913566 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:45 crc kubenswrapper[4986]: I1203 12:56:45.913585 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:45 crc kubenswrapper[4986]: I1203 12:56:45.913604 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:45Z","lastTransitionTime":"2025-12-03T12:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:45 crc kubenswrapper[4986]: I1203 12:56:45.943032 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:56:45 crc kubenswrapper[4986]: I1203 12:56:45.943078 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:56:45 crc kubenswrapper[4986]: I1203 12:56:45.943032 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:56:45 crc kubenswrapper[4986]: E1203 12:56:45.943153 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:56:45 crc kubenswrapper[4986]: I1203 12:56:45.943303 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:56:45 crc kubenswrapper[4986]: E1203 12:56:45.943377 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:56:45 crc kubenswrapper[4986]: E1203 12:56:45.943541 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:56:45 crc kubenswrapper[4986]: E1203 12:56:45.943719 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl2mt" podUID="ea24f625-ded4-4e37-a23b-f96fe691b0dd" Dec 03 12:56:46 crc kubenswrapper[4986]: I1203 12:56:46.015886 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:46 crc kubenswrapper[4986]: I1203 12:56:46.015921 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:46 crc kubenswrapper[4986]: I1203 12:56:46.015930 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:46 crc kubenswrapper[4986]: I1203 12:56:46.015943 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:46 crc kubenswrapper[4986]: I1203 12:56:46.015951 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:46Z","lastTransitionTime":"2025-12-03T12:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:46 crc kubenswrapper[4986]: I1203 12:56:46.118380 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:46 crc kubenswrapper[4986]: I1203 12:56:46.118421 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:46 crc kubenswrapper[4986]: I1203 12:56:46.118430 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:46 crc kubenswrapper[4986]: I1203 12:56:46.118444 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:46 crc kubenswrapper[4986]: I1203 12:56:46.118454 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:46Z","lastTransitionTime":"2025-12-03T12:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:46 crc kubenswrapper[4986]: I1203 12:56:46.221007 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:46 crc kubenswrapper[4986]: I1203 12:56:46.221044 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:46 crc kubenswrapper[4986]: I1203 12:56:46.221052 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:46 crc kubenswrapper[4986]: I1203 12:56:46.221067 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:46 crc kubenswrapper[4986]: I1203 12:56:46.221076 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:46Z","lastTransitionTime":"2025-12-03T12:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:46 crc kubenswrapper[4986]: I1203 12:56:46.324240 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:46 crc kubenswrapper[4986]: I1203 12:56:46.324291 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:46 crc kubenswrapper[4986]: I1203 12:56:46.324303 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:46 crc kubenswrapper[4986]: I1203 12:56:46.324318 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:46 crc kubenswrapper[4986]: I1203 12:56:46.324329 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:46Z","lastTransitionTime":"2025-12-03T12:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:46 crc kubenswrapper[4986]: I1203 12:56:46.426722 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:46 crc kubenswrapper[4986]: I1203 12:56:46.426759 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:46 crc kubenswrapper[4986]: I1203 12:56:46.426770 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:46 crc kubenswrapper[4986]: I1203 12:56:46.426786 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:46 crc kubenswrapper[4986]: I1203 12:56:46.426797 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:46Z","lastTransitionTime":"2025-12-03T12:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:46 crc kubenswrapper[4986]: I1203 12:56:46.529241 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:46 crc kubenswrapper[4986]: I1203 12:56:46.529335 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:46 crc kubenswrapper[4986]: I1203 12:56:46.529350 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:46 crc kubenswrapper[4986]: I1203 12:56:46.529367 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:46 crc kubenswrapper[4986]: I1203 12:56:46.529378 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:46Z","lastTransitionTime":"2025-12-03T12:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:46 crc kubenswrapper[4986]: I1203 12:56:46.631697 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:46 crc kubenswrapper[4986]: I1203 12:56:46.631739 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:46 crc kubenswrapper[4986]: I1203 12:56:46.631751 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:46 crc kubenswrapper[4986]: I1203 12:56:46.631766 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:46 crc kubenswrapper[4986]: I1203 12:56:46.631776 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:46Z","lastTransitionTime":"2025-12-03T12:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:46 crc kubenswrapper[4986]: I1203 12:56:46.734323 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:46 crc kubenswrapper[4986]: I1203 12:56:46.734395 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:46 crc kubenswrapper[4986]: I1203 12:56:46.734414 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:46 crc kubenswrapper[4986]: I1203 12:56:46.734436 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:46 crc kubenswrapper[4986]: I1203 12:56:46.734448 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:46Z","lastTransitionTime":"2025-12-03T12:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:46 crc kubenswrapper[4986]: I1203 12:56:46.836248 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:46 crc kubenswrapper[4986]: I1203 12:56:46.836334 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:46 crc kubenswrapper[4986]: I1203 12:56:46.836349 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:46 crc kubenswrapper[4986]: I1203 12:56:46.836367 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:46 crc kubenswrapper[4986]: I1203 12:56:46.836379 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:46Z","lastTransitionTime":"2025-12-03T12:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:46 crc kubenswrapper[4986]: I1203 12:56:46.938237 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:46 crc kubenswrapper[4986]: I1203 12:56:46.938305 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:46 crc kubenswrapper[4986]: I1203 12:56:46.938319 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:46 crc kubenswrapper[4986]: I1203 12:56:46.938334 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:46 crc kubenswrapper[4986]: I1203 12:56:46.938347 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:46Z","lastTransitionTime":"2025-12-03T12:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:47 crc kubenswrapper[4986]: I1203 12:56:47.040109 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:47 crc kubenswrapper[4986]: I1203 12:56:47.040153 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:47 crc kubenswrapper[4986]: I1203 12:56:47.040163 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:47 crc kubenswrapper[4986]: I1203 12:56:47.040180 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:47 crc kubenswrapper[4986]: I1203 12:56:47.040191 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:47Z","lastTransitionTime":"2025-12-03T12:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:47 crc kubenswrapper[4986]: I1203 12:56:47.143468 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:47 crc kubenswrapper[4986]: I1203 12:56:47.143511 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:47 crc kubenswrapper[4986]: I1203 12:56:47.143520 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:47 crc kubenswrapper[4986]: I1203 12:56:47.143538 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:47 crc kubenswrapper[4986]: I1203 12:56:47.143548 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:47Z","lastTransitionTime":"2025-12-03T12:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:47 crc kubenswrapper[4986]: I1203 12:56:47.245964 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:47 crc kubenswrapper[4986]: I1203 12:56:47.246003 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:47 crc kubenswrapper[4986]: I1203 12:56:47.246014 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:47 crc kubenswrapper[4986]: I1203 12:56:47.246028 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:47 crc kubenswrapper[4986]: I1203 12:56:47.246039 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:47Z","lastTransitionTime":"2025-12-03T12:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:47 crc kubenswrapper[4986]: I1203 12:56:47.348381 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:47 crc kubenswrapper[4986]: I1203 12:56:47.348430 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:47 crc kubenswrapper[4986]: I1203 12:56:47.348437 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:47 crc kubenswrapper[4986]: I1203 12:56:47.348450 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:47 crc kubenswrapper[4986]: I1203 12:56:47.348461 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:47Z","lastTransitionTime":"2025-12-03T12:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:47 crc kubenswrapper[4986]: I1203 12:56:47.450863 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:47 crc kubenswrapper[4986]: I1203 12:56:47.450930 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:47 crc kubenswrapper[4986]: I1203 12:56:47.450940 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:47 crc kubenswrapper[4986]: I1203 12:56:47.450954 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:47 crc kubenswrapper[4986]: I1203 12:56:47.450965 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:47Z","lastTransitionTime":"2025-12-03T12:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:47 crc kubenswrapper[4986]: I1203 12:56:47.553610 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:47 crc kubenswrapper[4986]: I1203 12:56:47.553659 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:47 crc kubenswrapper[4986]: I1203 12:56:47.553675 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:47 crc kubenswrapper[4986]: I1203 12:56:47.553693 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:47 crc kubenswrapper[4986]: I1203 12:56:47.553707 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:47Z","lastTransitionTime":"2025-12-03T12:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:47 crc kubenswrapper[4986]: I1203 12:56:47.656058 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:47 crc kubenswrapper[4986]: I1203 12:56:47.656100 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:47 crc kubenswrapper[4986]: I1203 12:56:47.656109 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:47 crc kubenswrapper[4986]: I1203 12:56:47.656123 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:47 crc kubenswrapper[4986]: I1203 12:56:47.656133 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:47Z","lastTransitionTime":"2025-12-03T12:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:47 crc kubenswrapper[4986]: I1203 12:56:47.758925 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:47 crc kubenswrapper[4986]: I1203 12:56:47.759175 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:47 crc kubenswrapper[4986]: I1203 12:56:47.759305 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:47 crc kubenswrapper[4986]: I1203 12:56:47.759391 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:47 crc kubenswrapper[4986]: I1203 12:56:47.759465 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:47Z","lastTransitionTime":"2025-12-03T12:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:47 crc kubenswrapper[4986]: I1203 12:56:47.861771 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:47 crc kubenswrapper[4986]: I1203 12:56:47.861858 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:47 crc kubenswrapper[4986]: I1203 12:56:47.861870 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:47 crc kubenswrapper[4986]: I1203 12:56:47.862111 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:47 crc kubenswrapper[4986]: I1203 12:56:47.862124 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:47Z","lastTransitionTime":"2025-12-03T12:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:47 crc kubenswrapper[4986]: I1203 12:56:47.895444 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ea24f625-ded4-4e37-a23b-f96fe691b0dd-metrics-certs\") pod \"network-metrics-daemon-rl2mt\" (UID: \"ea24f625-ded4-4e37-a23b-f96fe691b0dd\") " pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:56:47 crc kubenswrapper[4986]: E1203 12:56:47.895639 4986 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 12:56:47 crc kubenswrapper[4986]: E1203 12:56:47.895706 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea24f625-ded4-4e37-a23b-f96fe691b0dd-metrics-certs podName:ea24f625-ded4-4e37-a23b-f96fe691b0dd nodeName:}" failed. No retries permitted until 2025-12-03 12:57:19.895687782 +0000 UTC m=+99.362118973 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ea24f625-ded4-4e37-a23b-f96fe691b0dd-metrics-certs") pod "network-metrics-daemon-rl2mt" (UID: "ea24f625-ded4-4e37-a23b-f96fe691b0dd") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 12:56:47 crc kubenswrapper[4986]: I1203 12:56:47.943223 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:56:47 crc kubenswrapper[4986]: E1203 12:56:47.943392 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:56:47 crc kubenswrapper[4986]: I1203 12:56:47.943451 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:56:47 crc kubenswrapper[4986]: I1203 12:56:47.943588 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:56:47 crc kubenswrapper[4986]: E1203 12:56:47.943654 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:56:47 crc kubenswrapper[4986]: E1203 12:56:47.943583 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl2mt" podUID="ea24f625-ded4-4e37-a23b-f96fe691b0dd" Dec 03 12:56:47 crc kubenswrapper[4986]: I1203 12:56:47.943703 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:56:47 crc kubenswrapper[4986]: E1203 12:56:47.943760 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:56:47 crc kubenswrapper[4986]: I1203 12:56:47.964679 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:47 crc kubenswrapper[4986]: I1203 12:56:47.964921 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:47 crc kubenswrapper[4986]: I1203 12:56:47.965003 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:47 crc kubenswrapper[4986]: I1203 12:56:47.965080 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:47 crc kubenswrapper[4986]: I1203 12:56:47.965144 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:47Z","lastTransitionTime":"2025-12-03T12:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:48 crc kubenswrapper[4986]: I1203 12:56:48.067603 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:48 crc kubenswrapper[4986]: I1203 12:56:48.068241 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:48 crc kubenswrapper[4986]: I1203 12:56:48.068372 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:48 crc kubenswrapper[4986]: I1203 12:56:48.068582 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:48 crc kubenswrapper[4986]: I1203 12:56:48.068809 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:48Z","lastTransitionTime":"2025-12-03T12:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:48 crc kubenswrapper[4986]: I1203 12:56:48.171993 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:48 crc kubenswrapper[4986]: I1203 12:56:48.172039 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:48 crc kubenswrapper[4986]: I1203 12:56:48.172055 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:48 crc kubenswrapper[4986]: I1203 12:56:48.172074 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:48 crc kubenswrapper[4986]: I1203 12:56:48.172085 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:48Z","lastTransitionTime":"2025-12-03T12:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:48 crc kubenswrapper[4986]: I1203 12:56:48.275636 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:48 crc kubenswrapper[4986]: I1203 12:56:48.275706 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:48 crc kubenswrapper[4986]: I1203 12:56:48.275729 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:48 crc kubenswrapper[4986]: I1203 12:56:48.275761 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:48 crc kubenswrapper[4986]: I1203 12:56:48.275788 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:48Z","lastTransitionTime":"2025-12-03T12:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:48 crc kubenswrapper[4986]: I1203 12:56:48.379223 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:48 crc kubenswrapper[4986]: I1203 12:56:48.379295 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:48 crc kubenswrapper[4986]: I1203 12:56:48.379309 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:48 crc kubenswrapper[4986]: I1203 12:56:48.379329 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:48 crc kubenswrapper[4986]: I1203 12:56:48.379344 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:48Z","lastTransitionTime":"2025-12-03T12:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:48 crc kubenswrapper[4986]: I1203 12:56:48.482086 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:48 crc kubenswrapper[4986]: I1203 12:56:48.482384 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:48 crc kubenswrapper[4986]: I1203 12:56:48.482511 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:48 crc kubenswrapper[4986]: I1203 12:56:48.482604 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:48 crc kubenswrapper[4986]: I1203 12:56:48.482686 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:48Z","lastTransitionTime":"2025-12-03T12:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:48 crc kubenswrapper[4986]: I1203 12:56:48.585172 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:48 crc kubenswrapper[4986]: I1203 12:56:48.585217 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:48 crc kubenswrapper[4986]: I1203 12:56:48.585227 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:48 crc kubenswrapper[4986]: I1203 12:56:48.585242 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:48 crc kubenswrapper[4986]: I1203 12:56:48.585253 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:48Z","lastTransitionTime":"2025-12-03T12:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:48 crc kubenswrapper[4986]: I1203 12:56:48.691510 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:48 crc kubenswrapper[4986]: I1203 12:56:48.691755 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:48 crc kubenswrapper[4986]: I1203 12:56:48.691879 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:48 crc kubenswrapper[4986]: I1203 12:56:48.691963 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:48 crc kubenswrapper[4986]: I1203 12:56:48.692043 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:48Z","lastTransitionTime":"2025-12-03T12:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:48 crc kubenswrapper[4986]: I1203 12:56:48.793984 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:48 crc kubenswrapper[4986]: I1203 12:56:48.794024 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:48 crc kubenswrapper[4986]: I1203 12:56:48.794034 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:48 crc kubenswrapper[4986]: I1203 12:56:48.794048 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:48 crc kubenswrapper[4986]: I1203 12:56:48.794056 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:48Z","lastTransitionTime":"2025-12-03T12:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:48 crc kubenswrapper[4986]: I1203 12:56:48.896948 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:48 crc kubenswrapper[4986]: I1203 12:56:48.896991 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:48 crc kubenswrapper[4986]: I1203 12:56:48.897002 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:48 crc kubenswrapper[4986]: I1203 12:56:48.897016 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:48 crc kubenswrapper[4986]: I1203 12:56:48.897026 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:48Z","lastTransitionTime":"2025-12-03T12:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:48 crc kubenswrapper[4986]: I1203 12:56:48.999074 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:48 crc kubenswrapper[4986]: I1203 12:56:48.999118 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:48 crc kubenswrapper[4986]: I1203 12:56:48.999129 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:48 crc kubenswrapper[4986]: I1203 12:56:48.999145 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:48 crc kubenswrapper[4986]: I1203 12:56:48.999156 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:48Z","lastTransitionTime":"2025-12-03T12:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:49 crc kubenswrapper[4986]: I1203 12:56:49.101982 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:49 crc kubenswrapper[4986]: I1203 12:56:49.102034 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:49 crc kubenswrapper[4986]: I1203 12:56:49.102045 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:49 crc kubenswrapper[4986]: I1203 12:56:49.102063 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:49 crc kubenswrapper[4986]: I1203 12:56:49.102077 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:49Z","lastTransitionTime":"2025-12-03T12:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:49 crc kubenswrapper[4986]: I1203 12:56:49.204981 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:49 crc kubenswrapper[4986]: I1203 12:56:49.205017 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:49 crc kubenswrapper[4986]: I1203 12:56:49.205026 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:49 crc kubenswrapper[4986]: I1203 12:56:49.205043 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:49 crc kubenswrapper[4986]: I1203 12:56:49.205052 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:49Z","lastTransitionTime":"2025-12-03T12:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:49 crc kubenswrapper[4986]: I1203 12:56:49.307647 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:49 crc kubenswrapper[4986]: I1203 12:56:49.307686 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:49 crc kubenswrapper[4986]: I1203 12:56:49.307695 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:49 crc kubenswrapper[4986]: I1203 12:56:49.307710 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:49 crc kubenswrapper[4986]: I1203 12:56:49.307720 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:49Z","lastTransitionTime":"2025-12-03T12:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:49 crc kubenswrapper[4986]: I1203 12:56:49.410603 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:49 crc kubenswrapper[4986]: I1203 12:56:49.410644 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:49 crc kubenswrapper[4986]: I1203 12:56:49.410657 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:49 crc kubenswrapper[4986]: I1203 12:56:49.410675 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:49 crc kubenswrapper[4986]: I1203 12:56:49.410689 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:49Z","lastTransitionTime":"2025-12-03T12:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:49 crc kubenswrapper[4986]: I1203 12:56:49.513685 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:49 crc kubenswrapper[4986]: I1203 12:56:49.513736 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:49 crc kubenswrapper[4986]: I1203 12:56:49.513747 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:49 crc kubenswrapper[4986]: I1203 12:56:49.513765 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:49 crc kubenswrapper[4986]: I1203 12:56:49.513776 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:49Z","lastTransitionTime":"2025-12-03T12:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:49 crc kubenswrapper[4986]: I1203 12:56:49.615565 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:49 crc kubenswrapper[4986]: I1203 12:56:49.615599 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:49 crc kubenswrapper[4986]: I1203 12:56:49.615607 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:49 crc kubenswrapper[4986]: I1203 12:56:49.615621 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:49 crc kubenswrapper[4986]: I1203 12:56:49.615631 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:49Z","lastTransitionTime":"2025-12-03T12:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:49 crc kubenswrapper[4986]: I1203 12:56:49.718095 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:49 crc kubenswrapper[4986]: I1203 12:56:49.718132 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:49 crc kubenswrapper[4986]: I1203 12:56:49.718148 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:49 crc kubenswrapper[4986]: I1203 12:56:49.718167 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:49 crc kubenswrapper[4986]: I1203 12:56:49.718177 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:49Z","lastTransitionTime":"2025-12-03T12:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:49 crc kubenswrapper[4986]: I1203 12:56:49.820622 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:49 crc kubenswrapper[4986]: I1203 12:56:49.820660 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:49 crc kubenswrapper[4986]: I1203 12:56:49.820670 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:49 crc kubenswrapper[4986]: I1203 12:56:49.820684 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:49 crc kubenswrapper[4986]: I1203 12:56:49.820694 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:49Z","lastTransitionTime":"2025-12-03T12:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:49 crc kubenswrapper[4986]: I1203 12:56:49.922887 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:49 crc kubenswrapper[4986]: I1203 12:56:49.923142 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:49 crc kubenswrapper[4986]: I1203 12:56:49.923204 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:49 crc kubenswrapper[4986]: I1203 12:56:49.923266 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:49 crc kubenswrapper[4986]: I1203 12:56:49.923357 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:49Z","lastTransitionTime":"2025-12-03T12:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:49 crc kubenswrapper[4986]: I1203 12:56:49.943122 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:56:49 crc kubenswrapper[4986]: I1203 12:56:49.943139 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:56:49 crc kubenswrapper[4986]: E1203 12:56:49.943234 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:56:49 crc kubenswrapper[4986]: E1203 12:56:49.943335 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl2mt" podUID="ea24f625-ded4-4e37-a23b-f96fe691b0dd" Dec 03 12:56:49 crc kubenswrapper[4986]: I1203 12:56:49.943672 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:56:49 crc kubenswrapper[4986]: I1203 12:56:49.943806 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:56:49 crc kubenswrapper[4986]: E1203 12:56:49.943984 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:56:49 crc kubenswrapper[4986]: I1203 12:56:49.944206 4986 scope.go:117] "RemoveContainer" containerID="8b002220fe79bb266ddd4f28ab0b9f29bb0ea1e8d204f558d023242437694625" Dec 03 12:56:49 crc kubenswrapper[4986]: E1203 12:56:49.944183 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:56:49 crc kubenswrapper[4986]: E1203 12:56:49.944477 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-9nf52_openshift-ovn-kubernetes(d3a45156-295b-4093-80e7-2059f81ddbd7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" podUID="d3a45156-295b-4093-80e7-2059f81ddbd7" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.025958 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.026198 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.026266 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.026370 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.026434 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:50Z","lastTransitionTime":"2025-12-03T12:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.128206 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.128248 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.128258 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.128275 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.128362 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:50Z","lastTransitionTime":"2025-12-03T12:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.230948 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.230983 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.230992 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.231007 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.231017 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:50Z","lastTransitionTime":"2025-12-03T12:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.321886 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-px97g_97196b6d-75cc-4de4-8805-f9ce3fbd4230/kube-multus/0.log" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.321935 4986 generic.go:334] "Generic (PLEG): container finished" podID="97196b6d-75cc-4de4-8805-f9ce3fbd4230" containerID="c28bf1aff27ba408262b5fc561e084ff3813ce95a7c14b57fea24d409eb33bea" exitCode=1 Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.321986 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-px97g" event={"ID":"97196b6d-75cc-4de4-8805-f9ce3fbd4230","Type":"ContainerDied","Data":"c28bf1aff27ba408262b5fc561e084ff3813ce95a7c14b57fea24d409eb33bea"} Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.322493 4986 scope.go:117] "RemoveContainer" containerID="c28bf1aff27ba408262b5fc561e084ff3813ce95a7c14b57fea24d409eb33bea" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.333865 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.334062 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.334084 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.334102 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.334122 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:50Z","lastTransitionTime":"2025-12-03T12:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.335523 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-px97g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97196b6d-75cc-4de4-8805-f9ce3fbd4230\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c28bf1aff27ba408262b5fc561e084ff3813ce95a7c14b57fea24d409eb33bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c28bf1aff27ba408262b5fc561e084ff3813ce95a7c14b57fea24d409eb33bea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T12:56:50Z\\\",\\\"message\\\":\\\"2025-12-03T12:56:04+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2e5072a4-4b1a-448b-a811-e2aaaa751560\\\\n2025-12-03T12:56:04+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2e5072a4-4b1a-448b-a811-e2aaaa751560 to /host/opt/cni/bin/\\\\n2025-12-03T12:56:05Z [verbose] multus-daemon started\\\\n2025-12-03T12:56:05Z [verbose] Readiness Indicator file check\\\\n2025-12-03T12:56:50Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v4j5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-px97g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:50Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.355018 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3a45156-295b-4093-80e7-2059f81ddbd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c79f84fc019d109bac446dd058b4bafb0b516a13fae47b1924e4f7690658ba8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be6405facfa7726cbe56c48184cba0123774e8e34d3ceb9e18daed6f35560c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d77d078572dbe7f11e91ece0d25e76bd17004167dabce267a2b568e798aae158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2843fb3fcbadb926b104a9f7c4084468b36293aebed6c692c29419a0457b62a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d94eca13821749d7e15a350c411a831d2887f4211ed092a4fb57e26e49fcd9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ff8c1f1602e57602cae3f46dd3d2bf8d7abad6bd0952f4347152678e8606d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b002220fe79bb266ddd4f28ab0b9f29bb0ea1e8d204f558d023242437694625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b002220fe79bb266ddd4f28ab0b9f29bb0ea1e8d204f558d023242437694625\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T12:56:33Z\\\",\\\"message\\\":\\\"go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 12:56:33.207496 6692 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 
12:56:33.207580 6692 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 12:56:33.207656 6692 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 12:56:33.208012 6692 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 12:56:33.208030 6692 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 12:56:33.208054 6692 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 12:56:33.208069 6692 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 12:56:33.208132 6692 factory.go:656] Stopping watch factory\\\\nI1203 12:56:33.208149 6692 ovnkube.go:599] Stopped ovnkube\\\\nI1203 12:56:33.208174 6692 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 12:56:33.208187 6692 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 12:56:33.208197 6692 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1203 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9nf52_openshift-ovn-kubernetes(d3a45156-295b-4093-80e7-2059f81ddbd7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dfc7520e7cb1e09eb10bf9615dbae11d7e73474f1b73eed481d18269b65aef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09
757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9nf52\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:50Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.366087 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2p8xn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"957171e6-7b64-4d92-b690-342e3251ed8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d417b35a888f61c40a58512bcff329faed85f3b4f8d1d29a7c6f2f43634a9090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbhn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27878a8de45ccf0e02719e2a3f9d1339d8910
db5947d423a3e7400359cc96253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbhn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2p8xn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:50Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.376921 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fszqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76bcf5c-6a62-4360-826d-ecac337c88ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d94370a14317ade02cb805783e09ffcd8c6ff07f35e9124e302028e6dfa3c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhtrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fszqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:50Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.389581 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0d71a3ffdbe593936c83e92e92e286eaff092333bb3dd54cb5bc2825ca46fbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402aeaef03d8d3c684f0d8e7fe8a080bc02d761104e3c1095b3233aade2b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:50Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.406734 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rkgd9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a939a03f-0eec-49cb-9b23-40e359e427d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b854d526fa4ac2b270c7f27c1b35b1ff0fbc671e40a928649b578f926d6b51b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd633caedf88a1bec633cef61b8b30ae422b8c8d91a012395831a09d398f0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cd633caedf88a1bec633cef61b8b30ae422b8c8d91a012395831a09d398f0f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b9e
8e1f9326660fe6d7c89af18c5f7dd1134e9f2e866da8ba9c795e13cb57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b9e8e1f9326660fe6d7c89af18c5f7dd1134e9f2e866da8ba9c795e13cb57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ef54ee9f22072e2b437f38f6d06f24e6f1da02555505bc65c488b5818c87a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ef54ee9f22072e2b437f38f6d06f24e6f1da02555505bc65c488b5818c87a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:12Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35fecc2332fdab60719b864653301c5e8322b13ace401a0ff016ff3dc09cd3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35fecc2332fdab60719b864653301c5e8322b13ace401a0ff016ff3dc09cd3d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rkgd9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:50Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.423910 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rl2mt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea24f625-ded4-4e37-a23b-f96fe691b0dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhr82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhr82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rl2mt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:50Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:50 crc 
kubenswrapper[4986]: I1203 12:56:50.437305 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.437355 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.437392 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.437412 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.437423 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:50Z","lastTransitionTime":"2025-12-03T12:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.438926 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee22d4f3-8488-4497-a757-a6dd51e84539\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4fc247caf16eab425363e12219a76c40022f0daf0bc9c8b52f4eb8e3726f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86d62d991d
6516ed42f7cad655018e1ebc85c8c1b100d624c1c2e5e8f616cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ba9819f4f1746f03a558617fc6ee14f5627c37c362d1198a303b6f1854c7bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79cba0b7394bf25be648502fc84a916d8f2fa0949edce20b5a748046ce9b6f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:50Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.454062 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:50Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.469676 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b923e52e6284c8adca87e7d3d801a432a676e4389d7ab3c0cba584c5b7d96f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:50Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.483476 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d66d06e79ffb2ecd12acc0380e5e4839904147f9194e48277ca667df13a0cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:50Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.501072 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://535b7b05dbe9b47808f3d358a1f370b696a5d5bb4dd96b049ac2cbe3d18ea065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ab87b22153ba31fa79d2d210e695b274c2ddc878ad679c605aa8fb716534d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xggpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2025-12-03T12:56:50Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.518126 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231ed674-1d19-4ee2-b29c-a1b7453ed531\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/l
og/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operato
r@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T12:56:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 12:56:01.437086 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 12:56:01.437205 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 12:56:01.438166 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972143202/tls.crt::/tmp/serving-cert-1972143202/tls.key\\\\\\\"\\\\nI1203 12:56:02.167657 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 12:56:02.174337 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 12:56:02.174360 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 12:56:02.174377 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 12:56:02.174382 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 12:56:02.178753 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 12:56:02.179168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 12:56:02.179184 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 12:56:02.179187 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 12:56:02.179190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 12:56:02.178775 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 12:56:02.181220 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:50Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.536884 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d51894dd-38f0-413a-adaf-22d196474bed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa98006a8736394300cdb9ef3331d2a23e1b3af2e2203f757c51ca9c700d26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7d24b5f1325cfb2d210cb9f311026ab370c91deecc5d0d951bcd4eda79816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9bb9d940d346f9895e24d98173668ea2d0e68fd3035e5b03475bbc936582958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7fa0d673f037c8edcc75ee30b1bbeea73da55d5459347dacc9251fffbb38be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4007e9ed33227938a5a3656fd74958aef599418fec4cd2048dfa7dd88924dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:50Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.539640 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.539659 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.539668 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.539679 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.539688 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:50Z","lastTransitionTime":"2025-12-03T12:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.550464 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:50Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.563034 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8lcrn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75e4c223-9952-4d1b-b83b-5c45cfc51432\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b2af5d43c7842f129b6521e5b3b1963f6060b3e0c552a771c7df6a2a9ef867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnr67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8lcrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:50Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.576738 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1aad898-ad11-4c01-95c2-8a4d293aed99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77c695e6dd4e9c2fe64a41b90d59aebe2a6a54a1985702749b0a68eed6551f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259
7126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0d8158a257a7d5b17302a2d6f4c81326b8c24487c18132807f91366cdb3457b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16164ea50f4f29916a5878eee07e8debc4268fae086f9086aa4c73c4d03dd082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://865442eb9652b14d8c0642927e9f59c60344dfcffb5bd2d0cc0ec572ede971c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865442eb9652b14d8c0642927e9f59c60344dfcffb5bd2d0cc0ec572ede971c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:50Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.588706 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:50Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.641941 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.642013 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.642026 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.642049 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.642062 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:50Z","lastTransitionTime":"2025-12-03T12:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.744657 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.744905 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.745030 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.745119 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.745199 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:50Z","lastTransitionTime":"2025-12-03T12:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.847629 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.847668 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.847678 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.847692 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.847703 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:50Z","lastTransitionTime":"2025-12-03T12:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.949664 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.950085 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.950246 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.950437 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.950563 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:50Z","lastTransitionTime":"2025-12-03T12:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.960412 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:50Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.972262 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8lcrn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75e4c223-9952-4d1b-b83b-5c45cfc51432\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b2af5d43c7842f129b6521e5b3b1963f6060b3e0c552a771c7df6a2a9ef867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnr67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8lcrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:50Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:50 crc kubenswrapper[4986]: I1203 12:56:50.985582 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1aad898-ad11-4c01-95c2-8a4d293aed99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77c695e6dd4e9c2fe64a41b90d59aebe2a6a54a1985702749b0a68eed6551f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259
7126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0d8158a257a7d5b17302a2d6f4c81326b8c24487c18132807f91366cdb3457b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16164ea50f4f29916a5878eee07e8debc4268fae086f9086aa4c73c4d03dd082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://865442eb9652b14d8c0642927e9f59c60344dfcffb5bd2d0cc0ec572ede971c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865442eb9652b14d8c0642927e9f59c60344dfcffb5bd2d0cc0ec572ede971c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:50Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.000442 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:50Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.015932 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-px97g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97196b6d-75cc-4de4-8805-f9ce3fbd4230\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c28bf1aff27ba408262b5fc561e084ff3813ce95a7c14b57fea24d409eb33bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c28bf1aff27ba408262b5fc561e084ff3813ce95a7c14b57fea24d409eb33bea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T12:56:50Z\\\",\\\"message\\\":\\\"2025-12-03T12:56:04+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2e5072a4-4b1a-448b-a811-e2aaaa751560\\\\n2025-12-03T12:56:04+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2e5072a4-4b1a-448b-a811-e2aaaa751560 to /host/opt/cni/bin/\\\\n2025-12-03T12:56:05Z [verbose] multus-daemon started\\\\n2025-12-03T12:56:05Z [verbose] Readiness Indicator file check\\\\n2025-12-03T12:56:50Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v4j5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-px97g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:51Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.035749 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3a45156-295b-4093-80e7-2059f81ddbd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c79f84fc019d109bac446dd058b4bafb0b516a13fae47b1924e4f7690658ba8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be6405facfa7726cbe56c48184cba0123774e8e34d3ceb9e18daed6f35560c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d77d078572dbe7f11e91ece0d25e76bd17004167dabce267a2b568e798aae158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2843fb3fcbadb926b104a9f7c4084468b36293aebed6c692c29419a0457b62a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d94eca13821749d7e15a350c411a831d2887f4211ed092a4fb57e26e49fcd9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ff8c1f1602e57602cae3f46dd3d2bf8d7abad6bd0952f4347152678e8606d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b002220fe79bb266ddd4f28ab0b9f29bb0ea1e8d204f558d023242437694625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b002220fe79bb266ddd4f28ab0b9f29bb0ea1e8d204f558d023242437694625\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T12:56:33Z\\\",\\\"message\\\":\\\"go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 12:56:33.207496 6692 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 
12:56:33.207580 6692 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 12:56:33.207656 6692 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 12:56:33.208012 6692 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 12:56:33.208030 6692 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 12:56:33.208054 6692 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 12:56:33.208069 6692 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 12:56:33.208132 6692 factory.go:656] Stopping watch factory\\\\nI1203 12:56:33.208149 6692 ovnkube.go:599] Stopped ovnkube\\\\nI1203 12:56:33.208174 6692 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 12:56:33.208187 6692 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 12:56:33.208197 6692 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1203 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9nf52_openshift-ovn-kubernetes(d3a45156-295b-4093-80e7-2059f81ddbd7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dfc7520e7cb1e09eb10bf9615dbae11d7e73474f1b73eed481d18269b65aef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09
757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9nf52\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:51Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.049980 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2p8xn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"957171e6-7b64-4d92-b690-342e3251ed8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d417b35a888f61c40a58512bcff329faed85f3b4f8d1d29a7c6f2f43634a9090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbhn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27878a8de45ccf0e02719e2a3f9d1339d8910
db5947d423a3e7400359cc96253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbhn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2p8xn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:51Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.054106 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.054155 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.054168 4986 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.054186 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.054206 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:51Z","lastTransitionTime":"2025-12-03T12:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.062856 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fszqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76bcf5c-6a62-4360-826d-ecac337c88ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d94370a14317ade02cb805783e09ffcd8c6ff07f35e9124e302028e6dfa3c06\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhtrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fszqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:51Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.077966 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0d71a3ffdbe593936c83e92e92e286eaff092333bb3dd54cb5bc2825ca46fbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402aeaef03d8d3c684f0d8e7fe8a080bc02d761104e3c1095b3233aade2b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:51Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.092676 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rkgd9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a939a03f-0eec-49cb-9b23-40e359e427d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b854d526fa4ac2b270c7f27c1b35b1ff0fbc671e40a928649b578f926d6b51b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd633caedf88a1bec633cef61b8b30ae422b8c8d91a012395831a09d398f0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cd633caedf88a1bec633cef61b8b30ae422b8c8d91a012395831a09d398f0f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b9e
8e1f9326660fe6d7c89af18c5f7dd1134e9f2e866da8ba9c795e13cb57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b9e8e1f9326660fe6d7c89af18c5f7dd1134e9f2e866da8ba9c795e13cb57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ef54ee9f22072e2b437f38f6d06f24e6f1da02555505bc65c488b5818c87a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ef54ee9f22072e2b437f38f6d06f24e6f1da02555505bc65c488b5818c87a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:12Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35fecc2332fdab60719b864653301c5e8322b13ace401a0ff016ff3dc09cd3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35fecc2332fdab60719b864653301c5e8322b13ace401a0ff016ff3dc09cd3d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rkgd9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:51Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.104230 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rl2mt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea24f625-ded4-4e37-a23b-f96fe691b0dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhr82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhr82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rl2mt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:51Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:51 crc 
kubenswrapper[4986]: I1203 12:56:51.120681 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee22d4f3-8488-4497-a757-a6dd51e84539\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4fc247caf16eab425363e12219a76c40022f0daf0bc9c8b52f4eb8e3726f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86d62d991d6516ed42f7cad655018e1ebc85c8c1b100d624c1c2e5e8f616cf4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ba9819f4f1746f03a558617fc6ee14f5627c37c362d1198a303b6f1854c7bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79cba0b7394bf25be648502fc84a916d8f2fa0949edce20b5a748046ce9b6f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:51Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.136123 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:51Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.151384 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b923e52e6284c8adca87e7d3d801a432a676e4389d7ab3c0cba584c5b7d96f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:51Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.156360 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.156430 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.156442 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.156456 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.156466 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:51Z","lastTransitionTime":"2025-12-03T12:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.165056 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d66d06e79ffb2ecd12acc0380e5e4839904147f9194e48277ca667df13a0cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:51Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.176723 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://535b7b05dbe9b47808f3d358a1f370b696a5d5bb4dd96b049ac2cbe3d18ea065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ab87b22153ba31fa79d2d210e695b274c2ddc878ad679c605aa8fb716534d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xggpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-03T12:56:51Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.188526 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231ed674-1d19-4ee2-b29c-a1b7453ed531\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\
\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc47827
4c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T12:56:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 12:56:01.437086 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 12:56:01.437205 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 12:56:01.438166 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972143202/tls.crt::/tmp/serving-cert-1972143202/tls.key\\\\\\\"\\\\nI1203 12:56:02.167657 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 12:56:02.174337 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 12:56:02.174360 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 12:56:02.174377 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 12:56:02.174382 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 12:56:02.178753 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 12:56:02.179168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 12:56:02.179184 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 12:56:02.179187 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 12:56:02.179190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 12:56:02.178775 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 12:56:02.181220 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720
243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:51Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.208008 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d51894dd-38f0-413a-adaf-22d196474bed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa98006a8736394300cdb9ef3331d2a23e1b3af2e2203f757c51ca9c700d26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7d24b5f1325cfb2d210cb9f311026ab370c91deecc5d0d951bcd4eda79816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9bb9d940d346f9895e24d98173668ea2d0e68fd3035e5b03475bbc936582958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7fa0d673f037c8edcc75ee30b1bbeea73da55d5459347dacc9251fffbb38be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4007e9ed33227938a5a3656fd74958aef599418fec4cd2048dfa7dd88924dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:51Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.258934 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.258976 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.258987 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.259002 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.259016 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:51Z","lastTransitionTime":"2025-12-03T12:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.326671 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-px97g_97196b6d-75cc-4de4-8805-f9ce3fbd4230/kube-multus/0.log" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.326757 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-px97g" event={"ID":"97196b6d-75cc-4de4-8805-f9ce3fbd4230","Type":"ContainerStarted","Data":"1dd2dabaec03ca7bc575d3b31582c870f9094270ef72a40fd7425e2cef5b54e0"} Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.343349 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rkgd9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a939a03f-0eec-49cb-9b23-40e359e427d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b854d526fa4ac2b270c7f27c1b35b1ff0fbc671e40a928649b578f926d6b51b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\
"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd633caedf88a1bec633cef61b8b30ae422b8c8d91a012395831a09d398f0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\
"cri-o://5cd633caedf88a1bec633cef61b8b30ae422b8c8d91a012395831a09d398f0f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b9e8e1f9326660fe6d7c89af18c5f7dd1134e9f2e866da8ba9c795e13cb57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b9e8e1f9326660fe6d7c89af18c5f7dd1134e9f2e866da8ba9c795e13cb57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ef54ee9f22072e2b437f38f6d06f24e6f1da02555505bc65c488b5818c87a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ef54ee9f22072e2b437f38f6d06f24e6f1da02555505bc65c488b5818c87a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35fecc2332fdab60719b864653301c5e8322b13ace401a0ff016ff3dc09cd3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35fecc2332fdab60719b864653301c5e8322b13ace401a0ff016ff3dc09cd3d7\\\",\\\"exitCode\\\":0,\\\"f
inishedAt\\\":\\\"2025-12-03T12:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rkgd9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:51Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.354037 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rl2mt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea24f625-ded4-4e37-a23b-f96fe691b0dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhr82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhr82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rl2mt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:51Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:51 crc 
kubenswrapper[4986]: I1203 12:56:51.362980 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.363041 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.363053 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.363071 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.363084 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:51Z","lastTransitionTime":"2025-12-03T12:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.369258 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee22d4f3-8488-4497-a757-a6dd51e84539\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4fc247caf16eab425363e12219a76c40022f0daf0bc9c8b52f4eb8e3726f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86d62d991d
6516ed42f7cad655018e1ebc85c8c1b100d624c1c2e5e8f616cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ba9819f4f1746f03a558617fc6ee14f5627c37c362d1198a303b6f1854c7bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79cba0b7394bf25be648502fc84a916d8f2fa0949edce20b5a748046ce9b6f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:51Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.385168 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:51Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.398963 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0d71a3ffdbe593936c83e92e92e286eaff092333bb3dd54cb5bc2825ca46fbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402aeaef03d8d3c684f0d8e7fe8a080bc02d761104e3c1095b3233aade2b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:51Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.410945 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d66d06e79ffb2ecd12acc0380e5e4839904147f9194e48277ca667df13a0cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T12:56:51Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.423712 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://535b7b05dbe9b47808f3d358a1f370b696a5d5bb4dd96b049ac2cbe3d18ea065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ab87b22153ba31fa79d2d210e695b274c2ddc878ad679c605aa8fb716534d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xggpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:51Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.436275 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"231ed674-1d19-4ee2-b29c-a1b7453ed531\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T12:56:02Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1203 12:56:01.437086 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 12:56:01.437205 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 12:56:01.438166 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972143202/tls.crt::/tmp/serving-cert-1972143202/tls.key\\\\\\\"\\\\nI1203 12:56:02.167657 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 12:56:02.174337 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 12:56:02.174360 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 12:56:02.174377 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 12:56:02.174382 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 12:56:02.178753 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 12:56:02.179168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 12:56:02.179184 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 12:56:02.179187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 12:56:02.179190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 12:56:02.178775 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1203 12:56:02.181220 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458b
f21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:51Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.454532 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d51894dd-38f0-413a-adaf-22d196474bed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa98006a8736394300cdb9ef3331d2a23e1b3af2e2203f757c51ca9c700d26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7d24b5f1325cfb2d210cb9f311026ab370c91deecc5d0d951bcd4eda79816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9bb9d940d346f9895e24d98173668ea2d0e68fd3035e5b03475bbc936582958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7fa0d673f037c8edcc75ee30b1bbeea73da55d5459347dacc9251fffbb38be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4007e9ed33227938a5a3656fd74958aef599418fec4cd2048dfa7dd88924dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:51Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.468301 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.468336 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.468347 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.468363 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.468373 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:51Z","lastTransitionTime":"2025-12-03T12:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.469669 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b923e52e6284c8adca87e7d3d801a432a676e4389d7ab3c0cba584c5b7d96f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:51Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.480758 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8lcrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75e4c223-9952-4d1b-b83b-5c45cfc51432\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b2af5d43c7842f129b6521e5b3b1963f6060b3e0c552a771c7df6a2a9ef867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnr67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8lcrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:51Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.494853 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1aad898-ad11-4c01-95c2-8a4d293aed99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77c695e6dd4e9c2fe64a41b90d59aebe2a6a54a1985702749b0a68eed6551f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0d8158a257a7d5b17302a2d6f4c81326b8c24487c18132807f91366cdb3457b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16164ea50f4f29916a5878eee07e8debc4268fae086f9086aa4c73c4d03dd082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://865442eb9652b14d8c0642927e9f59c60344dfcffb5bd2d0cc0ec572ede971c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://865442eb9652b14d8c0642927e9f59c60344dfcffb5bd2d0cc0ec572ede971c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:51Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.508396 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:51Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.522380 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:51Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.541330 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3a45156-295b-4093-80e7-2059f81ddbd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c79f84fc019d109bac446dd058b4bafb0b516a13fae47b1924e4f7690658ba8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be6405facfa7726cbe56c48184cba0123774e8e34d3ceb9e18daed6f35560c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d77d078572dbe7f11e91ece0d25e76bd17004167dabce267a2b568e798aae158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2843fb3fcbadb926b104a9f7c4084468b36293aebed6c692c29419a0457b62a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d94eca13821749d7e15a350c411a831d2887f4211ed092a4fb57e26e49fcd9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ff8c1f1602e57602cae3f46dd3d2bf8d7abad6bd0952f4347152678e8606d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b002220fe79bb266ddd4f28ab0b9f29bb0ea1e8d204f558d023242437694625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b002220fe79bb266ddd4f28ab0b9f29bb0ea1e8d204f558d023242437694625\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T12:56:33Z\\\",\\\"message\\\":\\\"go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 12:56:33.207496 6692 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 12:56:33.207580 6692 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 12:56:33.207656 6692 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 12:56:33.208012 6692 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 12:56:33.208030 6692 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 12:56:33.208054 6692 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 12:56:33.208069 6692 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 12:56:33.208132 6692 factory.go:656] Stopping watch factory\\\\nI1203 12:56:33.208149 6692 ovnkube.go:599] Stopped ovnkube\\\\nI1203 12:56:33.208174 6692 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 12:56:33.208187 6692 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 12:56:33.208197 6692 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1203 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9nf52_openshift-ovn-kubernetes(d3a45156-295b-4093-80e7-2059f81ddbd7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dfc7520e7cb1e09eb10bf9615dbae11d7e73474f1b73eed481d18269b65aef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09
757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9nf52\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:51Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.553546 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2p8xn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"957171e6-7b64-4d92-b690-342e3251ed8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d417b35a888f61c40a58512bcff329faed85f3b4f8d1d29a7c6f2f43634a9090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbhn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27878a8de45ccf0e02719e2a3f9d1339d8910
db5947d423a3e7400359cc96253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbhn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2p8xn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:51Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.566197 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fszqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76bcf5c-6a62-4360-826d-ecac337c88ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d94370a14317ade02cb805783e09ffcd8c6ff07f35e9124e302028e6dfa3c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhtrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fszqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:51Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.571566 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.571608 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.571622 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.571639 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.571651 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:51Z","lastTransitionTime":"2025-12-03T12:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.581854 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-px97g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97196b6d-75cc-4de4-8805-f9ce3fbd4230\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd2dabaec03ca7bc575d3b31582c870f9094270ef72a40fd7425e2cef5b54e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c28bf1aff27ba408262b5fc561e084ff3813ce95a7c14b57fea24d409eb33bea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T12:56:50Z\\\",\\\"message\\\":\\\"2025-12-03T12:56:04+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2e5072a4-4b1a-448b-a811-e2aaaa751560\\\\n2025-12-03T12:56:04+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2e5072a4-4b1a-448b-a811-e2aaaa751560 to /host/opt/cni/bin/\\\\n2025-12-03T12:56:05Z [verbose] multus-daemon started\\\\n2025-12-03T12:56:05Z [verbose] Readiness Indicator file check\\\\n2025-12-03T12:56:50Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v4j5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-px97g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:51Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.674419 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.674467 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.674479 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.674495 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.674504 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:51Z","lastTransitionTime":"2025-12-03T12:56:51Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.777262 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.777319 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.777331 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.777345 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.777354 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:51Z","lastTransitionTime":"2025-12-03T12:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.879788 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.879854 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.879867 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.879883 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.879896 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:51Z","lastTransitionTime":"2025-12-03T12:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.942766 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.942839 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.942856 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:56:51 crc kubenswrapper[4986]: E1203 12:56:51.943264 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:56:51 crc kubenswrapper[4986]: E1203 12:56:51.943340 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.942894 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:56:51 crc kubenswrapper[4986]: E1203 12:56:51.943479 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:56:51 crc kubenswrapper[4986]: E1203 12:56:51.943491 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl2mt" podUID="ea24f625-ded4-4e37-a23b-f96fe691b0dd" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.982705 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.982746 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.982757 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.982773 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:51 crc kubenswrapper[4986]: I1203 12:56:51.982785 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:51Z","lastTransitionTime":"2025-12-03T12:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.085085 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.085120 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.085132 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.085149 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.085160 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:52Z","lastTransitionTime":"2025-12-03T12:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.188488 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.188778 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.188875 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.188978 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.189076 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:52Z","lastTransitionTime":"2025-12-03T12:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.291768 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.291817 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.291830 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.291846 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.291861 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:52Z","lastTransitionTime":"2025-12-03T12:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.393821 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.393858 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.393867 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.393885 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.393895 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:52Z","lastTransitionTime":"2025-12-03T12:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.496549 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.496598 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.496610 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.496625 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.496636 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:52Z","lastTransitionTime":"2025-12-03T12:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.598395 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.598443 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.598455 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.598472 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.598483 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:52Z","lastTransitionTime":"2025-12-03T12:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.627618 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.627660 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.627670 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.627689 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.627700 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:52Z","lastTransitionTime":"2025-12-03T12:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:52 crc kubenswrapper[4986]: E1203 12:56:52.640014 4986 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b28ccd4-3e47-4e26-a3b8-f87de96f7586\\\",\\\"systemUUID\\\":\\\"52e71ce5-258d-4951-aa26-5d4aac0725ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:52Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.644719 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.644778 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.644792 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.644811 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.644829 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:52Z","lastTransitionTime":"2025-12-03T12:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:52 crc kubenswrapper[4986]: E1203 12:56:52.659166 4986 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b28ccd4-3e47-4e26-a3b8-f87de96f7586\\\",\\\"systemUUID\\\":\\\"52e71ce5-258d-4951-aa26-5d4aac0725ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:52Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.663533 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.663575 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.663588 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.663603 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.663615 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:52Z","lastTransitionTime":"2025-12-03T12:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:52 crc kubenswrapper[4986]: E1203 12:56:52.676627 4986 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b28ccd4-3e47-4e26-a3b8-f87de96f7586\\\",\\\"systemUUID\\\":\\\"52e71ce5-258d-4951-aa26-5d4aac0725ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:52Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.680881 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.680923 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.680969 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.680986 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.680999 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:52Z","lastTransitionTime":"2025-12-03T12:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:52 crc kubenswrapper[4986]: E1203 12:56:52.692793 4986 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b28ccd4-3e47-4e26-a3b8-f87de96f7586\\\",\\\"systemUUID\\\":\\\"52e71ce5-258d-4951-aa26-5d4aac0725ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:52Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.695918 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.695958 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.695968 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.695984 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.695992 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:52Z","lastTransitionTime":"2025-12-03T12:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:52 crc kubenswrapper[4986]: E1203 12:56:52.708489 4986 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:56:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b28ccd4-3e47-4e26-a3b8-f87de96f7586\\\",\\\"systemUUID\\\":\\\"52e71ce5-258d-4951-aa26-5d4aac0725ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:56:52Z is after 2025-08-24T17:21:41Z" Dec 03 12:56:52 crc kubenswrapper[4986]: E1203 12:56:52.708601 4986 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.710227 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.710301 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.710315 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.710330 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.710341 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:52Z","lastTransitionTime":"2025-12-03T12:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.812984 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.813039 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.813055 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.813074 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.813087 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:52Z","lastTransitionTime":"2025-12-03T12:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.915921 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.915955 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.915963 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.915976 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:52 crc kubenswrapper[4986]: I1203 12:56:52.915985 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:52Z","lastTransitionTime":"2025-12-03T12:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:53 crc kubenswrapper[4986]: I1203 12:56:53.018409 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:53 crc kubenswrapper[4986]: I1203 12:56:53.018455 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:53 crc kubenswrapper[4986]: I1203 12:56:53.018464 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:53 crc kubenswrapper[4986]: I1203 12:56:53.018479 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:53 crc kubenswrapper[4986]: I1203 12:56:53.018489 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:53Z","lastTransitionTime":"2025-12-03T12:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:53 crc kubenswrapper[4986]: I1203 12:56:53.121124 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:53 crc kubenswrapper[4986]: I1203 12:56:53.121162 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:53 crc kubenswrapper[4986]: I1203 12:56:53.121170 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:53 crc kubenswrapper[4986]: I1203 12:56:53.121184 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:53 crc kubenswrapper[4986]: I1203 12:56:53.121195 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:53Z","lastTransitionTime":"2025-12-03T12:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:53 crc kubenswrapper[4986]: I1203 12:56:53.224197 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:53 crc kubenswrapper[4986]: I1203 12:56:53.224245 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:53 crc kubenswrapper[4986]: I1203 12:56:53.224260 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:53 crc kubenswrapper[4986]: I1203 12:56:53.224304 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:53 crc kubenswrapper[4986]: I1203 12:56:53.224319 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:53Z","lastTransitionTime":"2025-12-03T12:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:53 crc kubenswrapper[4986]: I1203 12:56:53.327079 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:53 crc kubenswrapper[4986]: I1203 12:56:53.327119 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:53 crc kubenswrapper[4986]: I1203 12:56:53.327130 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:53 crc kubenswrapper[4986]: I1203 12:56:53.327145 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:53 crc kubenswrapper[4986]: I1203 12:56:53.327155 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:53Z","lastTransitionTime":"2025-12-03T12:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:53 crc kubenswrapper[4986]: I1203 12:56:53.430474 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:53 crc kubenswrapper[4986]: I1203 12:56:53.430522 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:53 crc kubenswrapper[4986]: I1203 12:56:53.430532 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:53 crc kubenswrapper[4986]: I1203 12:56:53.430549 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:53 crc kubenswrapper[4986]: I1203 12:56:53.430560 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:53Z","lastTransitionTime":"2025-12-03T12:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:53 crc kubenswrapper[4986]: I1203 12:56:53.533552 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:53 crc kubenswrapper[4986]: I1203 12:56:53.533590 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:53 crc kubenswrapper[4986]: I1203 12:56:53.533600 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:53 crc kubenswrapper[4986]: I1203 12:56:53.533614 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:53 crc kubenswrapper[4986]: I1203 12:56:53.533624 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:53Z","lastTransitionTime":"2025-12-03T12:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:53 crc kubenswrapper[4986]: I1203 12:56:53.636601 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:53 crc kubenswrapper[4986]: I1203 12:56:53.636647 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:53 crc kubenswrapper[4986]: I1203 12:56:53.636655 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:53 crc kubenswrapper[4986]: I1203 12:56:53.636669 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:53 crc kubenswrapper[4986]: I1203 12:56:53.636680 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:53Z","lastTransitionTime":"2025-12-03T12:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:53 crc kubenswrapper[4986]: I1203 12:56:53.740208 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:53 crc kubenswrapper[4986]: I1203 12:56:53.740260 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:53 crc kubenswrapper[4986]: I1203 12:56:53.740273 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:53 crc kubenswrapper[4986]: I1203 12:56:53.740309 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:53 crc kubenswrapper[4986]: I1203 12:56:53.740323 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:53Z","lastTransitionTime":"2025-12-03T12:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:53 crc kubenswrapper[4986]: I1203 12:56:53.842652 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:53 crc kubenswrapper[4986]: I1203 12:56:53.842704 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:53 crc kubenswrapper[4986]: I1203 12:56:53.842715 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:53 crc kubenswrapper[4986]: I1203 12:56:53.842730 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:53 crc kubenswrapper[4986]: I1203 12:56:53.842741 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:53Z","lastTransitionTime":"2025-12-03T12:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:53 crc kubenswrapper[4986]: I1203 12:56:53.942943 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:56:53 crc kubenswrapper[4986]: I1203 12:56:53.942959 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:56:53 crc kubenswrapper[4986]: I1203 12:56:53.942983 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:56:53 crc kubenswrapper[4986]: I1203 12:56:53.942986 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:56:53 crc kubenswrapper[4986]: E1203 12:56:53.943553 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:56:53 crc kubenswrapper[4986]: E1203 12:56:53.943632 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:56:53 crc kubenswrapper[4986]: E1203 12:56:53.943774 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:56:53 crc kubenswrapper[4986]: E1203 12:56:53.944075 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rl2mt" podUID="ea24f625-ded4-4e37-a23b-f96fe691b0dd" Dec 03 12:56:53 crc kubenswrapper[4986]: I1203 12:56:53.945020 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:53 crc kubenswrapper[4986]: I1203 12:56:53.945053 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:53 crc kubenswrapper[4986]: I1203 12:56:53.945063 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:53 crc kubenswrapper[4986]: I1203 12:56:53.945078 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:53 crc kubenswrapper[4986]: I1203 12:56:53.945088 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:53Z","lastTransitionTime":"2025-12-03T12:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:54 crc kubenswrapper[4986]: I1203 12:56:54.048517 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:54 crc kubenswrapper[4986]: I1203 12:56:54.048569 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:54 crc kubenswrapper[4986]: I1203 12:56:54.048582 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:54 crc kubenswrapper[4986]: I1203 12:56:54.048602 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:54 crc kubenswrapper[4986]: I1203 12:56:54.048615 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:54Z","lastTransitionTime":"2025-12-03T12:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:54 crc kubenswrapper[4986]: I1203 12:56:54.151331 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:54 crc kubenswrapper[4986]: I1203 12:56:54.151376 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:54 crc kubenswrapper[4986]: I1203 12:56:54.151386 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:54 crc kubenswrapper[4986]: I1203 12:56:54.151422 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:54 crc kubenswrapper[4986]: I1203 12:56:54.151431 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:54Z","lastTransitionTime":"2025-12-03T12:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:54 crc kubenswrapper[4986]: I1203 12:56:54.254129 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:54 crc kubenswrapper[4986]: I1203 12:56:54.254190 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:54 crc kubenswrapper[4986]: I1203 12:56:54.254201 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:54 crc kubenswrapper[4986]: I1203 12:56:54.254222 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:54 crc kubenswrapper[4986]: I1203 12:56:54.254234 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:54Z","lastTransitionTime":"2025-12-03T12:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:54 crc kubenswrapper[4986]: I1203 12:56:54.357989 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:54 crc kubenswrapper[4986]: I1203 12:56:54.358057 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:54 crc kubenswrapper[4986]: I1203 12:56:54.358071 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:54 crc kubenswrapper[4986]: I1203 12:56:54.358087 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:54 crc kubenswrapper[4986]: I1203 12:56:54.358099 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:54Z","lastTransitionTime":"2025-12-03T12:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:54 crc kubenswrapper[4986]: I1203 12:56:54.460304 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:54 crc kubenswrapper[4986]: I1203 12:56:54.460337 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:54 crc kubenswrapper[4986]: I1203 12:56:54.460349 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:54 crc kubenswrapper[4986]: I1203 12:56:54.460367 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:54 crc kubenswrapper[4986]: I1203 12:56:54.460376 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:54Z","lastTransitionTime":"2025-12-03T12:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:54 crc kubenswrapper[4986]: I1203 12:56:54.562440 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:54 crc kubenswrapper[4986]: I1203 12:56:54.562497 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:54 crc kubenswrapper[4986]: I1203 12:56:54.562511 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:54 crc kubenswrapper[4986]: I1203 12:56:54.562527 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:54 crc kubenswrapper[4986]: I1203 12:56:54.562538 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:54Z","lastTransitionTime":"2025-12-03T12:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:54 crc kubenswrapper[4986]: I1203 12:56:54.664691 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:54 crc kubenswrapper[4986]: I1203 12:56:54.664748 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:54 crc kubenswrapper[4986]: I1203 12:56:54.664758 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:54 crc kubenswrapper[4986]: I1203 12:56:54.664772 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:54 crc kubenswrapper[4986]: I1203 12:56:54.664782 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:54Z","lastTransitionTime":"2025-12-03T12:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:54 crc kubenswrapper[4986]: I1203 12:56:54.767454 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:54 crc kubenswrapper[4986]: I1203 12:56:54.767520 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:54 crc kubenswrapper[4986]: I1203 12:56:54.767532 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:54 crc kubenswrapper[4986]: I1203 12:56:54.767548 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:54 crc kubenswrapper[4986]: I1203 12:56:54.767557 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:54Z","lastTransitionTime":"2025-12-03T12:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:54 crc kubenswrapper[4986]: I1203 12:56:54.870050 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:54 crc kubenswrapper[4986]: I1203 12:56:54.870095 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:54 crc kubenswrapper[4986]: I1203 12:56:54.870110 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:54 crc kubenswrapper[4986]: I1203 12:56:54.870132 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:54 crc kubenswrapper[4986]: I1203 12:56:54.870149 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:54Z","lastTransitionTime":"2025-12-03T12:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:54 crc kubenswrapper[4986]: I1203 12:56:54.953514 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 03 12:56:54 crc kubenswrapper[4986]: I1203 12:56:54.973842 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:54 crc kubenswrapper[4986]: I1203 12:56:54.973908 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:54 crc kubenswrapper[4986]: I1203 12:56:54.973926 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:54 crc kubenswrapper[4986]: I1203 12:56:54.973956 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:54 crc kubenswrapper[4986]: I1203 12:56:54.973975 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:54Z","lastTransitionTime":"2025-12-03T12:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:55 crc kubenswrapper[4986]: I1203 12:56:55.076940 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:55 crc kubenswrapper[4986]: I1203 12:56:55.076993 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:55 crc kubenswrapper[4986]: I1203 12:56:55.077005 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:55 crc kubenswrapper[4986]: I1203 12:56:55.077021 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:55 crc kubenswrapper[4986]: I1203 12:56:55.077032 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:55Z","lastTransitionTime":"2025-12-03T12:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:55 crc kubenswrapper[4986]: I1203 12:56:55.179746 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:55 crc kubenswrapper[4986]: I1203 12:56:55.179802 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:55 crc kubenswrapper[4986]: I1203 12:56:55.179814 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:55 crc kubenswrapper[4986]: I1203 12:56:55.179829 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:55 crc kubenswrapper[4986]: I1203 12:56:55.179838 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:55Z","lastTransitionTime":"2025-12-03T12:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:55 crc kubenswrapper[4986]: I1203 12:56:55.282654 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:55 crc kubenswrapper[4986]: I1203 12:56:55.282702 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:55 crc kubenswrapper[4986]: I1203 12:56:55.282714 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:55 crc kubenswrapper[4986]: I1203 12:56:55.282736 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:55 crc kubenswrapper[4986]: I1203 12:56:55.282752 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:55Z","lastTransitionTime":"2025-12-03T12:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:55 crc kubenswrapper[4986]: I1203 12:56:55.384976 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:55 crc kubenswrapper[4986]: I1203 12:56:55.385013 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:55 crc kubenswrapper[4986]: I1203 12:56:55.385039 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:55 crc kubenswrapper[4986]: I1203 12:56:55.385053 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:55 crc kubenswrapper[4986]: I1203 12:56:55.385062 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:55Z","lastTransitionTime":"2025-12-03T12:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:55 crc kubenswrapper[4986]: I1203 12:56:55.488254 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:55 crc kubenswrapper[4986]: I1203 12:56:55.488332 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:55 crc kubenswrapper[4986]: I1203 12:56:55.488348 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:55 crc kubenswrapper[4986]: I1203 12:56:55.488363 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:55 crc kubenswrapper[4986]: I1203 12:56:55.488374 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:55Z","lastTransitionTime":"2025-12-03T12:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:55 crc kubenswrapper[4986]: I1203 12:56:55.590959 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:55 crc kubenswrapper[4986]: I1203 12:56:55.591059 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:55 crc kubenswrapper[4986]: I1203 12:56:55.591075 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:55 crc kubenswrapper[4986]: I1203 12:56:55.591091 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:55 crc kubenswrapper[4986]: I1203 12:56:55.591100 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:55Z","lastTransitionTime":"2025-12-03T12:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:55 crc kubenswrapper[4986]: I1203 12:56:55.693632 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:55 crc kubenswrapper[4986]: I1203 12:56:55.693668 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:55 crc kubenswrapper[4986]: I1203 12:56:55.693676 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:55 crc kubenswrapper[4986]: I1203 12:56:55.693689 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:55 crc kubenswrapper[4986]: I1203 12:56:55.693699 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:55Z","lastTransitionTime":"2025-12-03T12:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:55 crc kubenswrapper[4986]: I1203 12:56:55.796113 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:55 crc kubenswrapper[4986]: I1203 12:56:55.796144 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:55 crc kubenswrapper[4986]: I1203 12:56:55.796151 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:55 crc kubenswrapper[4986]: I1203 12:56:55.796166 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:55 crc kubenswrapper[4986]: I1203 12:56:55.796174 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:55Z","lastTransitionTime":"2025-12-03T12:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:55 crc kubenswrapper[4986]: I1203 12:56:55.899735 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:55 crc kubenswrapper[4986]: I1203 12:56:55.899778 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:55 crc kubenswrapper[4986]: I1203 12:56:55.899789 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:55 crc kubenswrapper[4986]: I1203 12:56:55.899804 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:55 crc kubenswrapper[4986]: I1203 12:56:55.899815 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:55Z","lastTransitionTime":"2025-12-03T12:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:55 crc kubenswrapper[4986]: I1203 12:56:55.943336 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:56:55 crc kubenswrapper[4986]: I1203 12:56:55.943408 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:56:55 crc kubenswrapper[4986]: I1203 12:56:55.943336 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:56:55 crc kubenswrapper[4986]: E1203 12:56:55.943468 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:56:55 crc kubenswrapper[4986]: I1203 12:56:55.943367 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:56:55 crc kubenswrapper[4986]: E1203 12:56:55.943559 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:56:55 crc kubenswrapper[4986]: E1203 12:56:55.943649 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rl2mt" podUID="ea24f625-ded4-4e37-a23b-f96fe691b0dd" Dec 03 12:56:55 crc kubenswrapper[4986]: E1203 12:56:55.943714 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:56:56 crc kubenswrapper[4986]: I1203 12:56:56.002041 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:56 crc kubenswrapper[4986]: I1203 12:56:56.002319 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:56 crc kubenswrapper[4986]: I1203 12:56:56.002417 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:56 crc kubenswrapper[4986]: I1203 12:56:56.002549 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:56 crc kubenswrapper[4986]: I1203 12:56:56.002645 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:56Z","lastTransitionTime":"2025-12-03T12:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:56 crc kubenswrapper[4986]: I1203 12:56:56.105497 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:56 crc kubenswrapper[4986]: I1203 12:56:56.105765 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:56 crc kubenswrapper[4986]: I1203 12:56:56.105878 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:56 crc kubenswrapper[4986]: I1203 12:56:56.105972 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:56 crc kubenswrapper[4986]: I1203 12:56:56.106046 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:56Z","lastTransitionTime":"2025-12-03T12:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:56 crc kubenswrapper[4986]: I1203 12:56:56.209045 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:56 crc kubenswrapper[4986]: I1203 12:56:56.209084 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:56 crc kubenswrapper[4986]: I1203 12:56:56.209092 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:56 crc kubenswrapper[4986]: I1203 12:56:56.209106 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:56 crc kubenswrapper[4986]: I1203 12:56:56.209117 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:56Z","lastTransitionTime":"2025-12-03T12:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:56 crc kubenswrapper[4986]: I1203 12:56:56.311830 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:56 crc kubenswrapper[4986]: I1203 12:56:56.311894 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:56 crc kubenswrapper[4986]: I1203 12:56:56.311905 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:56 crc kubenswrapper[4986]: I1203 12:56:56.311926 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:56 crc kubenswrapper[4986]: I1203 12:56:56.311938 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:56Z","lastTransitionTime":"2025-12-03T12:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:56 crc kubenswrapper[4986]: I1203 12:56:56.414559 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:56 crc kubenswrapper[4986]: I1203 12:56:56.414623 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:56 crc kubenswrapper[4986]: I1203 12:56:56.414633 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:56 crc kubenswrapper[4986]: I1203 12:56:56.414649 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:56 crc kubenswrapper[4986]: I1203 12:56:56.414660 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:56Z","lastTransitionTime":"2025-12-03T12:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:56 crc kubenswrapper[4986]: I1203 12:56:56.517541 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:56 crc kubenswrapper[4986]: I1203 12:56:56.517578 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:56 crc kubenswrapper[4986]: I1203 12:56:56.517588 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:56 crc kubenswrapper[4986]: I1203 12:56:56.517604 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:56 crc kubenswrapper[4986]: I1203 12:56:56.517644 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:56Z","lastTransitionTime":"2025-12-03T12:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:56 crc kubenswrapper[4986]: I1203 12:56:56.619875 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:56 crc kubenswrapper[4986]: I1203 12:56:56.619924 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:56 crc kubenswrapper[4986]: I1203 12:56:56.619935 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:56 crc kubenswrapper[4986]: I1203 12:56:56.619949 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:56 crc kubenswrapper[4986]: I1203 12:56:56.619959 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:56Z","lastTransitionTime":"2025-12-03T12:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:56 crc kubenswrapper[4986]: I1203 12:56:56.723101 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:56 crc kubenswrapper[4986]: I1203 12:56:56.723162 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:56 crc kubenswrapper[4986]: I1203 12:56:56.723174 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:56 crc kubenswrapper[4986]: I1203 12:56:56.723196 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:56 crc kubenswrapper[4986]: I1203 12:56:56.723212 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:56Z","lastTransitionTime":"2025-12-03T12:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:56 crc kubenswrapper[4986]: I1203 12:56:56.825074 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:56 crc kubenswrapper[4986]: I1203 12:56:56.825124 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:56 crc kubenswrapper[4986]: I1203 12:56:56.825139 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:56 crc kubenswrapper[4986]: I1203 12:56:56.825156 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:56 crc kubenswrapper[4986]: I1203 12:56:56.825167 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:56Z","lastTransitionTime":"2025-12-03T12:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:56 crc kubenswrapper[4986]: I1203 12:56:56.927402 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:56 crc kubenswrapper[4986]: I1203 12:56:56.927435 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:56 crc kubenswrapper[4986]: I1203 12:56:56.927442 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:56 crc kubenswrapper[4986]: I1203 12:56:56.927455 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:56 crc kubenswrapper[4986]: I1203 12:56:56.927464 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:56Z","lastTransitionTime":"2025-12-03T12:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:57 crc kubenswrapper[4986]: I1203 12:56:57.029179 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:57 crc kubenswrapper[4986]: I1203 12:56:57.029210 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:57 crc kubenswrapper[4986]: I1203 12:56:57.029218 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:57 crc kubenswrapper[4986]: I1203 12:56:57.029230 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:57 crc kubenswrapper[4986]: I1203 12:56:57.029239 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:57Z","lastTransitionTime":"2025-12-03T12:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:57 crc kubenswrapper[4986]: I1203 12:56:57.131757 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:57 crc kubenswrapper[4986]: I1203 12:56:57.131803 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:57 crc kubenswrapper[4986]: I1203 12:56:57.131812 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:57 crc kubenswrapper[4986]: I1203 12:56:57.131826 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:57 crc kubenswrapper[4986]: I1203 12:56:57.131836 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:57Z","lastTransitionTime":"2025-12-03T12:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:57 crc kubenswrapper[4986]: I1203 12:56:57.234068 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:57 crc kubenswrapper[4986]: I1203 12:56:57.234141 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:57 crc kubenswrapper[4986]: I1203 12:56:57.234156 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:57 crc kubenswrapper[4986]: I1203 12:56:57.234172 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:57 crc kubenswrapper[4986]: I1203 12:56:57.234185 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:57Z","lastTransitionTime":"2025-12-03T12:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:57 crc kubenswrapper[4986]: I1203 12:56:57.338435 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:57 crc kubenswrapper[4986]: I1203 12:56:57.338474 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:57 crc kubenswrapper[4986]: I1203 12:56:57.338487 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:57 crc kubenswrapper[4986]: I1203 12:56:57.338504 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:57 crc kubenswrapper[4986]: I1203 12:56:57.338516 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:57Z","lastTransitionTime":"2025-12-03T12:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:57 crc kubenswrapper[4986]: I1203 12:56:57.440692 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:57 crc kubenswrapper[4986]: I1203 12:56:57.440759 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:57 crc kubenswrapper[4986]: I1203 12:56:57.440773 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:57 crc kubenswrapper[4986]: I1203 12:56:57.440790 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:57 crc kubenswrapper[4986]: I1203 12:56:57.440801 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:57Z","lastTransitionTime":"2025-12-03T12:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:57 crc kubenswrapper[4986]: I1203 12:56:57.542963 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:57 crc kubenswrapper[4986]: I1203 12:56:57.543006 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:57 crc kubenswrapper[4986]: I1203 12:56:57.543018 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:57 crc kubenswrapper[4986]: I1203 12:56:57.543033 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:57 crc kubenswrapper[4986]: I1203 12:56:57.543044 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:57Z","lastTransitionTime":"2025-12-03T12:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:57 crc kubenswrapper[4986]: I1203 12:56:57.645831 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:57 crc kubenswrapper[4986]: I1203 12:56:57.645879 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:57 crc kubenswrapper[4986]: I1203 12:56:57.645890 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:57 crc kubenswrapper[4986]: I1203 12:56:57.645904 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:57 crc kubenswrapper[4986]: I1203 12:56:57.645913 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:57Z","lastTransitionTime":"2025-12-03T12:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:57 crc kubenswrapper[4986]: I1203 12:56:57.748513 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:57 crc kubenswrapper[4986]: I1203 12:56:57.748555 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:57 crc kubenswrapper[4986]: I1203 12:56:57.748575 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:57 crc kubenswrapper[4986]: I1203 12:56:57.748592 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:57 crc kubenswrapper[4986]: I1203 12:56:57.748605 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:57Z","lastTransitionTime":"2025-12-03T12:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:57 crc kubenswrapper[4986]: I1203 12:56:57.851085 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:57 crc kubenswrapper[4986]: I1203 12:56:57.851126 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:57 crc kubenswrapper[4986]: I1203 12:56:57.851136 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:57 crc kubenswrapper[4986]: I1203 12:56:57.851149 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:57 crc kubenswrapper[4986]: I1203 12:56:57.851159 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:57Z","lastTransitionTime":"2025-12-03T12:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:57 crc kubenswrapper[4986]: I1203 12:56:57.943048 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:56:57 crc kubenswrapper[4986]: I1203 12:56:57.943105 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:56:57 crc kubenswrapper[4986]: I1203 12:56:57.943071 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:56:57 crc kubenswrapper[4986]: I1203 12:56:57.943048 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:56:57 crc kubenswrapper[4986]: E1203 12:56:57.943208 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl2mt" podUID="ea24f625-ded4-4e37-a23b-f96fe691b0dd" Dec 03 12:56:57 crc kubenswrapper[4986]: E1203 12:56:57.943268 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:56:57 crc kubenswrapper[4986]: E1203 12:56:57.943343 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:56:57 crc kubenswrapper[4986]: E1203 12:56:57.943407 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:56:57 crc kubenswrapper[4986]: I1203 12:56:57.953069 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:57 crc kubenswrapper[4986]: I1203 12:56:57.953110 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:57 crc kubenswrapper[4986]: I1203 12:56:57.953122 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:57 crc kubenswrapper[4986]: I1203 12:56:57.953139 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:57 crc kubenswrapper[4986]: I1203 12:56:57.953151 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:57Z","lastTransitionTime":"2025-12-03T12:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:58 crc kubenswrapper[4986]: I1203 12:56:58.055369 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:58 crc kubenswrapper[4986]: I1203 12:56:58.055409 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:58 crc kubenswrapper[4986]: I1203 12:56:58.055429 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:58 crc kubenswrapper[4986]: I1203 12:56:58.055447 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:58 crc kubenswrapper[4986]: I1203 12:56:58.055458 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:58Z","lastTransitionTime":"2025-12-03T12:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:58 crc kubenswrapper[4986]: I1203 12:56:58.158076 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:58 crc kubenswrapper[4986]: I1203 12:56:58.158129 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:58 crc kubenswrapper[4986]: I1203 12:56:58.158142 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:58 crc kubenswrapper[4986]: I1203 12:56:58.158158 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:58 crc kubenswrapper[4986]: I1203 12:56:58.158173 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:58Z","lastTransitionTime":"2025-12-03T12:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:58 crc kubenswrapper[4986]: I1203 12:56:58.261117 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:58 crc kubenswrapper[4986]: I1203 12:56:58.261156 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:58 crc kubenswrapper[4986]: I1203 12:56:58.261166 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:58 crc kubenswrapper[4986]: I1203 12:56:58.261181 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:58 crc kubenswrapper[4986]: I1203 12:56:58.261194 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:58Z","lastTransitionTime":"2025-12-03T12:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:58 crc kubenswrapper[4986]: I1203 12:56:58.363830 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:58 crc kubenswrapper[4986]: I1203 12:56:58.363898 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:58 crc kubenswrapper[4986]: I1203 12:56:58.363912 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:58 crc kubenswrapper[4986]: I1203 12:56:58.363928 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:58 crc kubenswrapper[4986]: I1203 12:56:58.363939 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:58Z","lastTransitionTime":"2025-12-03T12:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:58 crc kubenswrapper[4986]: I1203 12:56:58.465887 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:58 crc kubenswrapper[4986]: I1203 12:56:58.465961 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:58 crc kubenswrapper[4986]: I1203 12:56:58.465978 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:58 crc kubenswrapper[4986]: I1203 12:56:58.465994 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:58 crc kubenswrapper[4986]: I1203 12:56:58.466004 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:58Z","lastTransitionTime":"2025-12-03T12:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:58 crc kubenswrapper[4986]: I1203 12:56:58.567874 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:58 crc kubenswrapper[4986]: I1203 12:56:58.567912 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:58 crc kubenswrapper[4986]: I1203 12:56:58.567920 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:58 crc kubenswrapper[4986]: I1203 12:56:58.567933 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:58 crc kubenswrapper[4986]: I1203 12:56:58.567946 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:58Z","lastTransitionTime":"2025-12-03T12:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:58 crc kubenswrapper[4986]: I1203 12:56:58.671171 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:58 crc kubenswrapper[4986]: I1203 12:56:58.671224 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:58 crc kubenswrapper[4986]: I1203 12:56:58.671235 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:58 crc kubenswrapper[4986]: I1203 12:56:58.671251 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:58 crc kubenswrapper[4986]: I1203 12:56:58.671262 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:58Z","lastTransitionTime":"2025-12-03T12:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:58 crc kubenswrapper[4986]: I1203 12:56:58.773567 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:58 crc kubenswrapper[4986]: I1203 12:56:58.773626 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:58 crc kubenswrapper[4986]: I1203 12:56:58.773639 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:58 crc kubenswrapper[4986]: I1203 12:56:58.773659 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:58 crc kubenswrapper[4986]: I1203 12:56:58.773671 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:58Z","lastTransitionTime":"2025-12-03T12:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:58 crc kubenswrapper[4986]: I1203 12:56:58.875972 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:58 crc kubenswrapper[4986]: I1203 12:56:58.876047 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:58 crc kubenswrapper[4986]: I1203 12:56:58.876055 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:58 crc kubenswrapper[4986]: I1203 12:56:58.876072 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:58 crc kubenswrapper[4986]: I1203 12:56:58.876080 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:58Z","lastTransitionTime":"2025-12-03T12:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:58 crc kubenswrapper[4986]: I1203 12:56:58.978839 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:58 crc kubenswrapper[4986]: I1203 12:56:58.978911 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:58 crc kubenswrapper[4986]: I1203 12:56:58.978924 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:58 crc kubenswrapper[4986]: I1203 12:56:58.978940 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:58 crc kubenswrapper[4986]: I1203 12:56:58.978952 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:58Z","lastTransitionTime":"2025-12-03T12:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:59 crc kubenswrapper[4986]: I1203 12:56:59.082517 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:59 crc kubenswrapper[4986]: I1203 12:56:59.082563 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:59 crc kubenswrapper[4986]: I1203 12:56:59.082574 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:59 crc kubenswrapper[4986]: I1203 12:56:59.082592 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:59 crc kubenswrapper[4986]: I1203 12:56:59.082602 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:59Z","lastTransitionTime":"2025-12-03T12:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:59 crc kubenswrapper[4986]: I1203 12:56:59.185519 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:59 crc kubenswrapper[4986]: I1203 12:56:59.185563 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:59 crc kubenswrapper[4986]: I1203 12:56:59.185574 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:59 crc kubenswrapper[4986]: I1203 12:56:59.185589 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:59 crc kubenswrapper[4986]: I1203 12:56:59.185604 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:59Z","lastTransitionTime":"2025-12-03T12:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:59 crc kubenswrapper[4986]: I1203 12:56:59.288157 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:59 crc kubenswrapper[4986]: I1203 12:56:59.288198 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:59 crc kubenswrapper[4986]: I1203 12:56:59.288213 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:59 crc kubenswrapper[4986]: I1203 12:56:59.288230 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:59 crc kubenswrapper[4986]: I1203 12:56:59.288241 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:59Z","lastTransitionTime":"2025-12-03T12:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:59 crc kubenswrapper[4986]: I1203 12:56:59.390461 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:59 crc kubenswrapper[4986]: I1203 12:56:59.390504 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:59 crc kubenswrapper[4986]: I1203 12:56:59.390514 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:59 crc kubenswrapper[4986]: I1203 12:56:59.390529 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:59 crc kubenswrapper[4986]: I1203 12:56:59.390540 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:59Z","lastTransitionTime":"2025-12-03T12:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:59 crc kubenswrapper[4986]: I1203 12:56:59.493102 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:59 crc kubenswrapper[4986]: I1203 12:56:59.493193 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:59 crc kubenswrapper[4986]: I1203 12:56:59.493208 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:59 crc kubenswrapper[4986]: I1203 12:56:59.493224 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:59 crc kubenswrapper[4986]: I1203 12:56:59.493237 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:59Z","lastTransitionTime":"2025-12-03T12:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:59 crc kubenswrapper[4986]: I1203 12:56:59.596070 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:59 crc kubenswrapper[4986]: I1203 12:56:59.596113 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:59 crc kubenswrapper[4986]: I1203 12:56:59.596124 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:59 crc kubenswrapper[4986]: I1203 12:56:59.596139 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:59 crc kubenswrapper[4986]: I1203 12:56:59.596150 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:59Z","lastTransitionTime":"2025-12-03T12:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:59 crc kubenswrapper[4986]: I1203 12:56:59.698854 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:59 crc kubenswrapper[4986]: I1203 12:56:59.698897 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:59 crc kubenswrapper[4986]: I1203 12:56:59.698907 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:59 crc kubenswrapper[4986]: I1203 12:56:59.698919 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:59 crc kubenswrapper[4986]: I1203 12:56:59.698928 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:59Z","lastTransitionTime":"2025-12-03T12:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:59 crc kubenswrapper[4986]: I1203 12:56:59.801777 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:59 crc kubenswrapper[4986]: I1203 12:56:59.801821 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:59 crc kubenswrapper[4986]: I1203 12:56:59.801833 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:59 crc kubenswrapper[4986]: I1203 12:56:59.801850 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:59 crc kubenswrapper[4986]: I1203 12:56:59.801864 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:59Z","lastTransitionTime":"2025-12-03T12:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:56:59 crc kubenswrapper[4986]: I1203 12:56:59.904530 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:56:59 crc kubenswrapper[4986]: I1203 12:56:59.904570 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:56:59 crc kubenswrapper[4986]: I1203 12:56:59.904580 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:56:59 crc kubenswrapper[4986]: I1203 12:56:59.904595 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:56:59 crc kubenswrapper[4986]: I1203 12:56:59.904605 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:56:59Z","lastTransitionTime":"2025-12-03T12:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:56:59 crc kubenswrapper[4986]: I1203 12:56:59.943039 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:56:59 crc kubenswrapper[4986]: I1203 12:56:59.943142 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:56:59 crc kubenswrapper[4986]: E1203 12:56:59.943183 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:56:59 crc kubenswrapper[4986]: I1203 12:56:59.943228 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:56:59 crc kubenswrapper[4986]: I1203 12:56:59.943242 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:56:59 crc kubenswrapper[4986]: E1203 12:56:59.943353 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl2mt" podUID="ea24f625-ded4-4e37-a23b-f96fe691b0dd" Dec 03 12:56:59 crc kubenswrapper[4986]: E1203 12:56:59.943434 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:56:59 crc kubenswrapper[4986]: E1203 12:56:59.943517 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:57:00 crc kubenswrapper[4986]: I1203 12:57:00.008033 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:00 crc kubenswrapper[4986]: I1203 12:57:00.008111 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:00 crc kubenswrapper[4986]: I1203 12:57:00.008144 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:00 crc kubenswrapper[4986]: I1203 12:57:00.008179 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:00 crc kubenswrapper[4986]: I1203 12:57:00.008200 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:00Z","lastTransitionTime":"2025-12-03T12:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:00 crc kubenswrapper[4986]: I1203 12:57:00.111129 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:00 crc kubenswrapper[4986]: I1203 12:57:00.111206 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:00 crc kubenswrapper[4986]: I1203 12:57:00.111229 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:00 crc kubenswrapper[4986]: I1203 12:57:00.111260 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:00 crc kubenswrapper[4986]: I1203 12:57:00.111316 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:00Z","lastTransitionTime":"2025-12-03T12:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:00 crc kubenswrapper[4986]: I1203 12:57:00.214511 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:00 crc kubenswrapper[4986]: I1203 12:57:00.214606 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:00 crc kubenswrapper[4986]: I1203 12:57:00.214627 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:00 crc kubenswrapper[4986]: I1203 12:57:00.214655 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:00 crc kubenswrapper[4986]: I1203 12:57:00.214675 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:00Z","lastTransitionTime":"2025-12-03T12:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:00 crc kubenswrapper[4986]: I1203 12:57:00.317476 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:00 crc kubenswrapper[4986]: I1203 12:57:00.317531 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:00 crc kubenswrapper[4986]: I1203 12:57:00.317543 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:00 crc kubenswrapper[4986]: I1203 12:57:00.317560 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:00 crc kubenswrapper[4986]: I1203 12:57:00.317574 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:00Z","lastTransitionTime":"2025-12-03T12:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:00 crc kubenswrapper[4986]: I1203 12:57:00.419390 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:00 crc kubenswrapper[4986]: I1203 12:57:00.419421 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:00 crc kubenswrapper[4986]: I1203 12:57:00.419429 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:00 crc kubenswrapper[4986]: I1203 12:57:00.419442 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:00 crc kubenswrapper[4986]: I1203 12:57:00.419450 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:00Z","lastTransitionTime":"2025-12-03T12:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:00 crc kubenswrapper[4986]: I1203 12:57:00.521776 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:00 crc kubenswrapper[4986]: I1203 12:57:00.521824 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:00 crc kubenswrapper[4986]: I1203 12:57:00.521836 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:00 crc kubenswrapper[4986]: I1203 12:57:00.521855 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:00 crc kubenswrapper[4986]: I1203 12:57:00.521868 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:00Z","lastTransitionTime":"2025-12-03T12:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:00 crc kubenswrapper[4986]: I1203 12:57:00.624967 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:00 crc kubenswrapper[4986]: I1203 12:57:00.625016 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:00 crc kubenswrapper[4986]: I1203 12:57:00.625027 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:00 crc kubenswrapper[4986]: I1203 12:57:00.625332 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:00 crc kubenswrapper[4986]: I1203 12:57:00.625358 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:00Z","lastTransitionTime":"2025-12-03T12:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:00 crc kubenswrapper[4986]: I1203 12:57:00.727534 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:00 crc kubenswrapper[4986]: I1203 12:57:00.727569 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:00 crc kubenswrapper[4986]: I1203 12:57:00.727578 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:00 crc kubenswrapper[4986]: I1203 12:57:00.727591 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:00 crc kubenswrapper[4986]: I1203 12:57:00.727599 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:00Z","lastTransitionTime":"2025-12-03T12:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:00 crc kubenswrapper[4986]: I1203 12:57:00.829598 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:00 crc kubenswrapper[4986]: I1203 12:57:00.829636 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:00 crc kubenswrapper[4986]: I1203 12:57:00.829647 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:00 crc kubenswrapper[4986]: I1203 12:57:00.829665 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:00 crc kubenswrapper[4986]: I1203 12:57:00.829676 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:00Z","lastTransitionTime":"2025-12-03T12:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:00 crc kubenswrapper[4986]: I1203 12:57:00.934388 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:00 crc kubenswrapper[4986]: I1203 12:57:00.934450 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:00 crc kubenswrapper[4986]: I1203 12:57:00.934461 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:00 crc kubenswrapper[4986]: I1203 12:57:00.934476 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:00 crc kubenswrapper[4986]: I1203 12:57:00.934487 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:00Z","lastTransitionTime":"2025-12-03T12:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:00 crc kubenswrapper[4986]: I1203 12:57:00.955917 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1aad898-ad11-4c01-95c2-8a4d293aed99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77c695e6dd4e9c2fe64a41b90d59aebe2a6a54a1985702749b0a68eed6551f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0d8158a257a7d5b17302a2d6f4c81
326b8c24487c18132807f91366cdb3457b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16164ea50f4f29916a5878eee07e8debc4268fae086f9086aa4c73c4d03dd082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://865442eb9652b14d8c0642927e9f59c60344dfcffb5bd2d0cc0ec572ede971c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865442eb9652b14d8c0642927e9f59c60344dfcffb5bd2d0cc0ec572ede971c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:00Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:00 crc kubenswrapper[4986]: I1203 12:57:00.970301 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:00Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:00 crc kubenswrapper[4986]: I1203 12:57:00.984184 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:00Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:00 crc kubenswrapper[4986]: I1203 12:57:00.999074 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8lcrn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75e4c223-9952-4d1b-b83b-5c45cfc51432\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b2af5d43c7842f129b6521e5b3b1963f6060b3e0c552a771c7df6a2a9ef867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnr67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8lcrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:00Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.010490 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da0bee0b-131d-40cf-98b9-5dd5d6ddf8ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7998f6aae63a6935e19e2a1f2157aeb88cb8cb14e875d25a9dcc4f80f36f2fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a
42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b459b0d69a8744da31d3019da0622a77ccfbb8218fbc14b25c9f80af144a3278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b459b0d69a8744da31d3019da0622a77ccfbb8218fbc14b25c9f80af144a3278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:01Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:01 crc 
kubenswrapper[4986]: I1203 12:57:01.021165 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fszqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76bcf5c-6a62-4360-826d-ecac337c88ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d94370a14317ade02cb805783e09ffcd8c6ff07f35e9124e302028e6dfa3c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhtrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fszqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:01Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.036303 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-px97g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97196b6d-75cc-4de4-8805-f9ce3fbd4230\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd2dabaec03ca7bc575d3b31582c870f9094270ef72a40fd7425e2cef5b54e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e1
2f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c28bf1aff27ba408262b5fc561e084ff3813ce95a7c14b57fea24d409eb33bea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T12:56:50Z\\\",\\\"message\\\":\\\"2025-12-03T12:56:04+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2e5072a4-4b1a-448b-a811-e2aaaa751560\\\\n2025-12-03T12:56:04+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2e5072a4-4b1a-448b-a811-e2aaaa751560 to /host/opt/cni/bin/\\\\n2025-12-03T12:56:05Z [verbose] multus-daemon started\\\\n2025-12-03T12:56:05Z [verbose] Readiness Indicator file check\\\\n2025-12-03T12:56:50Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v4j5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-px97g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:01Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.037476 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.037524 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.037539 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.037558 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.037571 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:01Z","lastTransitionTime":"2025-12-03T12:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.055766 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3a45156-295b-4093-80e7-2059f81ddbd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c79f84fc019d109bac446dd058b4bafb0b516a13fae47b1924e4f7690658ba8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be6405facfa7726cbe56c48184cba0123774e8e34d3ceb9e18daed6f35560c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d77d078572dbe7f11e91ece0d25e76bd17004167dabce267a2b568e798aae158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2843fb3fcbadb926b104a9f7c4084468b36293aebed6c692c29419a0457b62a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d94eca13821749d7e15a350c411a831d2887f4211ed092a4fb57e26e49fcd9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ff8c1f1602e57602cae3f46dd3d2bf8d7abad6bd0952f4347152678e8606d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b002220fe79bb266ddd4f28ab0b9f29bb0ea1e8d204f558d023242437694625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b002220fe79bb266ddd4f28ab0b9f29bb0ea1e8d204f558d023242437694625\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T12:56:33Z\\\",\\\"message\\\":\\\"go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 12:56:33.207496 6692 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 
12:56:33.207580 6692 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 12:56:33.207656 6692 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 12:56:33.208012 6692 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 12:56:33.208030 6692 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 12:56:33.208054 6692 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 12:56:33.208069 6692 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 12:56:33.208132 6692 factory.go:656] Stopping watch factory\\\\nI1203 12:56:33.208149 6692 ovnkube.go:599] Stopped ovnkube\\\\nI1203 12:56:33.208174 6692 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 12:56:33.208187 6692 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 12:56:33.208197 6692 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1203 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9nf52_openshift-ovn-kubernetes(d3a45156-295b-4093-80e7-2059f81ddbd7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dfc7520e7cb1e09eb10bf9615dbae11d7e73474f1b73eed481d18269b65aef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09
757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9nf52\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:01Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.067727 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2p8xn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"957171e6-7b64-4d92-b690-342e3251ed8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d417b35a888f61c40a58512bcff329faed85f3b4f8d1d29a7c6f2f43634a9090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbhn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27878a8de45ccf0e02719e2a3f9d1339d8910
db5947d423a3e7400359cc96253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbhn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2p8xn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:01Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.080920 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee22d4f3-8488-4497-a757-a6dd51e84539\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4fc247caf16eab425363e12219a76c40022f0daf0bc9c8b52f4eb8e3726f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86d62d991d6516ed42f7cad655018e1ebc85c8c1b100d624c1c2e5e8f616cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ba9819f4f1746f03a558617fc6ee14f5627c37c362d1198a303b6f1854c7bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79cba0b7394bf25be648502fc84a916d8f2fa0949edce20b5a748046ce9b6f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:01Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.091968 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:01Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.105256 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0d71a3ffdbe593936c83e92e92e286eaff092333bb3dd54cb5bc2825ca46fbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402aeaef03d8d3c684f0d8e7fe8a080bc02d761104e3c1095b3233aade2b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:01Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.118569 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rkgd9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a939a03f-0eec-49cb-9b23-40e359e427d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b854d526fa4ac2b270c7f27c1b35b1ff0fbc671e40a928649b578f926d6b51b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd633caedf88a1bec633cef61b8b30ae422b8c8d91a012395831a09d398f0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cd633caedf88a1bec633cef61b8b30ae422b8c8d91a012395831a09d398f0f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b9e
8e1f9326660fe6d7c89af18c5f7dd1134e9f2e866da8ba9c795e13cb57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b9e8e1f9326660fe6d7c89af18c5f7dd1134e9f2e866da8ba9c795e13cb57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ef54ee9f22072e2b437f38f6d06f24e6f1da02555505bc65c488b5818c87a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ef54ee9f22072e2b437f38f6d06f24e6f1da02555505bc65c488b5818c87a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:12Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35fecc2332fdab60719b864653301c5e8322b13ace401a0ff016ff3dc09cd3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35fecc2332fdab60719b864653301c5e8322b13ace401a0ff016ff3dc09cd3d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rkgd9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:01Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.127862 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rl2mt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea24f625-ded4-4e37-a23b-f96fe691b0dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhr82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhr82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rl2mt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:01Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:01 crc 
kubenswrapper[4986]: I1203 12:57:01.139208 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231ed674-1d19-4ee2-b29c-a1b7453ed531\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0ca216d3d112
1b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T12:56:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 12:56:01.437086 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 12:56:01.437205 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 12:56:01.438166 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972143202/tls.crt::/tmp/serving-cert-1972143202/tls.key\\\\\\\"\\\\nI1203 12:56:02.167657 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 12:56:02.174337 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 12:56:02.174360 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 12:56:02.174377 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 12:56:02.174382 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 12:56:02.178753 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 12:56:02.179168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 12:56:02.179184 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 12:56:02.179187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 
12:56:02.179190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 12:56:02.178775 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 12:56:02.181220 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:01Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.139945 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.139979 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.139990 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.140005 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.140015 4986 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:01Z","lastTransitionTime":"2025-12-03T12:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.157968 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d51894dd-38f0-413a-adaf-22d196474bed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa98006a8736394300cdb9ef3331d2a23e1b3af2e2203f757c51ca9c700d26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7d24b5f1325cfb2d210cb9f311026ab370c91deecc5d0d951bcd4eda79816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9bb9d940d346f9895e24d98173668ea2d0e68fd3035e5b03475bbc936582958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7fa0d673f037c8edcc75ee30b1bbeea73da55d5459347dacc9251fffbb38be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4007e9ed33227938a5a3656fd74958aef599418fec4cd2048dfa7dd88924dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:01Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.169564 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b923e52e6284c8adca87e7d3d801a432a676e4389d7ab3c0cba584c5b7d96f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:01Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.180836 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d66d06e79ffb2ecd12acc0380e5e4839904147f9194e48277ca667df13a0cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:01Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.192658 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://535b7b05dbe9b47808f3d358a1f370b696a5d5bb4dd96b049ac2cbe3d18ea065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ab87b22153ba31fa79d2d210e695b274c2ddc878ad679c605aa8fb716534d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xggpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2025-12-03T12:57:01Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.242111 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.242162 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.242174 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.242193 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.242205 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:01Z","lastTransitionTime":"2025-12-03T12:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.345222 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.345264 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.345273 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.345302 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.345315 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:01Z","lastTransitionTime":"2025-12-03T12:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.447775 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.447825 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.447836 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.447853 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.447867 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:01Z","lastTransitionTime":"2025-12-03T12:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.549839 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.550009 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.550025 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.550042 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.550053 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:01Z","lastTransitionTime":"2025-12-03T12:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.653508 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.653568 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.653580 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.653597 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.653610 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:01Z","lastTransitionTime":"2025-12-03T12:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.755647 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.755684 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.755695 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.755709 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.755720 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:01Z","lastTransitionTime":"2025-12-03T12:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.858453 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.858501 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.858512 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.858529 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.858541 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:01Z","lastTransitionTime":"2025-12-03T12:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.943116 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.943195 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:57:01 crc kubenswrapper[4986]: E1203 12:57:01.943309 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.943344 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:57:01 crc kubenswrapper[4986]: E1203 12:57:01.943383 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:57:01 crc kubenswrapper[4986]: E1203 12:57:01.943499 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.944146 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:57:01 crc kubenswrapper[4986]: E1203 12:57:01.944397 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rl2mt" podUID="ea24f625-ded4-4e37-a23b-f96fe691b0dd" Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.961789 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.961839 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.961848 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.961864 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:01 crc kubenswrapper[4986]: I1203 12:57:01.961893 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:01Z","lastTransitionTime":"2025-12-03T12:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.063906 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.063951 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.063985 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.064005 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.064017 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:02Z","lastTransitionTime":"2025-12-03T12:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.166577 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.166635 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.166647 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.166667 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.166680 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:02Z","lastTransitionTime":"2025-12-03T12:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.269212 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.269264 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.269274 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.269302 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.269311 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:02Z","lastTransitionTime":"2025-12-03T12:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.372822 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.372871 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.372889 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.372911 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.372928 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:02Z","lastTransitionTime":"2025-12-03T12:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.474887 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.474917 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.474924 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.474937 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.474946 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:02Z","lastTransitionTime":"2025-12-03T12:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.577468 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.577510 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.577523 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.577541 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.577552 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:02Z","lastTransitionTime":"2025-12-03T12:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.680068 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.680123 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.680135 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.680150 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.680163 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:02Z","lastTransitionTime":"2025-12-03T12:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.782923 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.782961 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.782970 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.782984 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.782997 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:02Z","lastTransitionTime":"2025-12-03T12:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.847073 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.847111 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.847121 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.847134 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.847143 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:02Z","lastTransitionTime":"2025-12-03T12:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:02 crc kubenswrapper[4986]: E1203 12:57:02.864142 4986 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:57:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:57:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:57:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:57:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:57:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:57:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:57:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:57:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b28ccd4-3e47-4e26-a3b8-f87de96f7586\\\",\\\"systemUUID\\\":\\\"52e71ce5-258d-4951-aa26-5d4aac0725ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:02Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.869075 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.869114 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.869127 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.869143 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.869153 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:02Z","lastTransitionTime":"2025-12-03T12:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:02 crc kubenswrapper[4986]: E1203 12:57:02.886208 4986 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:57:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:57:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:57:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:57:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:57:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:57:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:57:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:57:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b28ccd4-3e47-4e26-a3b8-f87de96f7586\\\",\\\"systemUUID\\\":\\\"52e71ce5-258d-4951-aa26-5d4aac0725ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:02Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.889386 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.889418 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.889429 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.889445 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.889457 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:02Z","lastTransitionTime":"2025-12-03T12:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:02 crc kubenswrapper[4986]: E1203 12:57:02.902577 4986 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:57:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:57:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:57:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:57:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:57:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:57:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:57:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:57:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b28ccd4-3e47-4e26-a3b8-f87de96f7586\\\",\\\"systemUUID\\\":\\\"52e71ce5-258d-4951-aa26-5d4aac0725ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:02Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.905393 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.905425 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.905434 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.905447 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.905457 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:02Z","lastTransitionTime":"2025-12-03T12:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:02 crc kubenswrapper[4986]: E1203 12:57:02.916802 4986 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:57:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:57:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:57:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:57:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:57:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:57:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:57:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:57:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b28ccd4-3e47-4e26-a3b8-f87de96f7586\\\",\\\"systemUUID\\\":\\\"52e71ce5-258d-4951-aa26-5d4aac0725ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:02Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.920181 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.920222 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.920230 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.920245 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.920256 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:02Z","lastTransitionTime":"2025-12-03T12:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:02 crc kubenswrapper[4986]: E1203 12:57:02.934191 4986 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:57:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:57:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:57:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:57:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:57:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:57:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T12:57:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T12:57:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b28ccd4-3e47-4e26-a3b8-f87de96f7586\\\",\\\"systemUUID\\\":\\\"52e71ce5-258d-4951-aa26-5d4aac0725ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:02Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:02 crc kubenswrapper[4986]: E1203 12:57:02.934339 4986 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.935859 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.935885 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.935893 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.935907 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.935917 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:02Z","lastTransitionTime":"2025-12-03T12:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:02 crc kubenswrapper[4986]: I1203 12:57:02.942837 4986 scope.go:117] "RemoveContainer" containerID="8b002220fe79bb266ddd4f28ab0b9f29bb0ea1e8d204f558d023242437694625" Dec 03 12:57:03 crc kubenswrapper[4986]: I1203 12:57:03.038364 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:03 crc kubenswrapper[4986]: I1203 12:57:03.038404 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:03 crc kubenswrapper[4986]: I1203 12:57:03.038414 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:03 crc kubenswrapper[4986]: I1203 12:57:03.038430 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:03 crc kubenswrapper[4986]: I1203 12:57:03.038439 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:03Z","lastTransitionTime":"2025-12-03T12:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:03 crc kubenswrapper[4986]: I1203 12:57:03.140984 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:03 crc kubenswrapper[4986]: I1203 12:57:03.141020 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:03 crc kubenswrapper[4986]: I1203 12:57:03.141032 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:03 crc kubenswrapper[4986]: I1203 12:57:03.141047 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:03 crc kubenswrapper[4986]: I1203 12:57:03.141059 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:03Z","lastTransitionTime":"2025-12-03T12:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:03 crc kubenswrapper[4986]: I1203 12:57:03.243680 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:03 crc kubenswrapper[4986]: I1203 12:57:03.243750 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:03 crc kubenswrapper[4986]: I1203 12:57:03.243761 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:03 crc kubenswrapper[4986]: I1203 12:57:03.243780 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:03 crc kubenswrapper[4986]: I1203 12:57:03.243812 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:03Z","lastTransitionTime":"2025-12-03T12:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:03 crc kubenswrapper[4986]: I1203 12:57:03.351187 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:03 crc kubenswrapper[4986]: I1203 12:57:03.351219 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:03 crc kubenswrapper[4986]: I1203 12:57:03.351228 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:03 crc kubenswrapper[4986]: I1203 12:57:03.351242 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:03 crc kubenswrapper[4986]: I1203 12:57:03.351252 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:03Z","lastTransitionTime":"2025-12-03T12:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:03 crc kubenswrapper[4986]: I1203 12:57:03.453894 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:03 crc kubenswrapper[4986]: I1203 12:57:03.453932 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:03 crc kubenswrapper[4986]: I1203 12:57:03.453941 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:03 crc kubenswrapper[4986]: I1203 12:57:03.453954 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:03 crc kubenswrapper[4986]: I1203 12:57:03.453963 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:03Z","lastTransitionTime":"2025-12-03T12:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:03 crc kubenswrapper[4986]: I1203 12:57:03.556425 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:03 crc kubenswrapper[4986]: I1203 12:57:03.556485 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:03 crc kubenswrapper[4986]: I1203 12:57:03.556498 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:03 crc kubenswrapper[4986]: I1203 12:57:03.556515 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:03 crc kubenswrapper[4986]: I1203 12:57:03.556527 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:03Z","lastTransitionTime":"2025-12-03T12:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:03 crc kubenswrapper[4986]: I1203 12:57:03.659621 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:03 crc kubenswrapper[4986]: I1203 12:57:03.659831 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:03 crc kubenswrapper[4986]: I1203 12:57:03.659844 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:03 crc kubenswrapper[4986]: I1203 12:57:03.659994 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:03 crc kubenswrapper[4986]: I1203 12:57:03.660007 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:03Z","lastTransitionTime":"2025-12-03T12:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:03 crc kubenswrapper[4986]: I1203 12:57:03.762787 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:03 crc kubenswrapper[4986]: I1203 12:57:03.762825 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:03 crc kubenswrapper[4986]: I1203 12:57:03.762835 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:03 crc kubenswrapper[4986]: I1203 12:57:03.762850 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:03 crc kubenswrapper[4986]: I1203 12:57:03.762862 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:03Z","lastTransitionTime":"2025-12-03T12:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:03 crc kubenswrapper[4986]: I1203 12:57:03.865355 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:03 crc kubenswrapper[4986]: I1203 12:57:03.865391 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:03 crc kubenswrapper[4986]: I1203 12:57:03.865402 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:03 crc kubenswrapper[4986]: I1203 12:57:03.865416 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:03 crc kubenswrapper[4986]: I1203 12:57:03.865427 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:03Z","lastTransitionTime":"2025-12-03T12:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:57:03 crc kubenswrapper[4986]: I1203 12:57:03.943083 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:57:03 crc kubenswrapper[4986]: I1203 12:57:03.943091 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:57:03 crc kubenswrapper[4986]: I1203 12:57:03.943106 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:57:03 crc kubenswrapper[4986]: I1203 12:57:03.943227 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:57:03 crc kubenswrapper[4986]: E1203 12:57:03.943425 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:57:03 crc kubenswrapper[4986]: E1203 12:57:03.943621 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:57:03 crc kubenswrapper[4986]: E1203 12:57:03.943655 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:57:03 crc kubenswrapper[4986]: E1203 12:57:03.943726 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rl2mt" podUID="ea24f625-ded4-4e37-a23b-f96fe691b0dd" Dec 03 12:57:03 crc kubenswrapper[4986]: I1203 12:57:03.967454 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:03 crc kubenswrapper[4986]: I1203 12:57:03.967489 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:03 crc kubenswrapper[4986]: I1203 12:57:03.967501 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:03 crc kubenswrapper[4986]: I1203 12:57:03.967519 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:03 crc kubenswrapper[4986]: I1203 12:57:03.967530 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:03Z","lastTransitionTime":"2025-12-03T12:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:04 crc kubenswrapper[4986]: I1203 12:57:04.070194 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:04 crc kubenswrapper[4986]: I1203 12:57:04.070233 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:04 crc kubenswrapper[4986]: I1203 12:57:04.070242 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:04 crc kubenswrapper[4986]: I1203 12:57:04.070257 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:04 crc kubenswrapper[4986]: I1203 12:57:04.070266 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:04Z","lastTransitionTime":"2025-12-03T12:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:04 crc kubenswrapper[4986]: I1203 12:57:04.173037 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:04 crc kubenswrapper[4986]: I1203 12:57:04.173084 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:04 crc kubenswrapper[4986]: I1203 12:57:04.173097 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:04 crc kubenswrapper[4986]: I1203 12:57:04.173115 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:04 crc kubenswrapper[4986]: I1203 12:57:04.173126 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:04Z","lastTransitionTime":"2025-12-03T12:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:04 crc kubenswrapper[4986]: I1203 12:57:04.275812 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:04 crc kubenswrapper[4986]: I1203 12:57:04.275886 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:04 crc kubenswrapper[4986]: I1203 12:57:04.275903 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:04 crc kubenswrapper[4986]: I1203 12:57:04.275928 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:04 crc kubenswrapper[4986]: I1203 12:57:04.275945 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:04Z","lastTransitionTime":"2025-12-03T12:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:04 crc kubenswrapper[4986]: I1203 12:57:04.379818 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:04 crc kubenswrapper[4986]: I1203 12:57:04.379864 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:04 crc kubenswrapper[4986]: I1203 12:57:04.379875 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:04 crc kubenswrapper[4986]: I1203 12:57:04.379894 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:04 crc kubenswrapper[4986]: I1203 12:57:04.379905 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:04Z","lastTransitionTime":"2025-12-03T12:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:04 crc kubenswrapper[4986]: I1203 12:57:04.483143 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:04 crc kubenswrapper[4986]: I1203 12:57:04.483218 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:04 crc kubenswrapper[4986]: I1203 12:57:04.483240 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:04 crc kubenswrapper[4986]: I1203 12:57:04.483266 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:04 crc kubenswrapper[4986]: I1203 12:57:04.483318 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:04Z","lastTransitionTime":"2025-12-03T12:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:04 crc kubenswrapper[4986]: I1203 12:57:04.586879 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:04 crc kubenswrapper[4986]: I1203 12:57:04.586945 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:04 crc kubenswrapper[4986]: I1203 12:57:04.586962 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:04 crc kubenswrapper[4986]: I1203 12:57:04.586988 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:04 crc kubenswrapper[4986]: I1203 12:57:04.587005 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:04Z","lastTransitionTime":"2025-12-03T12:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:04 crc kubenswrapper[4986]: I1203 12:57:04.689686 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:04 crc kubenswrapper[4986]: I1203 12:57:04.689756 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:04 crc kubenswrapper[4986]: I1203 12:57:04.689780 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:04 crc kubenswrapper[4986]: I1203 12:57:04.689808 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:04 crc kubenswrapper[4986]: I1203 12:57:04.689829 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:04Z","lastTransitionTime":"2025-12-03T12:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:04 crc kubenswrapper[4986]: I1203 12:57:04.792914 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:04 crc kubenswrapper[4986]: I1203 12:57:04.793031 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:04 crc kubenswrapper[4986]: I1203 12:57:04.793048 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:04 crc kubenswrapper[4986]: I1203 12:57:04.793072 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:04 crc kubenswrapper[4986]: I1203 12:57:04.793089 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:04Z","lastTransitionTime":"2025-12-03T12:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:04 crc kubenswrapper[4986]: I1203 12:57:04.896048 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:04 crc kubenswrapper[4986]: I1203 12:57:04.896117 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:04 crc kubenswrapper[4986]: I1203 12:57:04.896138 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:04 crc kubenswrapper[4986]: I1203 12:57:04.896165 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:04 crc kubenswrapper[4986]: I1203 12:57:04.896184 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:04Z","lastTransitionTime":"2025-12-03T12:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:04 crc kubenswrapper[4986]: I1203 12:57:04.999773 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:04 crc kubenswrapper[4986]: I1203 12:57:04.999846 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:04 crc kubenswrapper[4986]: I1203 12:57:04.999870 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:04 crc kubenswrapper[4986]: I1203 12:57:04.999900 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:04 crc kubenswrapper[4986]: I1203 12:57:04.999918 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:04Z","lastTransitionTime":"2025-12-03T12:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.103172 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.103223 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.103238 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.103258 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.103272 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:05Z","lastTransitionTime":"2025-12-03T12:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.206341 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.206439 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.206457 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.206487 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.206764 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:05Z","lastTransitionTime":"2025-12-03T12:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.310785 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.310821 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.310832 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.310847 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.310857 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:05Z","lastTransitionTime":"2025-12-03T12:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.401671 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9nf52_d3a45156-295b-4093-80e7-2059f81ddbd7/ovnkube-controller/2.log" Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.403993 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" event={"ID":"d3a45156-295b-4093-80e7-2059f81ddbd7","Type":"ContainerStarted","Data":"70229ee3e0b6f1c7ad17628a8b9b3b456957d5cee34dc20aecb98f45140d1e88"} Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.404437 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.413060 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.413112 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.413124 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.413137 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.413147 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:05Z","lastTransitionTime":"2025-12-03T12:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.421166 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0d71a3ffdbe593936c83e92e92e286eaff092333bb3dd54cb5bc2825ca46fbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402aeaef03d8d3c684f0d8e7fe8a080bc02d761104e3c1095b3233aade2b9e\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:05Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.434620 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rkgd9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a939a03f-0eec-49cb-9b23-40e359e427d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b854d526fa4ac2b270c7f27c1b35b1ff0fbc671e40a928649b578f926d6b51b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd633caedf88a1bec633cef61b8b30ae422b8c8d91a012395831a09d398f0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cd633caedf88a1bec633cef61b8b30ae422b8c8d91a012395831a09d398f0f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b9e
8e1f9326660fe6d7c89af18c5f7dd1134e9f2e866da8ba9c795e13cb57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b9e8e1f9326660fe6d7c89af18c5f7dd1134e9f2e866da8ba9c795e13cb57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ef54ee9f22072e2b437f38f6d06f24e6f1da02555505bc65c488b5818c87a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ef54ee9f22072e2b437f38f6d06f24e6f1da02555505bc65c488b5818c87a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:12Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35fecc2332fdab60719b864653301c5e8322b13ace401a0ff016ff3dc09cd3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35fecc2332fdab60719b864653301c5e8322b13ace401a0ff016ff3dc09cd3d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rkgd9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:05Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.455975 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rl2mt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea24f625-ded4-4e37-a23b-f96fe691b0dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhr82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhr82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rl2mt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:05Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:05 crc 
kubenswrapper[4986]: I1203 12:57:05.470328 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee22d4f3-8488-4497-a757-a6dd51e84539\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4fc247caf16eab425363e12219a76c40022f0daf0bc9c8b52f4eb8e3726f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86d62d991d6516ed42f7cad655018e1ebc85c8c1b100d624c1c2e5e8f616cf4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ba9819f4f1746f03a558617fc6ee14f5627c37c362d1198a303b6f1854c7bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79cba0b7394bf25be648502fc84a916d8f2fa0949edce20b5a748046ce9b6f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:05Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.482631 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:05Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.496585 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b923e52e6284c8adca87e7d3d801a432a676e4389d7ab3c0cba584c5b7d96f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:05Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.506670 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d66d06e79ffb2ecd12acc0380e5e4839904147f9194e48277ca667df13a0cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:05Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.515574 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.515621 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.515633 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.515648 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.515660 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:05Z","lastTransitionTime":"2025-12-03T12:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.519344 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://535b7b05dbe9b47808f3d358a1f370b696a5d5bb4dd96b049ac2cbe3d18ea065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ab87b22153ba31fa79d2d210e695b274c2ddc878ad679c605aa8fb716534d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xggpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:05Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.539179 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"231ed674-1d19-4ee2-b29c-a1b7453ed531\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T12:56:02Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1203 12:56:01.437086 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 12:56:01.437205 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 12:56:01.438166 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972143202/tls.crt::/tmp/serving-cert-1972143202/tls.key\\\\\\\"\\\\nI1203 12:56:02.167657 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 12:56:02.174337 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 12:56:02.174360 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 12:56:02.174377 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 12:56:02.174382 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 12:56:02.178753 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 12:56:02.179168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 12:56:02.179184 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 12:56:02.179187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 12:56:02.179190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 12:56:02.178775 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1203 12:56:02.181220 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458b
f21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:05Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.564562 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d51894dd-38f0-413a-adaf-22d196474bed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa98006a8736394300cdb9ef3331d2a23e1b3af2e2203f757c51ca9c700d26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7d24b5f1325cfb2d210cb9f311026ab370c91deecc5d0d951bcd4eda79816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9bb9d940d346f9895e24d98173668ea2d0e68fd3035e5b03475bbc936582958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7fa0d673f037c8edcc75ee30b1bbeea73da55d5459347dacc9251fffbb38be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4007e9ed33227938a5a3656fd74958aef599418fec4cd2048dfa7dd88924dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:05Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.580238 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:05Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.591791 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8lcrn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75e4c223-9952-4d1b-b83b-5c45cfc51432\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b2af5d43c7842f129b6521e5b3b1963f6060b3e0c552a771c7df6a2a9ef867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnr67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8lcrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:05Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.610384 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1aad898-ad11-4c01-95c2-8a4d293aed99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77c695e6dd4e9c2fe64a41b90d59aebe2a6a54a1985702749b0a68eed6551f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259
7126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0d8158a257a7d5b17302a2d6f4c81326b8c24487c18132807f91366cdb3457b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16164ea50f4f29916a5878eee07e8debc4268fae086f9086aa4c73c4d03dd082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://865442eb9652b14d8c0642927e9f59c60344dfcffb5bd2d0cc0ec572ede971c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865442eb9652b14d8c0642927e9f59c60344dfcffb5bd2d0cc0ec572ede971c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:05Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.617241 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.617271 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.617294 4986 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.617307 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.617316 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:05Z","lastTransitionTime":"2025-12-03T12:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.631195 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:05Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.649983 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-px97g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97196b6d-75cc-4de4-8805-f9ce3fbd4230\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd2dabaec03ca7bc575d3b31582c870f9094270ef72a40fd7425e2cef5b54e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c28bf1aff27ba408262b5fc561e084ff3813ce95a7c14b57fea24d409eb33bea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T12:56:50Z\\\",\\\"message\\\":\\\"2025-12-03T12:56:04+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2e5072a4-4b1a-448b-a811-e2aaaa751560\\\\n2025-12-03T12:56:04+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2e5072a4-4b1a-448b-a811-e2aaaa751560 to /host/opt/cni/bin/\\\\n2025-12-03T12:56:05Z [verbose] multus-daemon started\\\\n2025-12-03T12:56:05Z [verbose] 
Readiness Indicator file check\\\\n2025-12-03T12:56:50Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v4j5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-px97g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:05Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.666946 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3a45156-295b-4093-80e7-2059f81ddbd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c79f84fc019d109bac446dd058b4bafb0b516a13fae47b1924e4f7690658ba8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be6405facfa7726cbe56c48184cba0123774e8e34d3ceb9e18daed6f35560c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d77d078572dbe7f11e91ece0d25e76bd17004167dabce267a2b568e798aae158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2843fb3fcbadb926b104a9f7c4084468b36293aebed6c692c29419a0457b62a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d94eca13821749d7e15a350c411a831d2887f4211ed092a4fb57e26e49fcd9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ff8c1f1602e57602cae3f46dd3d2bf8d7abad6bd0952f4347152678e8606d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70229ee3e0b6f1c7ad17628a8b9b3b456957d5cee34dc20aecb98f45140d1e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b002220fe79bb266ddd4f28ab0b9f29bb0ea1e8d204f558d023242437694625\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T12:56:33Z\\\",\\\"message\\\":\\\"go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 12:56:33.207496 6692 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 12:56:33.207580 6692 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 12:56:33.207656 6692 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 12:56:33.208012 6692 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 12:56:33.208030 6692 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 12:56:33.208054 6692 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 12:56:33.208069 6692 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 12:56:33.208132 6692 factory.go:656] Stopping watch factory\\\\nI1203 12:56:33.208149 6692 ovnkube.go:599] Stopped ovnkube\\\\nI1203 12:56:33.208174 6692 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 12:56:33.208187 6692 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 12:56:33.208197 6692 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1203 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:57:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\"
:\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dfc7520e7cb1e09eb10bf9615dbae11d7e73474f1b73eed481d18269b65aef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9nf52\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:05Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.681444 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2p8xn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"957171e6-7b64-4d92-b690-342e3251ed8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d417b35a888f61c40a58512bcff329faed85f3b4f8d1d29a7c6f2f43634a9090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbhn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27878a8de45ccf0e02719e2a3f9d1339d8910
db5947d423a3e7400359cc96253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbhn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2p8xn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:05Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.692250 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da0bee0b-131d-40cf-98b9-5dd5d6ddf8ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7998f6aae63a6935e19e2a1f2157aeb88cb8cb14e875d25a9dcc4f80f36f2fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b459b0d69a8744da31d3019da0622a77ccfbb8218fbc14b25c9f80af144a3278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b459b0d69a8744da31d3019da0622a77ccfbb8218fbc14b25c9f80af144a3278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:05Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.702451 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fszqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76bcf5c-6a62-4360-826d-ecac337c88ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d94370a14317ade02cb805783e09ffcd8c6ff07f35e9124e302028e6dfa3c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhtrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fszqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:05Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.719989 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.720037 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.720048 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.720066 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.720079 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:05Z","lastTransitionTime":"2025-12-03T12:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.774185 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:57:05 crc kubenswrapper[4986]: E1203 12:57:05.774396 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:09.774366615 +0000 UTC m=+149.240797876 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.774446 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.774490 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:57:05 crc kubenswrapper[4986]: E1203 12:57:05.774558 4986 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 12:57:05 crc kubenswrapper[4986]: E1203 12:57:05.774610 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 12:58:09.774597042 +0000 UTC m=+149.241028233 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 12:57:05 crc kubenswrapper[4986]: E1203 12:57:05.774622 4986 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 12:57:05 crc kubenswrapper[4986]: E1203 12:57:05.774669 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 12:58:09.774658963 +0000 UTC m=+149.241090154 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.822735 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.822773 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.822782 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.822795 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.822803 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:05Z","lastTransitionTime":"2025-12-03T12:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.875021 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.875067 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:57:05 crc kubenswrapper[4986]: E1203 12:57:05.875177 4986 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 12:57:05 crc kubenswrapper[4986]: E1203 12:57:05.875192 4986 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 12:57:05 crc kubenswrapper[4986]: E1203 12:57:05.875204 4986 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 12:57:05 crc kubenswrapper[4986]: E1203 12:57:05.875227 4986 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 12:57:05 crc 
kubenswrapper[4986]: E1203 12:57:05.875250 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-03 12:58:09.875237374 +0000 UTC m=+149.341668565 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 12:57:05 crc kubenswrapper[4986]: E1203 12:57:05.875258 4986 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 12:57:05 crc kubenswrapper[4986]: E1203 12:57:05.875274 4986 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 12:57:05 crc kubenswrapper[4986]: E1203 12:57:05.875354 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-03 12:58:09.875334637 +0000 UTC m=+149.341765898 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.925060 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.925100 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.925111 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.925127 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.925138 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:05Z","lastTransitionTime":"2025-12-03T12:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.942395 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.942420 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.942417 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:57:05 crc kubenswrapper[4986]: I1203 12:57:05.942395 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:57:05 crc kubenswrapper[4986]: E1203 12:57:05.942539 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:57:05 crc kubenswrapper[4986]: E1203 12:57:05.942592 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl2mt" podUID="ea24f625-ded4-4e37-a23b-f96fe691b0dd" Dec 03 12:57:05 crc kubenswrapper[4986]: E1203 12:57:05.942681 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:57:05 crc kubenswrapper[4986]: E1203 12:57:05.942792 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:57:06 crc kubenswrapper[4986]: I1203 12:57:06.028053 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:06 crc kubenswrapper[4986]: I1203 12:57:06.028133 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:06 crc kubenswrapper[4986]: I1203 12:57:06.028148 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:06 crc kubenswrapper[4986]: I1203 12:57:06.028169 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:06 crc kubenswrapper[4986]: I1203 12:57:06.028182 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:06Z","lastTransitionTime":"2025-12-03T12:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:06 crc kubenswrapper[4986]: I1203 12:57:06.130776 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:06 crc kubenswrapper[4986]: I1203 12:57:06.130826 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:06 crc kubenswrapper[4986]: I1203 12:57:06.130851 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:06 crc kubenswrapper[4986]: I1203 12:57:06.130871 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:06 crc kubenswrapper[4986]: I1203 12:57:06.130885 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:06Z","lastTransitionTime":"2025-12-03T12:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:06 crc kubenswrapper[4986]: I1203 12:57:06.233686 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:06 crc kubenswrapper[4986]: I1203 12:57:06.233756 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:06 crc kubenswrapper[4986]: I1203 12:57:06.233774 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:06 crc kubenswrapper[4986]: I1203 12:57:06.233802 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:06 crc kubenswrapper[4986]: I1203 12:57:06.233821 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:06Z","lastTransitionTime":"2025-12-03T12:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:06 crc kubenswrapper[4986]: I1203 12:57:06.336631 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:06 crc kubenswrapper[4986]: I1203 12:57:06.336700 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:06 crc kubenswrapper[4986]: I1203 12:57:06.336717 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:06 crc kubenswrapper[4986]: I1203 12:57:06.336740 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:06 crc kubenswrapper[4986]: I1203 12:57:06.336759 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:06Z","lastTransitionTime":"2025-12-03T12:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:06 crc kubenswrapper[4986]: I1203 12:57:06.439343 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:06 crc kubenswrapper[4986]: I1203 12:57:06.439403 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:06 crc kubenswrapper[4986]: I1203 12:57:06.439420 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:06 crc kubenswrapper[4986]: I1203 12:57:06.439443 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:06 crc kubenswrapper[4986]: I1203 12:57:06.439461 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:06Z","lastTransitionTime":"2025-12-03T12:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:06 crc kubenswrapper[4986]: I1203 12:57:06.542116 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:06 crc kubenswrapper[4986]: I1203 12:57:06.542165 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:06 crc kubenswrapper[4986]: I1203 12:57:06.542202 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:06 crc kubenswrapper[4986]: I1203 12:57:06.542222 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:06 crc kubenswrapper[4986]: I1203 12:57:06.542238 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:06Z","lastTransitionTime":"2025-12-03T12:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:06 crc kubenswrapper[4986]: I1203 12:57:06.644705 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:06 crc kubenswrapper[4986]: I1203 12:57:06.644783 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:06 crc kubenswrapper[4986]: I1203 12:57:06.644794 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:06 crc kubenswrapper[4986]: I1203 12:57:06.644814 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:06 crc kubenswrapper[4986]: I1203 12:57:06.644829 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:06Z","lastTransitionTime":"2025-12-03T12:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:06 crc kubenswrapper[4986]: I1203 12:57:06.748247 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:06 crc kubenswrapper[4986]: I1203 12:57:06.748379 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:06 crc kubenswrapper[4986]: I1203 12:57:06.748405 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:06 crc kubenswrapper[4986]: I1203 12:57:06.748441 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:06 crc kubenswrapper[4986]: I1203 12:57:06.748500 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:06Z","lastTransitionTime":"2025-12-03T12:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:06 crc kubenswrapper[4986]: I1203 12:57:06.851237 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:06 crc kubenswrapper[4986]: I1203 12:57:06.851323 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:06 crc kubenswrapper[4986]: I1203 12:57:06.851337 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:06 crc kubenswrapper[4986]: I1203 12:57:06.851357 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:06 crc kubenswrapper[4986]: I1203 12:57:06.851373 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:06Z","lastTransitionTime":"2025-12-03T12:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:06 crc kubenswrapper[4986]: I1203 12:57:06.954630 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:06 crc kubenswrapper[4986]: I1203 12:57:06.954692 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:06 crc kubenswrapper[4986]: I1203 12:57:06.954715 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:06 crc kubenswrapper[4986]: I1203 12:57:06.954744 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:06 crc kubenswrapper[4986]: I1203 12:57:06.954768 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:06Z","lastTransitionTime":"2025-12-03T12:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:07 crc kubenswrapper[4986]: I1203 12:57:07.057276 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:07 crc kubenswrapper[4986]: I1203 12:57:07.057485 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:07 crc kubenswrapper[4986]: I1203 12:57:07.057516 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:07 crc kubenswrapper[4986]: I1203 12:57:07.057550 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:07 crc kubenswrapper[4986]: I1203 12:57:07.057572 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:07Z","lastTransitionTime":"2025-12-03T12:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:07 crc kubenswrapper[4986]: I1203 12:57:07.160428 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:07 crc kubenswrapper[4986]: I1203 12:57:07.160491 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:07 crc kubenswrapper[4986]: I1203 12:57:07.160505 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:07 crc kubenswrapper[4986]: I1203 12:57:07.160524 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:07 crc kubenswrapper[4986]: I1203 12:57:07.160538 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:07Z","lastTransitionTime":"2025-12-03T12:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:07 crc kubenswrapper[4986]: I1203 12:57:07.263494 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:07 crc kubenswrapper[4986]: I1203 12:57:07.263533 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:07 crc kubenswrapper[4986]: I1203 12:57:07.263541 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:07 crc kubenswrapper[4986]: I1203 12:57:07.263560 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:07 crc kubenswrapper[4986]: I1203 12:57:07.263569 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:07Z","lastTransitionTime":"2025-12-03T12:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:07 crc kubenswrapper[4986]: I1203 12:57:07.366090 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:07 crc kubenswrapper[4986]: I1203 12:57:07.366137 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:07 crc kubenswrapper[4986]: I1203 12:57:07.366148 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:07 crc kubenswrapper[4986]: I1203 12:57:07.366172 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:07 crc kubenswrapper[4986]: I1203 12:57:07.366184 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:07Z","lastTransitionTime":"2025-12-03T12:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:07 crc kubenswrapper[4986]: I1203 12:57:07.468455 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:07 crc kubenswrapper[4986]: I1203 12:57:07.468504 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:07 crc kubenswrapper[4986]: I1203 12:57:07.468516 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:07 crc kubenswrapper[4986]: I1203 12:57:07.468534 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:07 crc kubenswrapper[4986]: I1203 12:57:07.468547 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:07Z","lastTransitionTime":"2025-12-03T12:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:07 crc kubenswrapper[4986]: I1203 12:57:07.570736 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:07 crc kubenswrapper[4986]: I1203 12:57:07.570768 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:07 crc kubenswrapper[4986]: I1203 12:57:07.570777 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:07 crc kubenswrapper[4986]: I1203 12:57:07.570792 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:07 crc kubenswrapper[4986]: I1203 12:57:07.570801 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:07Z","lastTransitionTime":"2025-12-03T12:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:07 crc kubenswrapper[4986]: I1203 12:57:07.673023 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:07 crc kubenswrapper[4986]: I1203 12:57:07.673076 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:07 crc kubenswrapper[4986]: I1203 12:57:07.673088 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:07 crc kubenswrapper[4986]: I1203 12:57:07.673105 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:07 crc kubenswrapper[4986]: I1203 12:57:07.673116 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:07Z","lastTransitionTime":"2025-12-03T12:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:07 crc kubenswrapper[4986]: I1203 12:57:07.774857 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:07 crc kubenswrapper[4986]: I1203 12:57:07.774931 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:07 crc kubenswrapper[4986]: I1203 12:57:07.774945 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:07 crc kubenswrapper[4986]: I1203 12:57:07.774963 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:07 crc kubenswrapper[4986]: I1203 12:57:07.774975 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:07Z","lastTransitionTime":"2025-12-03T12:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:07 crc kubenswrapper[4986]: I1203 12:57:07.877367 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:07 crc kubenswrapper[4986]: I1203 12:57:07.877413 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:07 crc kubenswrapper[4986]: I1203 12:57:07.877425 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:07 crc kubenswrapper[4986]: I1203 12:57:07.877441 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:07 crc kubenswrapper[4986]: I1203 12:57:07.877452 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:07Z","lastTransitionTime":"2025-12-03T12:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:57:07 crc kubenswrapper[4986]: I1203 12:57:07.943323 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:57:07 crc kubenswrapper[4986]: I1203 12:57:07.943330 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:57:07 crc kubenswrapper[4986]: I1203 12:57:07.943475 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:57:07 crc kubenswrapper[4986]: E1203 12:57:07.943725 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl2mt" podUID="ea24f625-ded4-4e37-a23b-f96fe691b0dd" Dec 03 12:57:07 crc kubenswrapper[4986]: E1203 12:57:07.943801 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:57:07 crc kubenswrapper[4986]: E1203 12:57:07.943881 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:57:07 crc kubenswrapper[4986]: I1203 12:57:07.943935 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:57:07 crc kubenswrapper[4986]: E1203 12:57:07.944046 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:57:07 crc kubenswrapper[4986]: I1203 12:57:07.980854 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:07 crc kubenswrapper[4986]: I1203 12:57:07.980936 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:07 crc kubenswrapper[4986]: I1203 12:57:07.980950 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:07 crc kubenswrapper[4986]: I1203 12:57:07.980966 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:07 crc kubenswrapper[4986]: I1203 12:57:07.980985 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:07Z","lastTransitionTime":"2025-12-03T12:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.083775 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.083882 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.083896 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.083921 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.083934 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:08Z","lastTransitionTime":"2025-12-03T12:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.186320 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.186352 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.186361 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.186375 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.186382 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:08Z","lastTransitionTime":"2025-12-03T12:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.288344 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.288426 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.288440 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.288460 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.288474 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:08Z","lastTransitionTime":"2025-12-03T12:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.391035 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.391088 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.391101 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.391121 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.391137 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:08Z","lastTransitionTime":"2025-12-03T12:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.416228 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9nf52_d3a45156-295b-4093-80e7-2059f81ddbd7/ovnkube-controller/3.log" Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.417203 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9nf52_d3a45156-295b-4093-80e7-2059f81ddbd7/ovnkube-controller/2.log" Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.420707 4986 generic.go:334] "Generic (PLEG): container finished" podID="d3a45156-295b-4093-80e7-2059f81ddbd7" containerID="70229ee3e0b6f1c7ad17628a8b9b3b456957d5cee34dc20aecb98f45140d1e88" exitCode=1 Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.420751 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" event={"ID":"d3a45156-295b-4093-80e7-2059f81ddbd7","Type":"ContainerDied","Data":"70229ee3e0b6f1c7ad17628a8b9b3b456957d5cee34dc20aecb98f45140d1e88"} Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.420789 4986 scope.go:117] "RemoveContainer" containerID="8b002220fe79bb266ddd4f28ab0b9f29bb0ea1e8d204f558d023242437694625" Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.421367 4986 scope.go:117] "RemoveContainer" containerID="70229ee3e0b6f1c7ad17628a8b9b3b456957d5cee34dc20aecb98f45140d1e88" Dec 03 12:57:08 crc kubenswrapper[4986]: E1203 12:57:08.421513 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-9nf52_openshift-ovn-kubernetes(d3a45156-295b-4093-80e7-2059f81ddbd7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" podUID="d3a45156-295b-4093-80e7-2059f81ddbd7" Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.433810 4986 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da0bee0b-131d-40cf-98b9-5dd5d6ddf8ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7998f6aae63a6935e19e2a1f2157aeb88cb8cb14e875d25a9dcc4f80f36f2fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b459b0d69a8744da31d3019da0622a77ccfbb8218fbc14b25c9f80af144a3278\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b459b0d69a8744da31d3019da0622a77ccfbb8218fbc14b25c9f80af144a3278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:08Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.443337 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fszqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76bcf5c-6a62-4360-826d-ecac337c88ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d94370a14317ade02cb805783e09ffcd8c6ff07f35e9124e302028e6dfa3c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhtrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fszqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:08Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.456233 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-px97g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97196b6d-75cc-4de4-8805-f9ce3fbd4230\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd2dabaec03ca7bc575d3b31582c870f9094270ef72a40fd7425e2cef5b54e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c28bf1aff27ba408262b5fc561e084ff3813ce95a7c14b57fea24d409eb33bea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T12:56:50Z\\\",\\\"message\\\":\\\"2025-12-03T12:56:04+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2e5072a4-4b1a-448b-a811-e2aaaa751560\\\\n2025-12-03T12:56:04+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2e5072a4-4b1a-448b-a811-e2aaaa751560 to /host/opt/cni/bin/\\\\n2025-12-03T12:56:05Z [verbose] multus-daemon started\\\\n2025-12-03T12:56:05Z [verbose] Readiness Indicator file check\\\\n2025-12-03T12:56:50Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v4j5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-px97g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:08Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.478258 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3a45156-295b-4093-80e7-2059f81ddbd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c79f84fc019d109bac446dd058b4bafb0b516a13fae47b1924e4f7690658ba8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be6405facfa7726cbe56c48184cba0123774e8e34d3ceb9e18daed6f35560c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d77d078572dbe7f11e91ece0d25e76bd17004167dabce267a2b568e798aae158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2843fb3fcbadb926b104a9f7c4084468b36293aebed6c692c29419a0457b62a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d94eca13821749d7e15a350c411a831d2887f4211ed092a4fb57e26e49fcd9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ff8c1f1602e57602cae3f46dd3d2bf8d7abad6bd0952f4347152678e8606d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70229ee3e0b6f1c7ad17628a8b9b3b456957d5cee34dc20aecb98f45140d1e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b002220fe79bb266ddd4f28ab0b9f29bb0ea1e8d204f558d023242437694625\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T12:56:33Z\\\",\\\"message\\\":\\\"go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 12:56:33.207496 6692 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 
12:56:33.207580 6692 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 12:56:33.207656 6692 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 12:56:33.208012 6692 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 12:56:33.208030 6692 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 12:56:33.208054 6692 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 12:56:33.208069 6692 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 12:56:33.208132 6692 factory.go:656] Stopping watch factory\\\\nI1203 12:56:33.208149 6692 ovnkube.go:599] Stopped ovnkube\\\\nI1203 12:56:33.208174 6692 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 12:56:33.208187 6692 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 12:56:33.208197 6692 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1203 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70229ee3e0b6f1c7ad17628a8b9b3b456957d5cee34dc20aecb98f45140d1e88\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T12:57:07Z\\\",\\\"message\\\":\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nF1203 12:57:05.826558 7110 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has 
stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:05Z is after 2025-08-24T17:21:41Z]\\\\nI1203 12:57:05.826564 7110 services_controller.go:360] Finished syncing service olm-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 1.116458ms\\\\nI1203 12:57:05.826519 7110 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:57:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",
\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dfc7520e7cb1e09eb10bf9615dbae11d7e73474f1b73eed481d18269b65aef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9nf52\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:08Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.491369 4986 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2p8xn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"957171e6-7b64-4d92-b690-342e3251ed8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d417b35a888f61c40a58512bcff329faed85f3b4f8d1d29a7c6f2f43634a9090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbhn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27878a8de45ccf0e02719e2a3f9d1339d8910db5947d423a3e7400359cc96253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbhn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2p8xn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:08Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.493914 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.493956 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 
12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.493968 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.493986 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.493999 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:08Z","lastTransitionTime":"2025-12-03T12:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.508018 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee22d4f3-8488-4497-a757-a6dd51e84539\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4fc247caf16eab425363e12219a76c40022f0daf0bc9c8b52f4eb8e3726f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86d62d991d6516ed42f7cad655018e1ebc85c8c1b100d624c1c2e5e8f616cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ba9819f4f1746f03a558617fc6ee14f5627c37c362d1198a303b6f1854c7bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79cba0b7394bf25be648502fc84a916d8f2fa0949edce20b5a748046ce9b6f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:08Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.525552 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:08Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.537256 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0d71a3ffdbe593936c83e92e92e286eaff092333bb3dd54cb5bc2825ca46fbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402aeaef03d8d3c684f0d8e7fe8a080bc02d761104e3c1095b3233aade2b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:08Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.556815 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rkgd9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a939a03f-0eec-49cb-9b23-40e359e427d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b854d526fa4ac2b270c7f27c1b35b1ff0fbc671e40a928649b578f926d6b51b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd633caedf88a1bec633cef61b8b30ae422b8c8d91a012395831a09d398f0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cd633caedf88a1bec633cef61b8b30ae422b8c8d91a012395831a09d398f0f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b9e
8e1f9326660fe6d7c89af18c5f7dd1134e9f2e866da8ba9c795e13cb57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b9e8e1f9326660fe6d7c89af18c5f7dd1134e9f2e866da8ba9c795e13cb57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ef54ee9f22072e2b437f38f6d06f24e6f1da02555505bc65c488b5818c87a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ef54ee9f22072e2b437f38f6d06f24e6f1da02555505bc65c488b5818c87a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:12Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35fecc2332fdab60719b864653301c5e8322b13ace401a0ff016ff3dc09cd3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35fecc2332fdab60719b864653301c5e8322b13ace401a0ff016ff3dc09cd3d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rkgd9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:08Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.568142 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rl2mt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea24f625-ded4-4e37-a23b-f96fe691b0dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhr82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhr82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rl2mt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:08Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:08 crc 
kubenswrapper[4986]: I1203 12:57:08.581731 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231ed674-1d19-4ee2-b29c-a1b7453ed531\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0ca216d3d112
1b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T12:56:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 12:56:01.437086 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 12:56:01.437205 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 12:56:01.438166 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972143202/tls.crt::/tmp/serving-cert-1972143202/tls.key\\\\\\\"\\\\nI1203 12:56:02.167657 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 12:56:02.174337 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 12:56:02.174360 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 12:56:02.174377 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 12:56:02.174382 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 12:56:02.178753 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 12:56:02.179168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 12:56:02.179184 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 12:56:02.179187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 
12:56:02.179190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 12:56:02.178775 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 12:56:02.181220 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:08Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.595732 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.595758 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.595768 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.595785 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.595794 4986 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:08Z","lastTransitionTime":"2025-12-03T12:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.602702 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d51894dd-38f0-413a-adaf-22d196474bed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa98006a8736394300cdb9ef3331d2a23e1b3af2e2203f757c51ca9c700d26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7d24b5f1325cfb2d210cb9f311026ab370c91deecc5d0d951bcd4eda79816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9bb9d940d346f9895e24d98173668ea2d0e68fd3035e5b03475bbc936582958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7fa0d673f037c8edcc75ee30b1bbeea73da55d5459347dacc9251fffbb38be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4007e9ed33227938a5a3656fd74958aef599418fec4cd2048dfa7dd88924dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:08Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.619523 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b923e52e6284c8adca87e7d3d801a432a676e4389d7ab3c0cba584c5b7d96f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:08Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.633974 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d66d06e79ffb2ecd12acc0380e5e4839904147f9194e48277ca667df13a0cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:08Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.645864 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://535b7b05dbe9b47808f3d358a1f370b696a5d5bb4dd96b049ac2cbe3d18ea065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ab87b22153ba31fa79d2d210e695b274c2ddc878ad679c605aa8fb716534d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xggpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2025-12-03T12:57:08Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.657768 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1aad898-ad11-4c01-95c2-8a4d293aed99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77c695e6dd4e9c2fe64a41b90d59aebe2a6a54a1985702749b0a68eed6551f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\
\\":\\\"cri-o://e0d8158a257a7d5b17302a2d6f4c81326b8c24487c18132807f91366cdb3457b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16164ea50f4f29916a5878eee07e8debc4268fae086f9086aa4c73c4d03dd082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://865442eb9652b14d8c0642927e9f59c60344dfcffb5bd2d0cc0ec572ede971c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865442eb9652b14d8c0642927e9f59c60344dfcffb5bd2d0cc0ec572ede971c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:08Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.670350 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:08Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.680115 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:08Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.687739 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8lcrn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75e4c223-9952-4d1b-b83b-5c45cfc51432\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b2af5d43c7842f129b6521e5b3b1963f6060b3e0c552a771c7df6a2a9ef867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnr67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8lcrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:08Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.698382 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.698453 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.698465 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.698480 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.698490 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:08Z","lastTransitionTime":"2025-12-03T12:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.800791 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.800840 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.800853 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.800874 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.800886 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:08Z","lastTransitionTime":"2025-12-03T12:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.903357 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.903413 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.903427 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.903443 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:08 crc kubenswrapper[4986]: I1203 12:57:08.903456 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:08Z","lastTransitionTime":"2025-12-03T12:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:09 crc kubenswrapper[4986]: I1203 12:57:09.006274 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:09 crc kubenswrapper[4986]: I1203 12:57:09.006373 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:09 crc kubenswrapper[4986]: I1203 12:57:09.006386 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:09 crc kubenswrapper[4986]: I1203 12:57:09.006404 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:09 crc kubenswrapper[4986]: I1203 12:57:09.006418 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:09Z","lastTransitionTime":"2025-12-03T12:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:09 crc kubenswrapper[4986]: I1203 12:57:09.108725 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:09 crc kubenswrapper[4986]: I1203 12:57:09.108776 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:09 crc kubenswrapper[4986]: I1203 12:57:09.108790 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:09 crc kubenswrapper[4986]: I1203 12:57:09.108807 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:09 crc kubenswrapper[4986]: I1203 12:57:09.108819 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:09Z","lastTransitionTime":"2025-12-03T12:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:09 crc kubenswrapper[4986]: I1203 12:57:09.211351 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:09 crc kubenswrapper[4986]: I1203 12:57:09.211393 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:09 crc kubenswrapper[4986]: I1203 12:57:09.211449 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:09 crc kubenswrapper[4986]: I1203 12:57:09.211466 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:09 crc kubenswrapper[4986]: I1203 12:57:09.211478 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:09Z","lastTransitionTime":"2025-12-03T12:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:09 crc kubenswrapper[4986]: I1203 12:57:09.314243 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:09 crc kubenswrapper[4986]: I1203 12:57:09.314314 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:09 crc kubenswrapper[4986]: I1203 12:57:09.314325 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:09 crc kubenswrapper[4986]: I1203 12:57:09.314342 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:09 crc kubenswrapper[4986]: I1203 12:57:09.314352 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:09Z","lastTransitionTime":"2025-12-03T12:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:09 crc kubenswrapper[4986]: I1203 12:57:09.417099 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:09 crc kubenswrapper[4986]: I1203 12:57:09.417379 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:09 crc kubenswrapper[4986]: I1203 12:57:09.417452 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:09 crc kubenswrapper[4986]: I1203 12:57:09.417524 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:09 crc kubenswrapper[4986]: I1203 12:57:09.417592 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:09Z","lastTransitionTime":"2025-12-03T12:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:09 crc kubenswrapper[4986]: I1203 12:57:09.424472 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9nf52_d3a45156-295b-4093-80e7-2059f81ddbd7/ovnkube-controller/3.log" Dec 03 12:57:09 crc kubenswrapper[4986]: I1203 12:57:09.520488 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:09 crc kubenswrapper[4986]: I1203 12:57:09.520779 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:09 crc kubenswrapper[4986]: I1203 12:57:09.520953 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:09 crc kubenswrapper[4986]: I1203 12:57:09.521105 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:09 crc kubenswrapper[4986]: I1203 12:57:09.521210 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:09Z","lastTransitionTime":"2025-12-03T12:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:09 crc kubenswrapper[4986]: I1203 12:57:09.624000 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:09 crc kubenswrapper[4986]: I1203 12:57:09.624072 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:09 crc kubenswrapper[4986]: I1203 12:57:09.624084 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:09 crc kubenswrapper[4986]: I1203 12:57:09.624106 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:09 crc kubenswrapper[4986]: I1203 12:57:09.624120 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:09Z","lastTransitionTime":"2025-12-03T12:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:09 crc kubenswrapper[4986]: I1203 12:57:09.728191 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:09 crc kubenswrapper[4986]: I1203 12:57:09.728234 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:09 crc kubenswrapper[4986]: I1203 12:57:09.728248 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:09 crc kubenswrapper[4986]: I1203 12:57:09.728265 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:09 crc kubenswrapper[4986]: I1203 12:57:09.728293 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:09Z","lastTransitionTime":"2025-12-03T12:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:09 crc kubenswrapper[4986]: I1203 12:57:09.831048 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:09 crc kubenswrapper[4986]: I1203 12:57:09.831103 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:09 crc kubenswrapper[4986]: I1203 12:57:09.831118 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:09 crc kubenswrapper[4986]: I1203 12:57:09.831136 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:09 crc kubenswrapper[4986]: I1203 12:57:09.831148 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:09Z","lastTransitionTime":"2025-12-03T12:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:09 crc kubenswrapper[4986]: I1203 12:57:09.935275 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:09 crc kubenswrapper[4986]: I1203 12:57:09.935586 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:09 crc kubenswrapper[4986]: I1203 12:57:09.935667 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:09 crc kubenswrapper[4986]: I1203 12:57:09.935736 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:09 crc kubenswrapper[4986]: I1203 12:57:09.935800 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:09Z","lastTransitionTime":"2025-12-03T12:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:57:09 crc kubenswrapper[4986]: I1203 12:57:09.942745 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:57:09 crc kubenswrapper[4986]: I1203 12:57:09.942815 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:57:09 crc kubenswrapper[4986]: E1203 12:57:09.942922 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rl2mt" podUID="ea24f625-ded4-4e37-a23b-f96fe691b0dd" Dec 03 12:57:09 crc kubenswrapper[4986]: I1203 12:57:09.942917 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:57:09 crc kubenswrapper[4986]: E1203 12:57:09.943000 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:57:09 crc kubenswrapper[4986]: E1203 12:57:09.943033 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:57:09 crc kubenswrapper[4986]: I1203 12:57:09.943240 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:57:09 crc kubenswrapper[4986]: E1203 12:57:09.943406 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:57:10 crc kubenswrapper[4986]: I1203 12:57:10.039505 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:10 crc kubenswrapper[4986]: I1203 12:57:10.040049 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:10 crc kubenswrapper[4986]: I1203 12:57:10.040142 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:10 crc kubenswrapper[4986]: I1203 12:57:10.040208 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:10 crc kubenswrapper[4986]: I1203 12:57:10.040277 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:10Z","lastTransitionTime":"2025-12-03T12:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:10 crc kubenswrapper[4986]: I1203 12:57:10.143196 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:10 crc kubenswrapper[4986]: I1203 12:57:10.143243 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:10 crc kubenswrapper[4986]: I1203 12:57:10.143258 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:10 crc kubenswrapper[4986]: I1203 12:57:10.143277 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:10 crc kubenswrapper[4986]: I1203 12:57:10.143342 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:10Z","lastTransitionTime":"2025-12-03T12:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:10 crc kubenswrapper[4986]: I1203 12:57:10.245790 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:10 crc kubenswrapper[4986]: I1203 12:57:10.245832 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:10 crc kubenswrapper[4986]: I1203 12:57:10.245842 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:10 crc kubenswrapper[4986]: I1203 12:57:10.245858 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:10 crc kubenswrapper[4986]: I1203 12:57:10.245871 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:10Z","lastTransitionTime":"2025-12-03T12:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:10 crc kubenswrapper[4986]: I1203 12:57:10.349036 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:10 crc kubenswrapper[4986]: I1203 12:57:10.349085 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:10 crc kubenswrapper[4986]: I1203 12:57:10.349095 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:10 crc kubenswrapper[4986]: I1203 12:57:10.349112 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:10 crc kubenswrapper[4986]: I1203 12:57:10.349121 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:10Z","lastTransitionTime":"2025-12-03T12:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:10 crc kubenswrapper[4986]: I1203 12:57:10.452703 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:10 crc kubenswrapper[4986]: I1203 12:57:10.452749 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:10 crc kubenswrapper[4986]: I1203 12:57:10.452757 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:10 crc kubenswrapper[4986]: I1203 12:57:10.452771 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:10 crc kubenswrapper[4986]: I1203 12:57:10.452781 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:10Z","lastTransitionTime":"2025-12-03T12:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:10 crc kubenswrapper[4986]: I1203 12:57:10.555263 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:10 crc kubenswrapper[4986]: I1203 12:57:10.555318 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:10 crc kubenswrapper[4986]: I1203 12:57:10.555327 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:10 crc kubenswrapper[4986]: I1203 12:57:10.555341 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:10 crc kubenswrapper[4986]: I1203 12:57:10.555351 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:10Z","lastTransitionTime":"2025-12-03T12:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:10 crc kubenswrapper[4986]: I1203 12:57:10.657906 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:10 crc kubenswrapper[4986]: I1203 12:57:10.657952 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:10 crc kubenswrapper[4986]: I1203 12:57:10.657962 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:10 crc kubenswrapper[4986]: I1203 12:57:10.657979 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:10 crc kubenswrapper[4986]: I1203 12:57:10.657990 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:10Z","lastTransitionTime":"2025-12-03T12:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:10 crc kubenswrapper[4986]: I1203 12:57:10.760558 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:10 crc kubenswrapper[4986]: I1203 12:57:10.760596 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:10 crc kubenswrapper[4986]: I1203 12:57:10.760604 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:10 crc kubenswrapper[4986]: I1203 12:57:10.760619 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:10 crc kubenswrapper[4986]: I1203 12:57:10.760630 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:10Z","lastTransitionTime":"2025-12-03T12:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:10 crc kubenswrapper[4986]: I1203 12:57:10.862716 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:10 crc kubenswrapper[4986]: I1203 12:57:10.862832 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:10 crc kubenswrapper[4986]: I1203 12:57:10.862870 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:10 crc kubenswrapper[4986]: I1203 12:57:10.862889 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:10 crc kubenswrapper[4986]: I1203 12:57:10.862903 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:10Z","lastTransitionTime":"2025-12-03T12:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:10 crc kubenswrapper[4986]: I1203 12:57:10.956774 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee22d4f3-8488-4497-a757-a6dd51e84539\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4fc247caf16eab425363e12219a76c40022f0daf0bc9c8b52f4eb8e3726f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86d62d991d
6516ed42f7cad655018e1ebc85c8c1b100d624c1c2e5e8f616cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ba9819f4f1746f03a558617fc6ee14f5627c37c362d1198a303b6f1854c7bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79cba0b7394bf25be648502fc84a916d8f2fa0949edce20b5a748046ce9b6f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:10Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:10 crc kubenswrapper[4986]: I1203 12:57:10.965845 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:10 crc kubenswrapper[4986]: I1203 12:57:10.965899 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:10 crc kubenswrapper[4986]: I1203 12:57:10.965911 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:10 crc kubenswrapper[4986]: I1203 12:57:10.965930 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:10 crc kubenswrapper[4986]: I1203 12:57:10.965945 4986 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:10Z","lastTransitionTime":"2025-12-03T12:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:57:10 crc kubenswrapper[4986]: I1203 12:57:10.972667 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:10Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:10 crc kubenswrapper[4986]: I1203 12:57:10.984959 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0d71a3ffdbe593936c83e92e92e286eaff092333bb3dd54cb5bc2825ca46fbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402aeaef03d8d3c684f0d8e7fe8a080bc02d761104e3c1095b3233aade2b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:10Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:10.999775 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rkgd9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a939a03f-0eec-49cb-9b23-40e359e427d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b854d526fa4ac2b270c7f27c1b35b1ff0fbc671e40a928649b578f926d6b51b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc042debcddb7ea2233d6c224020a32acd8c996d5334d8e6eaae8d1329bb3774\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7128a1dd657b9829c1af31ef1a2f5ad08d2bfd65e5d352cf5a69a5be4e9aae9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd633caedf88a1bec633cef61b8b30ae422b8c8d91a012395831a09d398f0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cd633caedf88a1bec633cef61b8b30ae422b8c8d91a012395831a09d398f0f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b9e
8e1f9326660fe6d7c89af18c5f7dd1134e9f2e866da8ba9c795e13cb57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b9e8e1f9326660fe6d7c89af18c5f7dd1134e9f2e866da8ba9c795e13cb57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ef54ee9f22072e2b437f38f6d06f24e6f1da02555505bc65c488b5818c87a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ef54ee9f22072e2b437f38f6d06f24e6f1da02555505bc65c488b5818c87a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:12Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35fecc2332fdab60719b864653301c5e8322b13ace401a0ff016ff3dc09cd3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35fecc2332fdab60719b864653301c5e8322b13ace401a0ff016ff3dc09cd3d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9x5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rkgd9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:10Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.011768 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rl2mt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea24f625-ded4-4e37-a23b-f96fe691b0dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhr82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhr82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rl2mt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:11Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:11 crc 
kubenswrapper[4986]: I1203 12:57:11.024209 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231ed674-1d19-4ee2-b29c-a1b7453ed531\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0ca216d3d112
1b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T12:56:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 12:56:01.437086 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 12:56:01.437205 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 12:56:01.438166 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972143202/tls.crt::/tmp/serving-cert-1972143202/tls.key\\\\\\\"\\\\nI1203 12:56:02.167657 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 12:56:02.174337 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 12:56:02.174360 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 12:56:02.174377 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 12:56:02.174382 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 12:56:02.178753 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 12:56:02.179168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 12:56:02.179180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 12:56:02.179184 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 12:56:02.179187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 
12:56:02.179190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 12:56:02.178775 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 12:56:02.181220 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:11Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.041737 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d51894dd-38f0-413a-adaf-22d196474bed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa98006a8736394300cdb9ef3331d2a23e1b3af2e2203f757c51ca9c700d26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7d24b5f1325cfb2d210cb9f311026ab370c91deecc5d0d951bcd4eda79816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9bb9d940d346f9895e24d98173668ea2d0e68fd3035e5b03475bbc936582958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7fa0d673f037c8edcc75ee30b1bbeea73da55d5459347dacc9251fffbb38be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4007e9ed33227938a5a3656fd74958aef599418fec4cd2048dfa7dd88924dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fb412bfaef1de92aa6be2e23c0dd73fe39eebf29b6814fb3bb6fd0281ddbbd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c66ee903635427e08f96e3f96c512c4d25e79bf2fc14eb0c699b3b9a617cfeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fb6e4ecd7f73267d49261875a00a51e5e36e6079170f694346e9d6c3df8f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:11Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.053998 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b923e52e6284c8adca87e7d3d801a432a676e4389d7ab3c0cba584c5b7d96f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:11Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.064868 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d66d06e79ffb2ecd12acc0380e5e4839904147f9194e48277ca667df13a0cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T12:57:11Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.069357 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.069458 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.069473 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.069490 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.069537 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:11Z","lastTransitionTime":"2025-12-03T12:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.075954 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://535b7b05dbe9b47808f3d358a1f370b696a5d5bb4dd96b049ac2cbe3d18ea065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ab87b22153ba31fa79d2d210e695b274c2ddc878ad679c605aa8fb716534d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgppd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xggpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:11Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.086905 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1aad898-ad11-4c01-95c2-8a4d293aed99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77c695e6dd4e9c2fe64a41b90d59aebe2a6a54a1985702749b0a68eed6551f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0d8158a257a7d5b17302a2d6f4c81326b8c24487c18132807f91366cdb3457b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16164ea50f4f29916a5878eee07e8debc4268fae086f9086aa4c73c4d03dd082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://865442eb9652b14d8c0642927e9f59c60344dfcffb5bd2d0cc0ec572ede971c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://865442eb9652b14d8c0642927e9f59c60344dfcffb5bd2d0cc0ec572ede971c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:11Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.098396 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:11Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.109133 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:11Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.119099 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8lcrn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75e4c223-9952-4d1b-b83b-5c45cfc51432\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b2af5d43c7842f129b6521e5b3b1963f6060b3e0c552a771c7df6a2a9ef867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnr67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8lcrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:11Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.127723 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da0bee0b-131d-40cf-98b9-5dd5d6ddf8ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:55:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7998f6aae63a6935e19e2a1f2157aeb88cb8cb14e875d25a9dcc4f80f36f2fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a
42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b459b0d69a8744da31d3019da0622a77ccfbb8218fbc14b25c9f80af144a3278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b459b0d69a8744da31d3019da0622a77ccfbb8218fbc14b25c9f80af144a3278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:55:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:55:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:11Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:11 crc 
kubenswrapper[4986]: I1203 12:57:11.135413 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fszqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76bcf5c-6a62-4360-826d-ecac337c88ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d94370a14317ade02cb805783e09ffcd8c6ff07f35e9124e302028e6dfa3c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhtrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fszqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:11Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.147584 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-px97g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97196b6d-75cc-4de4-8805-f9ce3fbd4230\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd2dabaec03ca7bc575d3b31582c870f9094270ef72a40fd7425e2cef5b54e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e1
2f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c28bf1aff27ba408262b5fc561e084ff3813ce95a7c14b57fea24d409eb33bea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T12:56:50Z\\\",\\\"message\\\":\\\"2025-12-03T12:56:04+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2e5072a4-4b1a-448b-a811-e2aaaa751560\\\\n2025-12-03T12:56:04+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2e5072a4-4b1a-448b-a811-e2aaaa751560 to /host/opt/cni/bin/\\\\n2025-12-03T12:56:05Z [verbose] multus-daemon started\\\\n2025-12-03T12:56:05Z [verbose] Readiness Indicator file check\\\\n2025-12-03T12:56:50Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v4j5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-px97g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:11Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.163154 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3a45156-295b-4093-80e7-2059f81ddbd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c79f84fc019d109bac446dd058b4bafb0b516a13fae47b1924e4f7690658ba8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be6405facfa7726cbe56c48184cba0123774e8e34d3ceb9e18daed6f35560c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d77d078572dbe7f11e91ece0d25e76bd17004167dabce267a2b568e798aae158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2843fb3fcbadb926b104a9f7c4084468b36293aebed6c692c29419a0457b62a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d94eca13821749d7e15a350c411a831d2887f4211ed092a4fb57e26e49fcd9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ff8c1f1602e57602cae3f46dd3d2bf8d7abad6bd0952f4347152678e8606d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70229ee3e0b6f1c7ad17628a8b9b3b456957d5cee34dc20aecb98f45140d1e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b002220fe79bb266ddd4f28ab0b9f29bb0ea1e8d204f558d023242437694625\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T12:56:33Z\\\",\\\"message\\\":\\\"go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 12:56:33.207496 6692 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 
12:56:33.207580 6692 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 12:56:33.207656 6692 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 12:56:33.208012 6692 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 12:56:33.208030 6692 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 12:56:33.208054 6692 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 12:56:33.208069 6692 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 12:56:33.208132 6692 factory.go:656] Stopping watch factory\\\\nI1203 12:56:33.208149 6692 ovnkube.go:599] Stopped ovnkube\\\\nI1203 12:56:33.208174 6692 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 12:56:33.208187 6692 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 12:56:33.208197 6692 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1203 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70229ee3e0b6f1c7ad17628a8b9b3b456957d5cee34dc20aecb98f45140d1e88\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T12:57:07Z\\\",\\\"message\\\":\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nF1203 12:57:05.826558 7110 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has 
stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:05Z is after 2025-08-24T17:21:41Z]\\\\nI1203 12:57:05.826564 7110 services_controller.go:360] Finished syncing service olm-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 1.116458ms\\\\nI1203 12:57:05.826519 7110 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T12:57:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",
\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dfc7520e7cb1e09eb10bf9615dbae11d7e73474f1b73eed481d18269b65aef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T12:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T12:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j752\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9nf52\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:11Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.171810 4986 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.171848 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.171858 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.171875 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.171886 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:11Z","lastTransitionTime":"2025-12-03T12:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.172791 4986 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2p8xn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"957171e6-7b64-4d92-b690-342e3251ed8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T12:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d417b35a888f61c40a58512bcff329faed85f3b4f8d1d29a7c6f2f43634a9090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbhn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27878a8de45ccf0e02719e2a3f9d1339d8910db5947d423a3e7400359cc96253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T12:56:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbhn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T12:56:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2p8xn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T12:57:11Z is after 2025-08-24T17:21:41Z" Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.273831 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:11 crc 
kubenswrapper[4986]: I1203 12:57:11.274345 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.274431 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.274547 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.274619 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:11Z","lastTransitionTime":"2025-12-03T12:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.377339 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.377426 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.377441 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.377457 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.377469 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:11Z","lastTransitionTime":"2025-12-03T12:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.479264 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.479316 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.479324 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.479337 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.479346 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:11Z","lastTransitionTime":"2025-12-03T12:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.582714 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.582749 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.582758 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.582773 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.582783 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:11Z","lastTransitionTime":"2025-12-03T12:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.684736 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.685081 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.685214 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.685336 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.685436 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:11Z","lastTransitionTime":"2025-12-03T12:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.788011 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.788053 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.788067 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.788084 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.788098 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:11Z","lastTransitionTime":"2025-12-03T12:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.890475 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.890522 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.890534 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.890551 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.890562 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:11Z","lastTransitionTime":"2025-12-03T12:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.943314 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.943363 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.943596 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:57:11 crc kubenswrapper[4986]: E1203 12:57:11.943727 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:57:11 crc kubenswrapper[4986]: E1203 12:57:11.943906 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.944022 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:57:11 crc kubenswrapper[4986]: E1203 12:57:11.944069 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:57:11 crc kubenswrapper[4986]: E1203 12:57:11.944414 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl2mt" podUID="ea24f625-ded4-4e37-a23b-f96fe691b0dd" Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.992968 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.993015 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.993026 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.993043 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:11 crc kubenswrapper[4986]: I1203 12:57:11.993053 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:11Z","lastTransitionTime":"2025-12-03T12:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:12 crc kubenswrapper[4986]: I1203 12:57:12.094755 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:12 crc kubenswrapper[4986]: I1203 12:57:12.094789 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:12 crc kubenswrapper[4986]: I1203 12:57:12.094799 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:12 crc kubenswrapper[4986]: I1203 12:57:12.094814 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:12 crc kubenswrapper[4986]: I1203 12:57:12.094824 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:12Z","lastTransitionTime":"2025-12-03T12:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:12 crc kubenswrapper[4986]: I1203 12:57:12.196411 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:12 crc kubenswrapper[4986]: I1203 12:57:12.196445 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:12 crc kubenswrapper[4986]: I1203 12:57:12.196454 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:12 crc kubenswrapper[4986]: I1203 12:57:12.196466 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:12 crc kubenswrapper[4986]: I1203 12:57:12.196475 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:12Z","lastTransitionTime":"2025-12-03T12:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:12 crc kubenswrapper[4986]: I1203 12:57:12.298595 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:12 crc kubenswrapper[4986]: I1203 12:57:12.298661 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:12 crc kubenswrapper[4986]: I1203 12:57:12.298669 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:12 crc kubenswrapper[4986]: I1203 12:57:12.298682 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:12 crc kubenswrapper[4986]: I1203 12:57:12.298691 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:12Z","lastTransitionTime":"2025-12-03T12:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:12 crc kubenswrapper[4986]: I1203 12:57:12.401634 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:12 crc kubenswrapper[4986]: I1203 12:57:12.401680 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:12 crc kubenswrapper[4986]: I1203 12:57:12.401691 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:12 crc kubenswrapper[4986]: I1203 12:57:12.401756 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:12 crc kubenswrapper[4986]: I1203 12:57:12.401769 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:12Z","lastTransitionTime":"2025-12-03T12:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:12 crc kubenswrapper[4986]: I1203 12:57:12.505503 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:12 crc kubenswrapper[4986]: I1203 12:57:12.505548 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:12 crc kubenswrapper[4986]: I1203 12:57:12.505558 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:12 crc kubenswrapper[4986]: I1203 12:57:12.505574 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:12 crc kubenswrapper[4986]: I1203 12:57:12.505585 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:12Z","lastTransitionTime":"2025-12-03T12:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:12 crc kubenswrapper[4986]: I1203 12:57:12.608079 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:12 crc kubenswrapper[4986]: I1203 12:57:12.608124 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:12 crc kubenswrapper[4986]: I1203 12:57:12.608136 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:12 crc kubenswrapper[4986]: I1203 12:57:12.608151 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:12 crc kubenswrapper[4986]: I1203 12:57:12.608162 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:12Z","lastTransitionTime":"2025-12-03T12:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:12 crc kubenswrapper[4986]: I1203 12:57:12.710583 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:12 crc kubenswrapper[4986]: I1203 12:57:12.710617 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:12 crc kubenswrapper[4986]: I1203 12:57:12.710628 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:12 crc kubenswrapper[4986]: I1203 12:57:12.710642 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:12 crc kubenswrapper[4986]: I1203 12:57:12.710653 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:12Z","lastTransitionTime":"2025-12-03T12:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:12 crc kubenswrapper[4986]: I1203 12:57:12.814021 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:12 crc kubenswrapper[4986]: I1203 12:57:12.814079 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:12 crc kubenswrapper[4986]: I1203 12:57:12.814091 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:12 crc kubenswrapper[4986]: I1203 12:57:12.814112 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:12 crc kubenswrapper[4986]: I1203 12:57:12.814127 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:12Z","lastTransitionTime":"2025-12-03T12:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:12 crc kubenswrapper[4986]: I1203 12:57:12.917238 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:12 crc kubenswrapper[4986]: I1203 12:57:12.917309 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:12 crc kubenswrapper[4986]: I1203 12:57:12.917324 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:12 crc kubenswrapper[4986]: I1203 12:57:12.917346 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:12 crc kubenswrapper[4986]: I1203 12:57:12.917358 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:12Z","lastTransitionTime":"2025-12-03T12:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:13 crc kubenswrapper[4986]: I1203 12:57:13.019990 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:13 crc kubenswrapper[4986]: I1203 12:57:13.020027 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:13 crc kubenswrapper[4986]: I1203 12:57:13.020036 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:13 crc kubenswrapper[4986]: I1203 12:57:13.020055 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:13 crc kubenswrapper[4986]: I1203 12:57:13.020067 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:13Z","lastTransitionTime":"2025-12-03T12:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 12:57:13 crc kubenswrapper[4986]: I1203 12:57:13.102176 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 12:57:13 crc kubenswrapper[4986]: I1203 12:57:13.102245 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 12:57:13 crc kubenswrapper[4986]: I1203 12:57:13.102263 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 12:57:13 crc kubenswrapper[4986]: I1203 12:57:13.102365 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 12:57:13 crc kubenswrapper[4986]: I1203 12:57:13.102418 4986 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T12:57:13Z","lastTransitionTime":"2025-12-03T12:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 12:57:13 crc kubenswrapper[4986]: I1203 12:57:13.156932 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-7kjpl"] Dec 03 12:57:13 crc kubenswrapper[4986]: I1203 12:57:13.157362 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7kjpl" Dec 03 12:57:13 crc kubenswrapper[4986]: I1203 12:57:13.160705 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 03 12:57:13 crc kubenswrapper[4986]: I1203 12:57:13.160813 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 03 12:57:13 crc kubenswrapper[4986]: I1203 12:57:13.161681 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 03 12:57:13 crc kubenswrapper[4986]: I1203 12:57:13.162400 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 03 12:57:13 crc kubenswrapper[4986]: I1203 12:57:13.186260 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-fszqj" podStartSLOduration=72.185565301 podStartE2EDuration="1m12.185565301s" podCreationTimestamp="2025-12-03 12:56:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:57:13.185350425 +0000 UTC m=+92.651781616" watchObservedRunningTime="2025-12-03 12:57:13.185565301 +0000 UTC m=+92.651996492" Dec 03 12:57:13 crc kubenswrapper[4986]: I1203 12:57:13.186472 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=19.186464633 podStartE2EDuration="19.186464633s" podCreationTimestamp="2025-12-03 12:56:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:57:13.172055802 +0000 UTC m=+92.638486993" watchObservedRunningTime="2025-12-03 12:57:13.186464633 +0000 UTC 
m=+92.652895824" Dec 03 12:57:13 crc kubenswrapper[4986]: I1203 12:57:13.223703 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-px97g" podStartSLOduration=71.223681096 podStartE2EDuration="1m11.223681096s" podCreationTimestamp="2025-12-03 12:56:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:57:13.20349482 +0000 UTC m=+92.669926011" watchObservedRunningTime="2025-12-03 12:57:13.223681096 +0000 UTC m=+92.690112287" Dec 03 12:57:13 crc kubenswrapper[4986]: I1203 12:57:13.256705 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f960f45-2b80-46b4-a304-6dd6f0be0e9d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-7kjpl\" (UID: \"8f960f45-2b80-46b4-a304-6dd6f0be0e9d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7kjpl" Dec 03 12:57:13 crc kubenswrapper[4986]: I1203 12:57:13.256780 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f960f45-2b80-46b4-a304-6dd6f0be0e9d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-7kjpl\" (UID: \"8f960f45-2b80-46b4-a304-6dd6f0be0e9d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7kjpl" Dec 03 12:57:13 crc kubenswrapper[4986]: I1203 12:57:13.256812 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8f960f45-2b80-46b4-a304-6dd6f0be0e9d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-7kjpl\" (UID: \"8f960f45-2b80-46b4-a304-6dd6f0be0e9d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7kjpl" Dec 03 12:57:13 crc kubenswrapper[4986]: I1203 12:57:13.256939 4986 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8f960f45-2b80-46b4-a304-6dd6f0be0e9d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-7kjpl\" (UID: \"8f960f45-2b80-46b4-a304-6dd6f0be0e9d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7kjpl" Dec 03 12:57:13 crc kubenswrapper[4986]: I1203 12:57:13.256989 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8f960f45-2b80-46b4-a304-6dd6f0be0e9d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-7kjpl\" (UID: \"8f960f45-2b80-46b4-a304-6dd6f0be0e9d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7kjpl" Dec 03 12:57:13 crc kubenswrapper[4986]: I1203 12:57:13.273179 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2p8xn" podStartSLOduration=70.273156847 podStartE2EDuration="1m10.273156847s" podCreationTimestamp="2025-12-03 12:56:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:57:13.245790251 +0000 UTC m=+92.712221452" watchObservedRunningTime="2025-12-03 12:57:13.273156847 +0000 UTC m=+92.739588048" Dec 03 12:57:13 crc kubenswrapper[4986]: I1203 12:57:13.305475 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=66.305455486 podStartE2EDuration="1m6.305455486s" podCreationTimestamp="2025-12-03 12:56:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:57:13.274241454 +0000 UTC m=+92.740672645" watchObservedRunningTime="2025-12-03 
12:57:13.305455486 +0000 UTC m=+92.771886677" Dec 03 12:57:13 crc kubenswrapper[4986]: I1203 12:57:13.358203 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8f960f45-2b80-46b4-a304-6dd6f0be0e9d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-7kjpl\" (UID: \"8f960f45-2b80-46b4-a304-6dd6f0be0e9d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7kjpl" Dec 03 12:57:13 crc kubenswrapper[4986]: I1203 12:57:13.358257 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8f960f45-2b80-46b4-a304-6dd6f0be0e9d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-7kjpl\" (UID: \"8f960f45-2b80-46b4-a304-6dd6f0be0e9d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7kjpl" Dec 03 12:57:13 crc kubenswrapper[4986]: I1203 12:57:13.358320 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f960f45-2b80-46b4-a304-6dd6f0be0e9d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-7kjpl\" (UID: \"8f960f45-2b80-46b4-a304-6dd6f0be0e9d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7kjpl" Dec 03 12:57:13 crc kubenswrapper[4986]: I1203 12:57:13.358344 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f960f45-2b80-46b4-a304-6dd6f0be0e9d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-7kjpl\" (UID: \"8f960f45-2b80-46b4-a304-6dd6f0be0e9d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7kjpl" Dec 03 12:57:13 crc kubenswrapper[4986]: I1203 12:57:13.358368 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/8f960f45-2b80-46b4-a304-6dd6f0be0e9d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-7kjpl\" (UID: \"8f960f45-2b80-46b4-a304-6dd6f0be0e9d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7kjpl" Dec 03 12:57:13 crc kubenswrapper[4986]: I1203 12:57:13.358368 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8f960f45-2b80-46b4-a304-6dd6f0be0e9d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-7kjpl\" (UID: \"8f960f45-2b80-46b4-a304-6dd6f0be0e9d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7kjpl" Dec 03 12:57:13 crc kubenswrapper[4986]: I1203 12:57:13.358546 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8f960f45-2b80-46b4-a304-6dd6f0be0e9d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-7kjpl\" (UID: \"8f960f45-2b80-46b4-a304-6dd6f0be0e9d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7kjpl" Dec 03 12:57:13 crc kubenswrapper[4986]: I1203 12:57:13.359130 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8f960f45-2b80-46b4-a304-6dd6f0be0e9d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-7kjpl\" (UID: \"8f960f45-2b80-46b4-a304-6dd6f0be0e9d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7kjpl" Dec 03 12:57:13 crc kubenswrapper[4986]: I1203 12:57:13.363253 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-rkgd9" podStartSLOduration=71.363235665 podStartE2EDuration="1m11.363235665s" podCreationTimestamp="2025-12-03 12:56:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:57:13.34508864 
+0000 UTC m=+92.811519851" watchObservedRunningTime="2025-12-03 12:57:13.363235665 +0000 UTC m=+92.829666856" Dec 03 12:57:13 crc kubenswrapper[4986]: I1203 12:57:13.371206 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f960f45-2b80-46b4-a304-6dd6f0be0e9d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-7kjpl\" (UID: \"8f960f45-2b80-46b4-a304-6dd6f0be0e9d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7kjpl" Dec 03 12:57:13 crc kubenswrapper[4986]: I1203 12:57:13.379724 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f960f45-2b80-46b4-a304-6dd6f0be0e9d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-7kjpl\" (UID: \"8f960f45-2b80-46b4-a304-6dd6f0be0e9d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7kjpl" Dec 03 12:57:13 crc kubenswrapper[4986]: I1203 12:57:13.387821 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=70.387798171 podStartE2EDuration="1m10.387798171s" podCreationTimestamp="2025-12-03 12:56:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:57:13.386074338 +0000 UTC m=+92.852505549" watchObservedRunningTime="2025-12-03 12:57:13.387798171 +0000 UTC m=+92.854229382" Dec 03 12:57:13 crc kubenswrapper[4986]: I1203 12:57:13.436828 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=70.43680372 podStartE2EDuration="1m10.43680372s" podCreationTimestamp="2025-12-03 12:56:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:57:13.41406343 +0000 UTC m=+92.880494621" 
watchObservedRunningTime="2025-12-03 12:57:13.43680372 +0000 UTC m=+92.903234911" Dec 03 12:57:13 crc kubenswrapper[4986]: I1203 12:57:13.469543 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podStartSLOduration=71.46952234 podStartE2EDuration="1m11.46952234s" podCreationTimestamp="2025-12-03 12:56:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:57:13.46951469 +0000 UTC m=+92.935945881" watchObservedRunningTime="2025-12-03 12:57:13.46952234 +0000 UTC m=+92.935953531" Dec 03 12:57:13 crc kubenswrapper[4986]: I1203 12:57:13.471897 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7kjpl" Dec 03 12:57:13 crc kubenswrapper[4986]: I1203 12:57:13.483834 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=42.483816879 podStartE2EDuration="42.483816879s" podCreationTimestamp="2025-12-03 12:56:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:57:13.483202213 +0000 UTC m=+92.949633404" watchObservedRunningTime="2025-12-03 12:57:13.483816879 +0000 UTC m=+92.950248070" Dec 03 12:57:13 crc kubenswrapper[4986]: I1203 12:57:13.532526 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-8lcrn" podStartSLOduration=71.532507799 podStartE2EDuration="1m11.532507799s" podCreationTimestamp="2025-12-03 12:56:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:57:13.531639158 +0000 UTC m=+92.998070359" watchObservedRunningTime="2025-12-03 
12:57:13.532507799 +0000 UTC m=+92.998938990" Dec 03 12:57:13 crc kubenswrapper[4986]: I1203 12:57:13.942771 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:57:13 crc kubenswrapper[4986]: I1203 12:57:13.942808 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:57:13 crc kubenswrapper[4986]: I1203 12:57:13.942866 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:57:13 crc kubenswrapper[4986]: I1203 12:57:13.942886 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:57:13 crc kubenswrapper[4986]: E1203 12:57:13.943008 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl2mt" podUID="ea24f625-ded4-4e37-a23b-f96fe691b0dd" Dec 03 12:57:13 crc kubenswrapper[4986]: E1203 12:57:13.943272 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:57:13 crc kubenswrapper[4986]: E1203 12:57:13.943458 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:57:13 crc kubenswrapper[4986]: E1203 12:57:13.943671 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:57:14 crc kubenswrapper[4986]: I1203 12:57:14.442909 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7kjpl" event={"ID":"8f960f45-2b80-46b4-a304-6dd6f0be0e9d","Type":"ContainerStarted","Data":"b12c5a9d9de61a6d03a9304cff1e11013c5f0655c9231fde183e404b30dccded"} Dec 03 12:57:14 crc kubenswrapper[4986]: I1203 12:57:14.442959 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7kjpl" event={"ID":"8f960f45-2b80-46b4-a304-6dd6f0be0e9d","Type":"ContainerStarted","Data":"f54a84767dc97d31f7c74a52c9ba88b9f548209e07d512791f6966f3d3e3f506"} Dec 03 12:57:14 crc kubenswrapper[4986]: I1203 12:57:14.456511 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7kjpl" 
podStartSLOduration=72.456492637 podStartE2EDuration="1m12.456492637s" podCreationTimestamp="2025-12-03 12:56:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:57:14.45581152 +0000 UTC m=+93.922242731" watchObservedRunningTime="2025-12-03 12:57:14.456492637 +0000 UTC m=+93.922923818" Dec 03 12:57:15 crc kubenswrapper[4986]: I1203 12:57:15.943310 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:57:15 crc kubenswrapper[4986]: I1203 12:57:15.943335 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:57:15 crc kubenswrapper[4986]: I1203 12:57:15.943354 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:57:15 crc kubenswrapper[4986]: I1203 12:57:15.943391 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:57:15 crc kubenswrapper[4986]: E1203 12:57:15.943878 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:57:15 crc kubenswrapper[4986]: E1203 12:57:15.943994 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:57:15 crc kubenswrapper[4986]: E1203 12:57:15.944100 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:57:15 crc kubenswrapper[4986]: E1203 12:57:15.944179 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl2mt" podUID="ea24f625-ded4-4e37-a23b-f96fe691b0dd" Dec 03 12:57:17 crc kubenswrapper[4986]: I1203 12:57:17.942364 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:57:17 crc kubenswrapper[4986]: I1203 12:57:17.942391 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:57:17 crc kubenswrapper[4986]: I1203 12:57:17.942433 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:57:17 crc kubenswrapper[4986]: E1203 12:57:17.942518 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:57:17 crc kubenswrapper[4986]: E1203 12:57:17.942680 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl2mt" podUID="ea24f625-ded4-4e37-a23b-f96fe691b0dd" Dec 03 12:57:17 crc kubenswrapper[4986]: E1203 12:57:17.942743 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:57:17 crc kubenswrapper[4986]: I1203 12:57:17.942390 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:57:17 crc kubenswrapper[4986]: E1203 12:57:17.943446 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:57:19 crc kubenswrapper[4986]: I1203 12:57:19.926035 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ea24f625-ded4-4e37-a23b-f96fe691b0dd-metrics-certs\") pod \"network-metrics-daemon-rl2mt\" (UID: \"ea24f625-ded4-4e37-a23b-f96fe691b0dd\") " pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:57:19 crc kubenswrapper[4986]: E1203 12:57:19.926237 4986 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 12:57:19 crc kubenswrapper[4986]: E1203 12:57:19.926374 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea24f625-ded4-4e37-a23b-f96fe691b0dd-metrics-certs podName:ea24f625-ded4-4e37-a23b-f96fe691b0dd nodeName:}" failed. No retries permitted until 2025-12-03 12:58:23.926350846 +0000 UTC m=+163.392782107 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ea24f625-ded4-4e37-a23b-f96fe691b0dd-metrics-certs") pod "network-metrics-daemon-rl2mt" (UID: "ea24f625-ded4-4e37-a23b-f96fe691b0dd") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 12:57:19 crc kubenswrapper[4986]: I1203 12:57:19.942823 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:57:19 crc kubenswrapper[4986]: I1203 12:57:19.942890 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:57:19 crc kubenswrapper[4986]: I1203 12:57:19.942912 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:57:19 crc kubenswrapper[4986]: I1203 12:57:19.942857 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:57:19 crc kubenswrapper[4986]: E1203 12:57:19.942967 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:57:19 crc kubenswrapper[4986]: E1203 12:57:19.943123 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:57:19 crc kubenswrapper[4986]: E1203 12:57:19.943201 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl2mt" podUID="ea24f625-ded4-4e37-a23b-f96fe691b0dd" Dec 03 12:57:19 crc kubenswrapper[4986]: E1203 12:57:19.943253 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:57:21 crc kubenswrapper[4986]: I1203 12:57:21.942855 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:57:21 crc kubenswrapper[4986]: I1203 12:57:21.942914 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:57:21 crc kubenswrapper[4986]: I1203 12:57:21.942975 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:57:21 crc kubenswrapper[4986]: E1203 12:57:21.943088 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:57:21 crc kubenswrapper[4986]: I1203 12:57:21.943217 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:57:21 crc kubenswrapper[4986]: E1203 12:57:21.943450 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:57:21 crc kubenswrapper[4986]: E1203 12:57:21.943600 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl2mt" podUID="ea24f625-ded4-4e37-a23b-f96fe691b0dd" Dec 03 12:57:21 crc kubenswrapper[4986]: E1203 12:57:21.943762 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:57:22 crc kubenswrapper[4986]: I1203 12:57:22.944677 4986 scope.go:117] "RemoveContainer" containerID="70229ee3e0b6f1c7ad17628a8b9b3b456957d5cee34dc20aecb98f45140d1e88" Dec 03 12:57:22 crc kubenswrapper[4986]: E1203 12:57:22.945005 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-9nf52_openshift-ovn-kubernetes(d3a45156-295b-4093-80e7-2059f81ddbd7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" podUID="d3a45156-295b-4093-80e7-2059f81ddbd7" Dec 03 12:57:23 crc kubenswrapper[4986]: I1203 12:57:23.943081 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:57:23 crc kubenswrapper[4986]: E1203 12:57:23.943230 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:57:23 crc kubenswrapper[4986]: I1203 12:57:23.943585 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:57:23 crc kubenswrapper[4986]: I1203 12:57:23.943705 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:57:23 crc kubenswrapper[4986]: I1203 12:57:23.943749 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:57:23 crc kubenswrapper[4986]: E1203 12:57:23.943958 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:57:23 crc kubenswrapper[4986]: E1203 12:57:23.944017 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl2mt" podUID="ea24f625-ded4-4e37-a23b-f96fe691b0dd" Dec 03 12:57:23 crc kubenswrapper[4986]: E1203 12:57:23.944069 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:57:25 crc kubenswrapper[4986]: I1203 12:57:25.943264 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:57:25 crc kubenswrapper[4986]: I1203 12:57:25.943367 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:57:25 crc kubenswrapper[4986]: I1203 12:57:25.943264 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:57:25 crc kubenswrapper[4986]: E1203 12:57:25.943440 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:57:25 crc kubenswrapper[4986]: E1203 12:57:25.943514 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:57:25 crc kubenswrapper[4986]: I1203 12:57:25.943323 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:57:25 crc kubenswrapper[4986]: E1203 12:57:25.943607 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:57:25 crc kubenswrapper[4986]: E1203 12:57:25.943693 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl2mt" podUID="ea24f625-ded4-4e37-a23b-f96fe691b0dd" Dec 03 12:57:27 crc kubenswrapper[4986]: I1203 12:57:27.942422 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:57:27 crc kubenswrapper[4986]: I1203 12:57:27.942491 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:57:27 crc kubenswrapper[4986]: I1203 12:57:27.942533 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:57:27 crc kubenswrapper[4986]: I1203 12:57:27.942598 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:57:27 crc kubenswrapper[4986]: E1203 12:57:27.942643 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:57:27 crc kubenswrapper[4986]: E1203 12:57:27.942738 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl2mt" podUID="ea24f625-ded4-4e37-a23b-f96fe691b0dd" Dec 03 12:57:27 crc kubenswrapper[4986]: E1203 12:57:27.942828 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:57:27 crc kubenswrapper[4986]: E1203 12:57:27.942900 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:57:29 crc kubenswrapper[4986]: I1203 12:57:29.943177 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:57:29 crc kubenswrapper[4986]: I1203 12:57:29.943200 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:57:29 crc kubenswrapper[4986]: I1203 12:57:29.943248 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:57:29 crc kubenswrapper[4986]: I1203 12:57:29.943251 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:57:29 crc kubenswrapper[4986]: E1203 12:57:29.943354 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:57:29 crc kubenswrapper[4986]: E1203 12:57:29.943465 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:57:29 crc kubenswrapper[4986]: E1203 12:57:29.943581 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rl2mt" podUID="ea24f625-ded4-4e37-a23b-f96fe691b0dd" Dec 03 12:57:29 crc kubenswrapper[4986]: E1203 12:57:29.943814 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:57:31 crc kubenswrapper[4986]: I1203 12:57:31.942320 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:57:31 crc kubenswrapper[4986]: I1203 12:57:31.942363 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:57:31 crc kubenswrapper[4986]: I1203 12:57:31.942384 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:57:31 crc kubenswrapper[4986]: I1203 12:57:31.942455 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:57:31 crc kubenswrapper[4986]: E1203 12:57:31.942568 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:57:31 crc kubenswrapper[4986]: E1203 12:57:31.942909 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:57:31 crc kubenswrapper[4986]: E1203 12:57:31.942964 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:57:31 crc kubenswrapper[4986]: E1203 12:57:31.943039 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl2mt" podUID="ea24f625-ded4-4e37-a23b-f96fe691b0dd" Dec 03 12:57:33 crc kubenswrapper[4986]: I1203 12:57:33.942671 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:57:33 crc kubenswrapper[4986]: E1203 12:57:33.942875 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:57:33 crc kubenswrapper[4986]: I1203 12:57:33.942690 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:57:33 crc kubenswrapper[4986]: I1203 12:57:33.942689 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:57:33 crc kubenswrapper[4986]: E1203 12:57:33.942981 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:57:33 crc kubenswrapper[4986]: I1203 12:57:33.942708 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:57:33 crc kubenswrapper[4986]: E1203 12:57:33.943345 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:57:33 crc kubenswrapper[4986]: E1203 12:57:33.943391 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl2mt" podUID="ea24f625-ded4-4e37-a23b-f96fe691b0dd" Dec 03 12:57:35 crc kubenswrapper[4986]: I1203 12:57:35.943067 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:57:35 crc kubenswrapper[4986]: I1203 12:57:35.943767 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:57:35 crc kubenswrapper[4986]: I1203 12:57:35.943875 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:57:35 crc kubenswrapper[4986]: E1203 12:57:35.944016 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:57:35 crc kubenswrapper[4986]: E1203 12:57:35.944463 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:57:35 crc kubenswrapper[4986]: E1203 12:57:35.944548 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl2mt" podUID="ea24f625-ded4-4e37-a23b-f96fe691b0dd" Dec 03 12:57:35 crc kubenswrapper[4986]: I1203 12:57:35.944587 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:57:35 crc kubenswrapper[4986]: E1203 12:57:35.944791 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:57:36 crc kubenswrapper[4986]: I1203 12:57:36.943837 4986 scope.go:117] "RemoveContainer" containerID="70229ee3e0b6f1c7ad17628a8b9b3b456957d5cee34dc20aecb98f45140d1e88" Dec 03 12:57:36 crc kubenswrapper[4986]: E1203 12:57:36.943999 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-9nf52_openshift-ovn-kubernetes(d3a45156-295b-4093-80e7-2059f81ddbd7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" podUID="d3a45156-295b-4093-80e7-2059f81ddbd7" Dec 03 12:57:37 crc kubenswrapper[4986]: I1203 12:57:37.548629 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-px97g_97196b6d-75cc-4de4-8805-f9ce3fbd4230/kube-multus/1.log" Dec 03 12:57:37 crc kubenswrapper[4986]: I1203 12:57:37.549515 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-px97g_97196b6d-75cc-4de4-8805-f9ce3fbd4230/kube-multus/0.log" Dec 03 12:57:37 crc kubenswrapper[4986]: I1203 12:57:37.549600 4986 generic.go:334] "Generic (PLEG): container finished" podID="97196b6d-75cc-4de4-8805-f9ce3fbd4230" containerID="1dd2dabaec03ca7bc575d3b31582c870f9094270ef72a40fd7425e2cef5b54e0" exitCode=1 Dec 03 12:57:37 crc kubenswrapper[4986]: I1203 12:57:37.549662 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-px97g" event={"ID":"97196b6d-75cc-4de4-8805-f9ce3fbd4230","Type":"ContainerDied","Data":"1dd2dabaec03ca7bc575d3b31582c870f9094270ef72a40fd7425e2cef5b54e0"} Dec 03 12:57:37 crc kubenswrapper[4986]: I1203 12:57:37.549751 4986 scope.go:117] "RemoveContainer" containerID="c28bf1aff27ba408262b5fc561e084ff3813ce95a7c14b57fea24d409eb33bea" Dec 03 12:57:37 crc kubenswrapper[4986]: I1203 12:57:37.550714 4986 scope.go:117] 
"RemoveContainer" containerID="1dd2dabaec03ca7bc575d3b31582c870f9094270ef72a40fd7425e2cef5b54e0" Dec 03 12:57:37 crc kubenswrapper[4986]: E1203 12:57:37.551025 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-px97g_openshift-multus(97196b6d-75cc-4de4-8805-f9ce3fbd4230)\"" pod="openshift-multus/multus-px97g" podUID="97196b6d-75cc-4de4-8805-f9ce3fbd4230" Dec 03 12:57:37 crc kubenswrapper[4986]: I1203 12:57:37.942553 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:57:37 crc kubenswrapper[4986]: I1203 12:57:37.942594 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:57:37 crc kubenswrapper[4986]: E1203 12:57:37.942675 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:57:37 crc kubenswrapper[4986]: I1203 12:57:37.942684 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:57:37 crc kubenswrapper[4986]: I1203 12:57:37.942711 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:57:37 crc kubenswrapper[4986]: E1203 12:57:37.942750 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:57:37 crc kubenswrapper[4986]: E1203 12:57:37.942810 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:57:37 crc kubenswrapper[4986]: E1203 12:57:37.942874 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl2mt" podUID="ea24f625-ded4-4e37-a23b-f96fe691b0dd" Dec 03 12:57:38 crc kubenswrapper[4986]: I1203 12:57:38.554470 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-px97g_97196b6d-75cc-4de4-8805-f9ce3fbd4230/kube-multus/1.log" Dec 03 12:57:39 crc kubenswrapper[4986]: I1203 12:57:39.942796 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:57:39 crc kubenswrapper[4986]: I1203 12:57:39.942974 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:57:39 crc kubenswrapper[4986]: E1203 12:57:39.943589 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl2mt" podUID="ea24f625-ded4-4e37-a23b-f96fe691b0dd" Dec 03 12:57:39 crc kubenswrapper[4986]: I1203 12:57:39.943052 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:57:39 crc kubenswrapper[4986]: E1203 12:57:39.943637 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:57:39 crc kubenswrapper[4986]: I1203 12:57:39.942980 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:57:39 crc kubenswrapper[4986]: E1203 12:57:39.943698 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:57:39 crc kubenswrapper[4986]: E1203 12:57:39.943824 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:57:40 crc kubenswrapper[4986]: E1203 12:57:40.901111 4986 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Dec 03 12:57:41 crc kubenswrapper[4986]: E1203 12:57:41.563171 4986 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 03 12:57:41 crc kubenswrapper[4986]: I1203 12:57:41.943199 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:57:41 crc kubenswrapper[4986]: I1203 12:57:41.943257 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:57:41 crc kubenswrapper[4986]: I1203 12:57:41.943371 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:57:41 crc kubenswrapper[4986]: E1203 12:57:41.943490 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:57:41 crc kubenswrapper[4986]: I1203 12:57:41.943706 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:57:41 crc kubenswrapper[4986]: E1203 12:57:41.943788 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl2mt" podUID="ea24f625-ded4-4e37-a23b-f96fe691b0dd" Dec 03 12:57:41 crc kubenswrapper[4986]: E1203 12:57:41.943933 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:57:41 crc kubenswrapper[4986]: E1203 12:57:41.944192 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:57:43 crc kubenswrapper[4986]: I1203 12:57:43.942587 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:57:43 crc kubenswrapper[4986]: E1203 12:57:43.942739 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl2mt" podUID="ea24f625-ded4-4e37-a23b-f96fe691b0dd" Dec 03 12:57:43 crc kubenswrapper[4986]: I1203 12:57:43.942609 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:57:43 crc kubenswrapper[4986]: E1203 12:57:43.942819 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:57:43 crc kubenswrapper[4986]: I1203 12:57:43.942609 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:57:43 crc kubenswrapper[4986]: E1203 12:57:43.942883 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:57:43 crc kubenswrapper[4986]: I1203 12:57:43.942604 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:57:43 crc kubenswrapper[4986]: E1203 12:57:43.942954 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:57:45 crc kubenswrapper[4986]: I1203 12:57:45.943333 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:57:45 crc kubenswrapper[4986]: I1203 12:57:45.943412 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:57:45 crc kubenswrapper[4986]: E1203 12:57:45.943469 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:57:45 crc kubenswrapper[4986]: I1203 12:57:45.943427 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:57:45 crc kubenswrapper[4986]: E1203 12:57:45.943539 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:57:45 crc kubenswrapper[4986]: E1203 12:57:45.943658 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:57:45 crc kubenswrapper[4986]: I1203 12:57:45.944112 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:57:45 crc kubenswrapper[4986]: E1203 12:57:45.944342 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl2mt" podUID="ea24f625-ded4-4e37-a23b-f96fe691b0dd" Dec 03 12:57:46 crc kubenswrapper[4986]: E1203 12:57:46.564846 4986 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 03 12:57:47 crc kubenswrapper[4986]: I1203 12:57:47.942558 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:57:47 crc kubenswrapper[4986]: I1203 12:57:47.942575 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:57:47 crc kubenswrapper[4986]: I1203 12:57:47.942614 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:57:47 crc kubenswrapper[4986]: E1203 12:57:47.943519 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:57:47 crc kubenswrapper[4986]: E1203 12:57:47.943341 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:57:47 crc kubenswrapper[4986]: I1203 12:57:47.942620 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:57:47 crc kubenswrapper[4986]: E1203 12:57:47.943603 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl2mt" podUID="ea24f625-ded4-4e37-a23b-f96fe691b0dd" Dec 03 12:57:47 crc kubenswrapper[4986]: E1203 12:57:47.943724 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:57:49 crc kubenswrapper[4986]: I1203 12:57:49.942717 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:57:49 crc kubenswrapper[4986]: I1203 12:57:49.942728 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:57:49 crc kubenswrapper[4986]: I1203 12:57:49.942717 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:57:49 crc kubenswrapper[4986]: I1203 12:57:49.942868 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:57:49 crc kubenswrapper[4986]: E1203 12:57:49.942964 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:57:49 crc kubenswrapper[4986]: E1203 12:57:49.943142 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:57:49 crc kubenswrapper[4986]: E1203 12:57:49.943200 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl2mt" podUID="ea24f625-ded4-4e37-a23b-f96fe691b0dd" Dec 03 12:57:49 crc kubenswrapper[4986]: E1203 12:57:49.943250 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:57:51 crc kubenswrapper[4986]: E1203 12:57:51.565779 4986 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 03 12:57:51 crc kubenswrapper[4986]: I1203 12:57:51.942734 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:57:51 crc kubenswrapper[4986]: I1203 12:57:51.942929 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:57:51 crc kubenswrapper[4986]: E1203 12:57:51.942932 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:57:51 crc kubenswrapper[4986]: I1203 12:57:51.943026 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:57:51 crc kubenswrapper[4986]: E1203 12:57:51.943119 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:57:51 crc kubenswrapper[4986]: I1203 12:57:51.943274 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:57:51 crc kubenswrapper[4986]: E1203 12:57:51.943640 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:57:51 crc kubenswrapper[4986]: E1203 12:57:51.943892 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rl2mt" podUID="ea24f625-ded4-4e37-a23b-f96fe691b0dd" Dec 03 12:57:51 crc kubenswrapper[4986]: I1203 12:57:51.944157 4986 scope.go:117] "RemoveContainer" containerID="1dd2dabaec03ca7bc575d3b31582c870f9094270ef72a40fd7425e2cef5b54e0" Dec 03 12:57:51 crc kubenswrapper[4986]: I1203 12:57:51.944538 4986 scope.go:117] "RemoveContainer" containerID="70229ee3e0b6f1c7ad17628a8b9b3b456957d5cee34dc20aecb98f45140d1e88" Dec 03 12:57:53 crc kubenswrapper[4986]: I1203 12:57:53.320818 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-rl2mt"] Dec 03 12:57:53 crc kubenswrapper[4986]: I1203 12:57:53.321229 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:57:53 crc kubenswrapper[4986]: E1203 12:57:53.321368 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rl2mt" podUID="ea24f625-ded4-4e37-a23b-f96fe691b0dd" Dec 03 12:57:53 crc kubenswrapper[4986]: I1203 12:57:53.610916 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-px97g_97196b6d-75cc-4de4-8805-f9ce3fbd4230/kube-multus/1.log" Dec 03 12:57:53 crc kubenswrapper[4986]: I1203 12:57:53.610974 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-px97g" event={"ID":"97196b6d-75cc-4de4-8805-f9ce3fbd4230","Type":"ContainerStarted","Data":"f13dd00d8e806f3c535437f4f032413b30df5ad03e14969e338d1bd53faab5be"} Dec 03 12:57:53 crc kubenswrapper[4986]: I1203 12:57:53.613443 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9nf52_d3a45156-295b-4093-80e7-2059f81ddbd7/ovnkube-controller/3.log" Dec 03 12:57:53 crc kubenswrapper[4986]: I1203 12:57:53.616463 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" event={"ID":"d3a45156-295b-4093-80e7-2059f81ddbd7","Type":"ContainerStarted","Data":"0284afd2c8d9ae3edc84d051a83f44827c82b51ce989aff5f0442eb2f8fc0af7"} Dec 03 12:57:53 crc kubenswrapper[4986]: I1203 12:57:53.617130 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:57:53 crc kubenswrapper[4986]: I1203 12:57:53.651414 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" podStartSLOduration=111.651397219 podStartE2EDuration="1m51.651397219s" podCreationTimestamp="2025-12-03 12:56:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:57:53.649840845 +0000 UTC m=+133.116272036" watchObservedRunningTime="2025-12-03 12:57:53.651397219 +0000 UTC m=+133.117828410" Dec 03 12:57:53 crc kubenswrapper[4986]: I1203 12:57:53.943123 4986 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:57:53 crc kubenswrapper[4986]: I1203 12:57:53.943181 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:57:53 crc kubenswrapper[4986]: E1203 12:57:53.943266 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:57:53 crc kubenswrapper[4986]: I1203 12:57:53.943126 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:57:53 crc kubenswrapper[4986]: E1203 12:57:53.943414 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:57:53 crc kubenswrapper[4986]: E1203 12:57:53.943467 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:57:54 crc kubenswrapper[4986]: I1203 12:57:54.943037 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:57:54 crc kubenswrapper[4986]: E1203 12:57:54.943177 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl2mt" podUID="ea24f625-ded4-4e37-a23b-f96fe691b0dd" Dec 03 12:57:55 crc kubenswrapper[4986]: I1203 12:57:55.943434 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:57:55 crc kubenswrapper[4986]: I1203 12:57:55.943494 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:57:55 crc kubenswrapper[4986]: I1203 12:57:55.943542 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:57:55 crc kubenswrapper[4986]: E1203 12:57:55.943690 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:57:55 crc kubenswrapper[4986]: E1203 12:57:55.943875 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:57:55 crc kubenswrapper[4986]: E1203 12:57:55.944268 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:57:56 crc kubenswrapper[4986]: E1203 12:57:56.567015 4986 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 03 12:57:56 crc kubenswrapper[4986]: I1203 12:57:56.943455 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:57:56 crc kubenswrapper[4986]: E1203 12:57:56.943593 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rl2mt" podUID="ea24f625-ded4-4e37-a23b-f96fe691b0dd" Dec 03 12:57:57 crc kubenswrapper[4986]: I1203 12:57:57.942496 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:57:57 crc kubenswrapper[4986]: I1203 12:57:57.942551 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:57:57 crc kubenswrapper[4986]: I1203 12:57:57.942585 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:57:57 crc kubenswrapper[4986]: E1203 12:57:57.942722 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:57:57 crc kubenswrapper[4986]: E1203 12:57:57.942925 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:57:57 crc kubenswrapper[4986]: E1203 12:57:57.943113 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:57:58 crc kubenswrapper[4986]: I1203 12:57:58.943337 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:57:58 crc kubenswrapper[4986]: E1203 12:57:58.943608 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl2mt" podUID="ea24f625-ded4-4e37-a23b-f96fe691b0dd" Dec 03 12:57:59 crc kubenswrapper[4986]: I1203 12:57:59.942252 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:57:59 crc kubenswrapper[4986]: I1203 12:57:59.942379 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:57:59 crc kubenswrapper[4986]: I1203 12:57:59.942322 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:57:59 crc kubenswrapper[4986]: E1203 12:57:59.942491 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:57:59 crc kubenswrapper[4986]: E1203 12:57:59.942629 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:57:59 crc kubenswrapper[4986]: E1203 12:57:59.942812 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:58:00 crc kubenswrapper[4986]: I1203 12:58:00.942685 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:58:00 crc kubenswrapper[4986]: E1203 12:58:00.944976 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl2mt" podUID="ea24f625-ded4-4e37-a23b-f96fe691b0dd" Dec 03 12:58:01 crc kubenswrapper[4986]: E1203 12:58:01.568650 4986 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 03 12:58:01 crc kubenswrapper[4986]: I1203 12:58:01.942980 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:58:01 crc kubenswrapper[4986]: I1203 12:58:01.943005 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:58:01 crc kubenswrapper[4986]: E1203 12:58:01.943540 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:58:01 crc kubenswrapper[4986]: E1203 12:58:01.943701 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:58:01 crc kubenswrapper[4986]: I1203 12:58:01.943063 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:58:01 crc kubenswrapper[4986]: E1203 12:58:01.943813 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:58:02 crc kubenswrapper[4986]: I1203 12:58:02.943979 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:58:02 crc kubenswrapper[4986]: E1203 12:58:02.944231 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rl2mt" podUID="ea24f625-ded4-4e37-a23b-f96fe691b0dd" Dec 03 12:58:03 crc kubenswrapper[4986]: I1203 12:58:03.943009 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:58:03 crc kubenswrapper[4986]: I1203 12:58:03.943053 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:58:03 crc kubenswrapper[4986]: I1203 12:58:03.943122 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:58:03 crc kubenswrapper[4986]: E1203 12:58:03.943160 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:58:03 crc kubenswrapper[4986]: E1203 12:58:03.943417 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:58:03 crc kubenswrapper[4986]: E1203 12:58:03.943543 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:58:04 crc kubenswrapper[4986]: I1203 12:58:04.942447 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:58:04 crc kubenswrapper[4986]: E1203 12:58:04.942641 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl2mt" podUID="ea24f625-ded4-4e37-a23b-f96fe691b0dd" Dec 03 12:58:05 crc kubenswrapper[4986]: I1203 12:58:05.942824 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:58:05 crc kubenswrapper[4986]: I1203 12:58:05.942925 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:58:05 crc kubenswrapper[4986]: E1203 12:58:05.943056 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 12:58:05 crc kubenswrapper[4986]: I1203 12:58:05.943375 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:58:05 crc kubenswrapper[4986]: E1203 12:58:05.943514 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 12:58:05 crc kubenswrapper[4986]: E1203 12:58:05.943926 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 12:58:06 crc kubenswrapper[4986]: I1203 12:58:06.942709 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl2mt" Dec 03 12:58:06 crc kubenswrapper[4986]: I1203 12:58:06.945092 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 03 12:58:06 crc kubenswrapper[4986]: I1203 12:58:06.951255 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 03 12:58:07 crc kubenswrapper[4986]: I1203 12:58:07.943200 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:58:07 crc kubenswrapper[4986]: I1203 12:58:07.943211 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:58:07 crc kubenswrapper[4986]: I1203 12:58:07.943354 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:58:07 crc kubenswrapper[4986]: I1203 12:58:07.945533 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 03 12:58:07 crc kubenswrapper[4986]: I1203 12:58:07.945924 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 03 12:58:07 crc kubenswrapper[4986]: I1203 12:58:07.946362 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 03 12:58:07 crc kubenswrapper[4986]: I1203 12:58:07.947626 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 03 12:58:09 crc kubenswrapper[4986]: I1203 12:58:09.857160 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:58:09 crc kubenswrapper[4986]: E1203 12:58:09.857324 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-03 13:00:11.857267623 +0000 UTC m=+271.323698824 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:09 crc kubenswrapper[4986]: I1203 12:58:09.857452 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:58:09 crc kubenswrapper[4986]: I1203 12:58:09.857570 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:58:09 crc kubenswrapper[4986]: I1203 12:58:09.858744 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:58:09 crc kubenswrapper[4986]: I1203 12:58:09.867938 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:58:09 crc kubenswrapper[4986]: I1203 12:58:09.959111 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:58:09 crc kubenswrapper[4986]: I1203 12:58:09.959183 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:58:09 crc kubenswrapper[4986]: I1203 12:58:09.963199 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:58:09 crc kubenswrapper[4986]: I1203 12:58:09.963808 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:58:10 crc kubenswrapper[4986]: I1203 12:58:10.065887 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 12:58:10 crc kubenswrapper[4986]: I1203 12:58:10.083772 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:58:10 crc kubenswrapper[4986]: I1203 12:58:10.102711 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 12:58:12 crc kubenswrapper[4986]: W1203 12:58:12.755335 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-7bb14da3bab1256aff77bb54b9042866be2e3f7c19850e516b0a4363437c4bd4 WatchSource:0}: Error finding container 7bb14da3bab1256aff77bb54b9042866be2e3f7c19850e516b0a4363437c4bd4: Status 404 returned error can't find the container with id 7bb14da3bab1256aff77bb54b9042866be2e3f7c19850e516b0a4363437c4bd4 Dec 03 12:58:12 crc kubenswrapper[4986]: W1203 12:58:12.969227 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-4985fc4ec7a9385540b48b2dec2a2565f894c438686acd68c0c74df85046c195 WatchSource:0}: Error finding container 4985fc4ec7a9385540b48b2dec2a2565f894c438686acd68c0c74df85046c195: Status 404 returned error can't find the container with id 4985fc4ec7a9385540b48b2dec2a2565f894c438686acd68c0c74df85046c195 Dec 03 12:58:13 crc kubenswrapper[4986]: I1203 12:58:13.683234 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"4985fc4ec7a9385540b48b2dec2a2565f894c438686acd68c0c74df85046c195"} Dec 03 12:58:13 crc kubenswrapper[4986]: I1203 12:58:13.684457 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"7bb14da3bab1256aff77bb54b9042866be2e3f7c19850e516b0a4363437c4bd4"} Dec 03 12:58:13 crc kubenswrapper[4986]: I1203 12:58:13.685402 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"8c1d85a6d96a95df7139f01a1378cab46b64810bf9145c20d74fa38af4b6ef87"} Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.689037 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"aff6cd29e0918114ad7231d66af67bdc44e0c8229bda1f6feabd9035fe172b8a"} Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.708272 4986 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.751431 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-pjgnh"] Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.751824 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pjgnh" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.753333 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ptgv8"] Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.753856 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-ptgv8" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.754183 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-npjwk"] Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.754488 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-npjwk" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.754822 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-glvvh"] Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.755264 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-glvvh" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.756051 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-72n7g"] Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.756517 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-62rd9"] Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.756536 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-72n7g" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.761737 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-62rd9" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.784792 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2k592"] Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.785455 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vz29n"] Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.785834 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vz29n" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.785935 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-2k592" Dec 03 12:58:14 crc kubenswrapper[4986]: W1203 12:58:14.785954 4986 reflector.go:561] object-"openshift-console-operator"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-console-operator": no relationship found between node 'crc' and this object Dec 03 12:58:14 crc kubenswrapper[4986]: E1203 12:58:14.786033 4986 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-console-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.786158 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.786173 4986 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.786313 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.786428 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.786713 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 03 12:58:14 crc kubenswrapper[4986]: W1203 12:58:14.786762 4986 reflector.go:561] object-"openshift-apiserver"/"image-import-ca": failed to list *v1.ConfigMap: configmaps "image-import-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Dec 03 12:58:14 crc kubenswrapper[4986]: E1203 12:58:14.786782 4986 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"image-import-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"image-import-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.786763 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.786904 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 03 12:58:14 crc kubenswrapper[4986]: W1203 12:58:14.786941 4986 reflector.go:561] 
object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr": failed to list *v1.Secret: secrets "console-operator-dockercfg-4xjcr" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-console-operator": no relationship found between node 'crc' and this object Dec 03 12:58:14 crc kubenswrapper[4986]: E1203 12:58:14.786979 4986 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"console-operator-dockercfg-4xjcr\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"console-operator-dockercfg-4xjcr\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-console-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.786714 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.787065 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 03 12:58:14 crc kubenswrapper[4986]: W1203 12:58:14.787267 4986 reflector.go:561] object-"openshift-console-operator"/"trusted-ca": failed to list *v1.ConfigMap: configmaps "trusted-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-console-operator": no relationship found between node 'crc' and this object Dec 03 12:58:14 crc kubenswrapper[4986]: E1203 12:58:14.787332 4986 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"trusted-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"trusted-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-console-operator\": no relationship found 
between node 'crc' and this object" logger="UnhandledError" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.787298 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.787365 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 03 12:58:14 crc kubenswrapper[4986]: W1203 12:58:14.787368 4986 reflector.go:561] object-"openshift-console-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-console-operator": no relationship found between node 'crc' and this object Dec 03 12:58:14 crc kubenswrapper[4986]: W1203 12:58:14.786714 4986 reflector.go:561] object-"openshift-console-operator"/"console-operator-config": failed to list *v1.ConfigMap: configmaps "console-operator-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-console-operator": no relationship found between node 'crc' and this object Dec 03 12:58:14 crc kubenswrapper[4986]: E1203 12:58:14.787426 4986 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"console-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"console-operator-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-console-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 03 12:58:14 crc kubenswrapper[4986]: E1203 12:58:14.787398 4986 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is 
forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-console-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.787459 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.787514 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.787600 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.787844 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 03 12:58:14 crc kubenswrapper[4986]: W1203 12:58:14.787961 4986 reflector.go:561] object-"openshift-console-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-console-operator": no relationship found between node 'crc' and this object Dec 03 12:58:14 crc kubenswrapper[4986]: E1203 12:58:14.787982 4986 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-console-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.788192 4986 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.788678 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.788768 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.789598 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.789694 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.789835 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.789995 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.790138 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.790382 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2glzg"] Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.790542 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.790888 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-sk8ll"] Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.791198 4986 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-sk8ll" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.791257 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.791352 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.791427 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.791465 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2glzg" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.791392 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.792176 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-md588"] Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.792239 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.792684 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-md588" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.792923 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c246g"] Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.793123 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.794780 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.795191 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.796224 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.796681 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.796795 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.798617 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.798734 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.798779 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 03 12:58:14 crc 
kubenswrapper[4986]: I1203 12:58:14.798852 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.799038 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.799178 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.799343 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.799423 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.799474 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.799565 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.799595 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.799699 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.800026 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-wvzt8"] Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.800485 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-hhmft"] Dec 
03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.800928 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-xn84j"] Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.801429 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-xn84j" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.802336 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c246g" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.803453 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-wvzt8" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.803800 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hhmft" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.804352 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.804549 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.804674 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.804760 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.804800 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.804911 4986 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.805111 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.805224 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.805356 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.805427 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.806173 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4c7tt"] Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.805456 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.805483 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.805513 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.805538 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.805561 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 03 12:58:14 crc 
kubenswrapper[4986]: I1203 12:58:14.805699 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.805738 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.806794 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4c7tt" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.805872 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.805890 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.805916 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.805972 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.806147 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.807605 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5825d5b3-a095-48d6-9365-d03d1faa63ca-serving-cert\") pod \"apiserver-76f77b778f-72n7g\" (UID: \"5825d5b3-a095-48d6-9365-d03d1faa63ca\") " pod="openshift-apiserver/apiserver-76f77b778f-72n7g" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.807638 4986 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-audit-policies\") pod \"oauth-openshift-558db77b4-2k592\" (UID: \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\") " pod="openshift-authentication/oauth-openshift-558db77b4-2k592" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.807661 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-2k592\" (UID: \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\") " pod="openshift-authentication/oauth-openshift-558db77b4-2k592" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.807682 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr6m9\" (UniqueName: \"kubernetes.io/projected/0725188c-6c61-4369-8247-ffdda7e830e8-kube-api-access-kr6m9\") pod \"controller-manager-879f6c89f-ptgv8\" (UID: \"0725188c-6c61-4369-8247-ffdda7e830e8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ptgv8" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.807701 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5c9ab9a7-030c-4d45-8c75-950f457bb69c-audit-policies\") pod \"apiserver-7bbb656c7d-62rd9\" (UID: \"5c9ab9a7-030c-4d45-8c75-950f457bb69c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-62rd9" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.807720 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5c9ab9a7-030c-4d45-8c75-950f457bb69c-encryption-config\") pod 
\"apiserver-7bbb656c7d-62rd9\" (UID: \"5c9ab9a7-030c-4d45-8c75-950f457bb69c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-62rd9" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.807741 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggt4k\" (UniqueName: \"kubernetes.io/projected/5c9ab9a7-030c-4d45-8c75-950f457bb69c-kube-api-access-ggt4k\") pod \"apiserver-7bbb656c7d-62rd9\" (UID: \"5c9ab9a7-030c-4d45-8c75-950f457bb69c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-62rd9" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.807761 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0725188c-6c61-4369-8247-ffdda7e830e8-client-ca\") pod \"controller-manager-879f6c89f-ptgv8\" (UID: \"0725188c-6c61-4369-8247-ffdda7e830e8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ptgv8" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.807781 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c9ab9a7-030c-4d45-8c75-950f457bb69c-serving-cert\") pod \"apiserver-7bbb656c7d-62rd9\" (UID: \"5c9ab9a7-030c-4d45-8c75-950f457bb69c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-62rd9" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.807801 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5825d5b3-a095-48d6-9365-d03d1faa63ca-audit-dir\") pod \"apiserver-76f77b778f-72n7g\" (UID: \"5825d5b3-a095-48d6-9365-d03d1faa63ca\") " pod="openshift-apiserver/apiserver-76f77b778f-72n7g" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.807821 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-2k592\" (UID: \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\") " pod="openshift-authentication/oauth-openshift-558db77b4-2k592" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.807844 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llkv9\" (UniqueName: \"kubernetes.io/projected/5825d5b3-a095-48d6-9365-d03d1faa63ca-kube-api-access-llkv9\") pod \"apiserver-76f77b778f-72n7g\" (UID: \"5825d5b3-a095-48d6-9365-d03d1faa63ca\") " pod="openshift-apiserver/apiserver-76f77b778f-72n7g" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.807865 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5825d5b3-a095-48d6-9365-d03d1faa63ca-config\") pod \"apiserver-76f77b778f-72n7g\" (UID: \"5825d5b3-a095-48d6-9365-d03d1faa63ca\") " pod="openshift-apiserver/apiserver-76f77b778f-72n7g" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.807887 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/005be756-d1ff-452c-aa4e-e4df06f28839-config\") pod \"console-operator-58897d9998-glvvh\" (UID: \"005be756-d1ff-452c-aa4e-e4df06f28839\") " pod="openshift-console-operator/console-operator-58897d9998-glvvh" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.807908 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-2k592\" (UID: \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-2k592" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.807930 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-2k592\" (UID: \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\") " pod="openshift-authentication/oauth-openshift-558db77b4-2k592" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.807952 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrj84\" (UniqueName: \"kubernetes.io/projected/01d4de81-8b50-4231-912d-3a65797a9754-kube-api-access-vrj84\") pod \"route-controller-manager-6576b87f9c-npjwk\" (UID: \"01d4de81-8b50-4231-912d-3a65797a9754\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-npjwk" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.807974 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5825d5b3-a095-48d6-9365-d03d1faa63ca-image-import-ca\") pod \"apiserver-76f77b778f-72n7g\" (UID: \"5825d5b3-a095-48d6-9365-d03d1faa63ca\") " pod="openshift-apiserver/apiserver-76f77b778f-72n7g" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.807994 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54da68a4-65a4-488f-a029-783cf51bdc04-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vz29n\" (UID: \"54da68a4-65a4-488f-a029-783cf51bdc04\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vz29n" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.808015 4986 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-2k592\" (UID: \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\") " pod="openshift-authentication/oauth-openshift-558db77b4-2k592" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.808038 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3e7e7e0f-8493-47c5-859d-3e7046c7ddab-machine-approver-tls\") pod \"machine-approver-56656f9798-pjgnh\" (UID: \"3e7e7e0f-8493-47c5-859d-3e7046c7ddab\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pjgnh" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.808058 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/005be756-d1ff-452c-aa4e-e4df06f28839-serving-cert\") pod \"console-operator-58897d9998-glvvh\" (UID: \"005be756-d1ff-452c-aa4e-e4df06f28839\") " pod="openshift-console-operator/console-operator-58897d9998-glvvh" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.808082 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3e7e7e0f-8493-47c5-859d-3e7046c7ddab-auth-proxy-config\") pod \"machine-approver-56656f9798-pjgnh\" (UID: \"3e7e7e0f-8493-47c5-859d-3e7046c7ddab\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pjgnh" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.808101 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5825d5b3-a095-48d6-9365-d03d1faa63ca-audit\") pod \"apiserver-76f77b778f-72n7g\" (UID: 
\"5825d5b3-a095-48d6-9365-d03d1faa63ca\") " pod="openshift-apiserver/apiserver-76f77b778f-72n7g" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.808121 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnfjq\" (UniqueName: \"kubernetes.io/projected/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-kube-api-access-xnfjq\") pod \"oauth-openshift-558db77b4-2k592\" (UID: \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\") " pod="openshift-authentication/oauth-openshift-558db77b4-2k592" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.808142 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-2k592\" (UID: \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\") " pod="openshift-authentication/oauth-openshift-558db77b4-2k592" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.808163 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5825d5b3-a095-48d6-9365-d03d1faa63ca-trusted-ca-bundle\") pod \"apiserver-76f77b778f-72n7g\" (UID: \"5825d5b3-a095-48d6-9365-d03d1faa63ca\") " pod="openshift-apiserver/apiserver-76f77b778f-72n7g" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.808181 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c9ab9a7-030c-4d45-8c75-950f457bb69c-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-62rd9\" (UID: \"5c9ab9a7-030c-4d45-8c75-950f457bb69c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-62rd9" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.808204 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-9nlqq\" (UniqueName: \"kubernetes.io/projected/005be756-d1ff-452c-aa4e-e4df06f28839-kube-api-access-9nlqq\") pod \"console-operator-58897d9998-glvvh\" (UID: \"005be756-d1ff-452c-aa4e-e4df06f28839\") " pod="openshift-console-operator/console-operator-58897d9998-glvvh" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.808225 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-2k592\" (UID: \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\") " pod="openshift-authentication/oauth-openshift-558db77b4-2k592" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.808245 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5825d5b3-a095-48d6-9365-d03d1faa63ca-encryption-config\") pod \"apiserver-76f77b778f-72n7g\" (UID: \"5825d5b3-a095-48d6-9365-d03d1faa63ca\") " pod="openshift-apiserver/apiserver-76f77b778f-72n7g" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.808267 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01d4de81-8b50-4231-912d-3a65797a9754-client-ca\") pod \"route-controller-manager-6576b87f9c-npjwk\" (UID: \"01d4de81-8b50-4231-912d-3a65797a9754\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-npjwk" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.808307 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0725188c-6c61-4369-8247-ffdda7e830e8-config\") pod \"controller-manager-879f6c89f-ptgv8\" (UID: \"0725188c-6c61-4369-8247-ffdda7e830e8\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-ptgv8" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.808331 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5c9ab9a7-030c-4d45-8c75-950f457bb69c-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-62rd9\" (UID: \"5c9ab9a7-030c-4d45-8c75-950f457bb69c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-62rd9" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.808351 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-2k592\" (UID: \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\") " pod="openshift-authentication/oauth-openshift-558db77b4-2k592" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.808372 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01d4de81-8b50-4231-912d-3a65797a9754-config\") pod \"route-controller-manager-6576b87f9c-npjwk\" (UID: \"01d4de81-8b50-4231-912d-3a65797a9754\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-npjwk" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.808393 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp7f9\" (UniqueName: \"kubernetes.io/projected/54da68a4-65a4-488f-a029-783cf51bdc04-kube-api-access-tp7f9\") pod \"openshift-apiserver-operator-796bbdcf4f-vz29n\" (UID: \"54da68a4-65a4-488f-a029-783cf51bdc04\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vz29n" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.808559 4986 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01d4de81-8b50-4231-912d-3a65797a9754-serving-cert\") pod \"route-controller-manager-6576b87f9c-npjwk\" (UID: \"01d4de81-8b50-4231-912d-3a65797a9754\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-npjwk" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.808582 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5825d5b3-a095-48d6-9365-d03d1faa63ca-etcd-client\") pod \"apiserver-76f77b778f-72n7g\" (UID: \"5825d5b3-a095-48d6-9365-d03d1faa63ca\") " pod="openshift-apiserver/apiserver-76f77b778f-72n7g" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.808638 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e7e7e0f-8493-47c5-859d-3e7046c7ddab-config\") pod \"machine-approver-56656f9798-pjgnh\" (UID: \"3e7e7e0f-8493-47c5-859d-3e7046c7ddab\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pjgnh" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.808729 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-audit-dir\") pod \"oauth-openshift-558db77b4-2k592\" (UID: \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\") " pod="openshift-authentication/oauth-openshift-558db77b4-2k592" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.808780 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5c9ab9a7-030c-4d45-8c75-950f457bb69c-etcd-client\") pod \"apiserver-7bbb656c7d-62rd9\" (UID: \"5c9ab9a7-030c-4d45-8c75-950f457bb69c\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-62rd9" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.808803 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-2k592\" (UID: \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\") " pod="openshift-authentication/oauth-openshift-558db77b4-2k592" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.808827 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54da68a4-65a4-488f-a029-783cf51bdc04-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vz29n\" (UID: \"54da68a4-65a4-488f-a029-783cf51bdc04\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vz29n" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.808864 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kd75\" (UniqueName: \"kubernetes.io/projected/3e7e7e0f-8493-47c5-859d-3e7046c7ddab-kube-api-access-7kd75\") pod \"machine-approver-56656f9798-pjgnh\" (UID: \"3e7e7e0f-8493-47c5-859d-3e7046c7ddab\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pjgnh" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.808881 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0725188c-6c61-4369-8247-ffdda7e830e8-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-ptgv8\" (UID: \"0725188c-6c61-4369-8247-ffdda7e830e8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ptgv8" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.808907 4986 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5825d5b3-a095-48d6-9365-d03d1faa63ca-node-pullsecrets\") pod \"apiserver-76f77b778f-72n7g\" (UID: \"5825d5b3-a095-48d6-9365-d03d1faa63ca\") " pod="openshift-apiserver/apiserver-76f77b778f-72n7g" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.808923 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5c9ab9a7-030c-4d45-8c75-950f457bb69c-audit-dir\") pod \"apiserver-7bbb656c7d-62rd9\" (UID: \"5c9ab9a7-030c-4d45-8c75-950f457bb69c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-62rd9" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.808954 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/005be756-d1ff-452c-aa4e-e4df06f28839-trusted-ca\") pod \"console-operator-58897d9998-glvvh\" (UID: \"005be756-d1ff-452c-aa4e-e4df06f28839\") " pod="openshift-console-operator/console-operator-58897d9998-glvvh" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.808971 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-2k592\" (UID: \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\") " pod="openshift-authentication/oauth-openshift-558db77b4-2k592" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.808987 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-2k592\" (UID: 
\"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\") " pod="openshift-authentication/oauth-openshift-558db77b4-2k592" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.809009 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0725188c-6c61-4369-8247-ffdda7e830e8-serving-cert\") pod \"controller-manager-879f6c89f-ptgv8\" (UID: \"0725188c-6c61-4369-8247-ffdda7e830e8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ptgv8" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.809048 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5825d5b3-a095-48d6-9365-d03d1faa63ca-etcd-serving-ca\") pod \"apiserver-76f77b778f-72n7g\" (UID: \"5825d5b3-a095-48d6-9365-d03d1faa63ca\") " pod="openshift-apiserver/apiserver-76f77b778f-72n7g" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.810126 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-npjwk"] Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.812050 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.812095 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-72n7g"] Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.813076 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 03 12:58:14 crc kubenswrapper[4986]: I1203 12:58:14.813205 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.440163 4986 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.447807 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.448504 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.448715 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.448739 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.450656 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5825d5b3-a095-48d6-9365-d03d1faa63ca-trusted-ca-bundle\") pod \"apiserver-76f77b778f-72n7g\" (UID: \"5825d5b3-a095-48d6-9365-d03d1faa63ca\") " pod="openshift-apiserver/apiserver-76f77b778f-72n7g" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.450712 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c9ab9a7-030c-4d45-8c75-950f457bb69c-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-62rd9\" (UID: \"5c9ab9a7-030c-4d45-8c75-950f457bb69c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-62rd9" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.450751 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nlqq\" (UniqueName: \"kubernetes.io/projected/005be756-d1ff-452c-aa4e-e4df06f28839-kube-api-access-9nlqq\") pod \"console-operator-58897d9998-glvvh\" (UID: 
\"005be756-d1ff-452c-aa4e-e4df06f28839\") " pod="openshift-console-operator/console-operator-58897d9998-glvvh" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.450784 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-2k592\" (UID: \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\") " pod="openshift-authentication/oauth-openshift-558db77b4-2k592" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.450823 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt5qr\" (UniqueName: \"kubernetes.io/projected/1b4d169b-dd53-42aa-9b14-f9bd24e801e6-kube-api-access-wt5qr\") pod \"cluster-image-registry-operator-dc59b4c8b-c246g\" (UID: \"1b4d169b-dd53-42aa-9b14-f9bd24e801e6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c246g" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.450858 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a077eb6-dc4e-4b2c-88be-eb4fad075e7a-serving-cert\") pod \"openshift-config-operator-7777fb866f-hhmft\" (UID: \"6a077eb6-dc4e-4b2c-88be-eb4fad075e7a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hhmft" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.450892 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5825d5b3-a095-48d6-9365-d03d1faa63ca-encryption-config\") pod \"apiserver-76f77b778f-72n7g\" (UID: \"5825d5b3-a095-48d6-9365-d03d1faa63ca\") " pod="openshift-apiserver/apiserver-76f77b778f-72n7g" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.450924 4986 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/1b4d169b-dd53-42aa-9b14-f9bd24e801e6-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-c246g\" (UID: \"1b4d169b-dd53-42aa-9b14-f9bd24e801e6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c246g" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.450960 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0725188c-6c61-4369-8247-ffdda7e830e8-config\") pod \"controller-manager-879f6c89f-ptgv8\" (UID: \"0725188c-6c61-4369-8247-ffdda7e830e8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ptgv8" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.450992 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2crr2\" (UniqueName: \"kubernetes.io/projected/f51b3907-2cbf-4b80-a800-411404195052-kube-api-access-2crr2\") pod \"cluster-samples-operator-665b6dd947-2glzg\" (UID: \"f51b3907-2cbf-4b80-a800-411404195052\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2glzg" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.451025 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01d4de81-8b50-4231-912d-3a65797a9754-client-ca\") pod \"route-controller-manager-6576b87f9c-npjwk\" (UID: \"01d4de81-8b50-4231-912d-3a65797a9754\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-npjwk" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.451055 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5c9ab9a7-030c-4d45-8c75-950f457bb69c-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-62rd9\" (UID: 
\"5c9ab9a7-030c-4d45-8c75-950f457bb69c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-62rd9" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.451085 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df95s\" (UniqueName: \"kubernetes.io/projected/a069c6b7-9e3b-40fd-b830-0d2a82fadd9a-kube-api-access-df95s\") pod \"console-f9d7485db-sk8ll\" (UID: \"a069c6b7-9e3b-40fd-b830-0d2a82fadd9a\") " pod="openshift-console/console-f9d7485db-sk8ll" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.451114 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cee77bd5-92e1-4458-bdf2-49912954144d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-4c7tt\" (UID: \"cee77bd5-92e1-4458-bdf2-49912954144d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4c7tt" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.451175 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-2k592\" (UID: \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\") " pod="openshift-authentication/oauth-openshift-558db77b4-2k592" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.451206 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01d4de81-8b50-4231-912d-3a65797a9754-config\") pod \"route-controller-manager-6576b87f9c-npjwk\" (UID: \"01d4de81-8b50-4231-912d-3a65797a9754\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-npjwk" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.451262 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-tp7f9\" (UniqueName: \"kubernetes.io/projected/54da68a4-65a4-488f-a029-783cf51bdc04-kube-api-access-tp7f9\") pod \"openshift-apiserver-operator-796bbdcf4f-vz29n\" (UID: \"54da68a4-65a4-488f-a029-783cf51bdc04\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vz29n" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.451319 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01d4de81-8b50-4231-912d-3a65797a9754-serving-cert\") pod \"route-controller-manager-6576b87f9c-npjwk\" (UID: \"01d4de81-8b50-4231-912d-3a65797a9754\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-npjwk" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.451350 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5825d5b3-a095-48d6-9365-d03d1faa63ca-etcd-client\") pod \"apiserver-76f77b778f-72n7g\" (UID: \"5825d5b3-a095-48d6-9365-d03d1faa63ca\") " pod="openshift-apiserver/apiserver-76f77b778f-72n7g" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.451385 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/6a077eb6-dc4e-4b2c-88be-eb4fad075e7a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-hhmft\" (UID: \"6a077eb6-dc4e-4b2c-88be-eb4fad075e7a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hhmft" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.451429 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e7e7e0f-8493-47c5-859d-3e7046c7ddab-config\") pod \"machine-approver-56656f9798-pjgnh\" (UID: \"3e7e7e0f-8493-47c5-859d-3e7046c7ddab\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pjgnh" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.451460 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km2d9\" (UniqueName: \"kubernetes.io/projected/6a077eb6-dc4e-4b2c-88be-eb4fad075e7a-kube-api-access-km2d9\") pod \"openshift-config-operator-7777fb866f-hhmft\" (UID: \"6a077eb6-dc4e-4b2c-88be-eb4fad075e7a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hhmft" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.451512 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-audit-dir\") pod \"oauth-openshift-558db77b4-2k592\" (UID: \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\") " pod="openshift-authentication/oauth-openshift-558db77b4-2k592" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.451545 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv9bz\" (UniqueName: \"kubernetes.io/projected/2588cb3b-8139-4529-a6e1-c57532afdfa7-kube-api-access-xv9bz\") pod \"machine-api-operator-5694c8668f-xn84j\" (UID: \"2588cb3b-8139-4529-a6e1-c57532afdfa7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xn84j" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.451585 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-2k592\" (UID: \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\") " pod="openshift-authentication/oauth-openshift-558db77b4-2k592" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.451618 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5c9ab9a7-030c-4d45-8c75-950f457bb69c-etcd-client\") pod \"apiserver-7bbb656c7d-62rd9\" (UID: \"5c9ab9a7-030c-4d45-8c75-950f457bb69c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-62rd9" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.451650 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2588cb3b-8139-4529-a6e1-c57532afdfa7-images\") pod \"machine-api-operator-5694c8668f-xn84j\" (UID: \"2588cb3b-8139-4529-a6e1-c57532afdfa7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xn84j" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.451682 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54da68a4-65a4-488f-a029-783cf51bdc04-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vz29n\" (UID: \"54da68a4-65a4-488f-a029-783cf51bdc04\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vz29n" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.451715 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1b4d169b-dd53-42aa-9b14-f9bd24e801e6-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-c246g\" (UID: \"1b4d169b-dd53-42aa-9b14-f9bd24e801e6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c246g" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.451752 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kd75\" (UniqueName: \"kubernetes.io/projected/3e7e7e0f-8493-47c5-859d-3e7046c7ddab-kube-api-access-7kd75\") pod \"machine-approver-56656f9798-pjgnh\" (UID: \"3e7e7e0f-8493-47c5-859d-3e7046c7ddab\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pjgnh" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.451784 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0725188c-6c61-4369-8247-ffdda7e830e8-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-ptgv8\" (UID: \"0725188c-6c61-4369-8247-ffdda7e830e8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ptgv8" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.451820 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j27sw\" (UniqueName: \"kubernetes.io/projected/cee77bd5-92e1-4458-bdf2-49912954144d-kube-api-access-j27sw\") pod \"openshift-controller-manager-operator-756b6f6bc6-4c7tt\" (UID: \"cee77bd5-92e1-4458-bdf2-49912954144d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4c7tt" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.451855 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5825d5b3-a095-48d6-9365-d03d1faa63ca-node-pullsecrets\") pod \"apiserver-76f77b778f-72n7g\" (UID: \"5825d5b3-a095-48d6-9365-d03d1faa63ca\") " pod="openshift-apiserver/apiserver-76f77b778f-72n7g" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.451885 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5c9ab9a7-030c-4d45-8c75-950f457bb69c-audit-dir\") pod \"apiserver-7bbb656c7d-62rd9\" (UID: \"5c9ab9a7-030c-4d45-8c75-950f457bb69c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-62rd9" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.451915 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/a069c6b7-9e3b-40fd-b830-0d2a82fadd9a-trusted-ca-bundle\") pod \"console-f9d7485db-sk8ll\" (UID: \"a069c6b7-9e3b-40fd-b830-0d2a82fadd9a\") " pod="openshift-console/console-f9d7485db-sk8ll" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.451948 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/005be756-d1ff-452c-aa4e-e4df06f28839-trusted-ca\") pod \"console-operator-58897d9998-glvvh\" (UID: \"005be756-d1ff-452c-aa4e-e4df06f28839\") " pod="openshift-console-operator/console-operator-58897d9998-glvvh" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.452009 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-2k592\" (UID: \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\") " pod="openshift-authentication/oauth-openshift-558db77b4-2k592" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.452043 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-2k592\" (UID: \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\") " pod="openshift-authentication/oauth-openshift-558db77b4-2k592" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.452077 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0725188c-6c61-4369-8247-ffdda7e830e8-serving-cert\") pod \"controller-manager-879f6c89f-ptgv8\" (UID: \"0725188c-6c61-4369-8247-ffdda7e830e8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ptgv8" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.452106 4986 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1b4d169b-dd53-42aa-9b14-f9bd24e801e6-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-c246g\" (UID: \"1b4d169b-dd53-42aa-9b14-f9bd24e801e6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c246g" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.452142 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5825d5b3-a095-48d6-9365-d03d1faa63ca-etcd-serving-ca\") pod \"apiserver-76f77b778f-72n7g\" (UID: \"5825d5b3-a095-48d6-9365-d03d1faa63ca\") " pod="openshift-apiserver/apiserver-76f77b778f-72n7g" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.452177 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ae56075-fe7d-4f73-9cd9-ba58ff4c6bfb-serving-cert\") pod \"authentication-operator-69f744f599-md588\" (UID: \"3ae56075-fe7d-4f73-9cd9-ba58ff4c6bfb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-md588" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.452206 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cee77bd5-92e1-4458-bdf2-49912954144d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-4c7tt\" (UID: \"cee77bd5-92e1-4458-bdf2-49912954144d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4c7tt" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.452228 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c9ab9a7-030c-4d45-8c75-950f457bb69c-trusted-ca-bundle\") pod 
\"apiserver-7bbb656c7d-62rd9\" (UID: \"5c9ab9a7-030c-4d45-8c75-950f457bb69c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-62rd9" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.452242 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-2k592\" (UID: \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\") " pod="openshift-authentication/oauth-openshift-558db77b4-2k592" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.452276 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kr6m9\" (UniqueName: \"kubernetes.io/projected/0725188c-6c61-4369-8247-ffdda7e830e8-kube-api-access-kr6m9\") pod \"controller-manager-879f6c89f-ptgv8\" (UID: \"0725188c-6c61-4369-8247-ffdda7e830e8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ptgv8" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.452530 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5c9ab9a7-030c-4d45-8c75-950f457bb69c-audit-policies\") pod \"apiserver-7bbb656c7d-62rd9\" (UID: \"5c9ab9a7-030c-4d45-8c75-950f457bb69c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-62rd9" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.452575 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5c9ab9a7-030c-4d45-8c75-950f457bb69c-encryption-config\") pod \"apiserver-7bbb656c7d-62rd9\" (UID: \"5c9ab9a7-030c-4d45-8c75-950f457bb69c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-62rd9" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.452610 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5825d5b3-a095-48d6-9365-d03d1faa63ca-serving-cert\") pod \"apiserver-76f77b778f-72n7g\" (UID: \"5825d5b3-a095-48d6-9365-d03d1faa63ca\") " pod="openshift-apiserver/apiserver-76f77b778f-72n7g" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.452639 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-audit-policies\") pod \"oauth-openshift-558db77b4-2k592\" (UID: \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\") " pod="openshift-authentication/oauth-openshift-558db77b4-2k592" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.452670 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggt4k\" (UniqueName: \"kubernetes.io/projected/5c9ab9a7-030c-4d45-8c75-950f457bb69c-kube-api-access-ggt4k\") pod \"apiserver-7bbb656c7d-62rd9\" (UID: \"5c9ab9a7-030c-4d45-8c75-950f457bb69c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-62rd9" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.452706 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ae56075-fe7d-4f73-9cd9-ba58ff4c6bfb-config\") pod \"authentication-operator-69f744f599-md588\" (UID: \"3ae56075-fe7d-4f73-9cd9-ba58ff4c6bfb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-md588" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.452739 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c9ab9a7-030c-4d45-8c75-950f457bb69c-serving-cert\") pod \"apiserver-7bbb656c7d-62rd9\" (UID: \"5c9ab9a7-030c-4d45-8c75-950f457bb69c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-62rd9" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.452804 4986 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a069c6b7-9e3b-40fd-b830-0d2a82fadd9a-console-serving-cert\") pod \"console-f9d7485db-sk8ll\" (UID: \"a069c6b7-9e3b-40fd-b830-0d2a82fadd9a\") " pod="openshift-console/console-f9d7485db-sk8ll" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.452868 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0725188c-6c61-4369-8247-ffdda7e830e8-client-ca\") pod \"controller-manager-879f6c89f-ptgv8\" (UID: \"0725188c-6c61-4369-8247-ffdda7e830e8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ptgv8" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.452907 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-2k592\" (UID: \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\") " pod="openshift-authentication/oauth-openshift-558db77b4-2k592" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.452942 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2588cb3b-8139-4529-a6e1-c57532afdfa7-config\") pod \"machine-api-operator-5694c8668f-xn84j\" (UID: \"2588cb3b-8139-4529-a6e1-c57532afdfa7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xn84j" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.452978 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5825d5b3-a095-48d6-9365-d03d1faa63ca-audit-dir\") pod \"apiserver-76f77b778f-72n7g\" (UID: \"5825d5b3-a095-48d6-9365-d03d1faa63ca\") " pod="openshift-apiserver/apiserver-76f77b778f-72n7g" 
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.453011 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llkv9\" (UniqueName: \"kubernetes.io/projected/5825d5b3-a095-48d6-9365-d03d1faa63ca-kube-api-access-llkv9\") pod \"apiserver-76f77b778f-72n7g\" (UID: \"5825d5b3-a095-48d6-9365-d03d1faa63ca\") " pod="openshift-apiserver/apiserver-76f77b778f-72n7g" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.453045 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5h4n\" (UniqueName: \"kubernetes.io/projected/d366e2f7-22ef-46e5-855f-0f26e6a9186c-kube-api-access-f5h4n\") pod \"downloads-7954f5f757-wvzt8\" (UID: \"d366e2f7-22ef-46e5-855f-0f26e6a9186c\") " pod="openshift-console/downloads-7954f5f757-wvzt8" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.453123 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5tbr\" (UniqueName: \"kubernetes.io/projected/3ae56075-fe7d-4f73-9cd9-ba58ff4c6bfb-kube-api-access-g5tbr\") pod \"authentication-operator-69f744f599-md588\" (UID: \"3ae56075-fe7d-4f73-9cd9-ba58ff4c6bfb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-md588" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.453175 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5825d5b3-a095-48d6-9365-d03d1faa63ca-config\") pod \"apiserver-76f77b778f-72n7g\" (UID: \"5825d5b3-a095-48d6-9365-d03d1faa63ca\") " pod="openshift-apiserver/apiserver-76f77b778f-72n7g" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.453211 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a069c6b7-9e3b-40fd-b830-0d2a82fadd9a-service-ca\") pod \"console-f9d7485db-sk8ll\" (UID: 
\"a069c6b7-9e3b-40fd-b830-0d2a82fadd9a\") " pod="openshift-console/console-f9d7485db-sk8ll" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.453248 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f51b3907-2cbf-4b80-a800-411404195052-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-2glzg\" (UID: \"f51b3907-2cbf-4b80-a800-411404195052\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2glzg" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.453322 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-2k592\" (UID: \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\") " pod="openshift-authentication/oauth-openshift-558db77b4-2k592" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.453358 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrj84\" (UniqueName: \"kubernetes.io/projected/01d4de81-8b50-4231-912d-3a65797a9754-kube-api-access-vrj84\") pod \"route-controller-manager-6576b87f9c-npjwk\" (UID: \"01d4de81-8b50-4231-912d-3a65797a9754\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-npjwk" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.453395 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/005be756-d1ff-452c-aa4e-e4df06f28839-config\") pod \"console-operator-58897d9998-glvvh\" (UID: \"005be756-d1ff-452c-aa4e-e4df06f28839\") " pod="openshift-console-operator/console-operator-58897d9998-glvvh" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.453448 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-2k592\" (UID: \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\") " pod="openshift-authentication/oauth-openshift-558db77b4-2k592" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.453482 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54da68a4-65a4-488f-a029-783cf51bdc04-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vz29n\" (UID: \"54da68a4-65a4-488f-a029-783cf51bdc04\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vz29n" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.453515 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a069c6b7-9e3b-40fd-b830-0d2a82fadd9a-oauth-serving-cert\") pod \"console-f9d7485db-sk8ll\" (UID: \"a069c6b7-9e3b-40fd-b830-0d2a82fadd9a\") " pod="openshift-console/console-f9d7485db-sk8ll" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.453554 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5825d5b3-a095-48d6-9365-d03d1faa63ca-image-import-ca\") pod \"apiserver-76f77b778f-72n7g\" (UID: \"5825d5b3-a095-48d6-9365-d03d1faa63ca\") " pod="openshift-apiserver/apiserver-76f77b778f-72n7g" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.453589 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-2k592\" (UID: \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\") " pod="openshift-authentication/oauth-openshift-558db77b4-2k592" Dec 03 12:58:15 
crc kubenswrapper[4986]: I1203 12:58:15.453626 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3e7e7e0f-8493-47c5-859d-3e7046c7ddab-machine-approver-tls\") pod \"machine-approver-56656f9798-pjgnh\" (UID: \"3e7e7e0f-8493-47c5-859d-3e7046c7ddab\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pjgnh" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.453825 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/005be756-d1ff-452c-aa4e-e4df06f28839-serving-cert\") pod \"console-operator-58897d9998-glvvh\" (UID: \"005be756-d1ff-452c-aa4e-e4df06f28839\") " pod="openshift-console-operator/console-operator-58897d9998-glvvh" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.453862 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/2588cb3b-8139-4529-a6e1-c57532afdfa7-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-xn84j\" (UID: \"2588cb3b-8139-4529-a6e1-c57532afdfa7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xn84j" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.453874 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ptgv8"] Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.453893 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a069c6b7-9e3b-40fd-b830-0d2a82fadd9a-console-config\") pod \"console-f9d7485db-sk8ll\" (UID: \"a069c6b7-9e3b-40fd-b830-0d2a82fadd9a\") " pod="openshift-console/console-f9d7485db-sk8ll" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.453912 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-62rd9"] Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.453980 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5825d5b3-a095-48d6-9365-d03d1faa63ca-audit\") pod \"apiserver-76f77b778f-72n7g\" (UID: \"5825d5b3-a095-48d6-9365-d03d1faa63ca\") " pod="openshift-apiserver/apiserver-76f77b778f-72n7g" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.454014 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnfjq\" (UniqueName: \"kubernetes.io/projected/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-kube-api-access-xnfjq\") pod \"oauth-openshift-558db77b4-2k592\" (UID: \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\") " pod="openshift-authentication/oauth-openshift-558db77b4-2k592" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.454070 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a069c6b7-9e3b-40fd-b830-0d2a82fadd9a-console-oauth-config\") pod \"console-f9d7485db-sk8ll\" (UID: \"a069c6b7-9e3b-40fd-b830-0d2a82fadd9a\") " pod="openshift-console/console-f9d7485db-sk8ll" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.454102 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ae56075-fe7d-4f73-9cd9-ba58ff4c6bfb-service-ca-bundle\") pod \"authentication-operator-69f744f599-md588\" (UID: \"3ae56075-fe7d-4f73-9cd9-ba58ff4c6bfb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-md588" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.454136 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/3e7e7e0f-8493-47c5-859d-3e7046c7ddab-auth-proxy-config\") pod \"machine-approver-56656f9798-pjgnh\" (UID: \"3e7e7e0f-8493-47c5-859d-3e7046c7ddab\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pjgnh" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.454191 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ae56075-fe7d-4f73-9cd9-ba58ff4c6bfb-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-md588\" (UID: \"3ae56075-fe7d-4f73-9cd9-ba58ff4c6bfb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-md588" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.454230 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-2k592\" (UID: \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\") " pod="openshift-authentication/oauth-openshift-558db77b4-2k592" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.450668 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.460067 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5c9ab9a7-030c-4d45-8c75-950f457bb69c-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-62rd9\" (UID: \"5c9ab9a7-030c-4d45-8c75-950f457bb69c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-62rd9" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.462174 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5825d5b3-a095-48d6-9365-d03d1faa63ca-node-pullsecrets\") pod 
\"apiserver-76f77b778f-72n7g\" (UID: \"5825d5b3-a095-48d6-9365-d03d1faa63ca\") " pod="openshift-apiserver/apiserver-76f77b778f-72n7g" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.467307 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0725188c-6c61-4369-8247-ffdda7e830e8-config\") pod \"controller-manager-879f6c89f-ptgv8\" (UID: \"0725188c-6c61-4369-8247-ffdda7e830e8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ptgv8" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.467306 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-2k592\" (UID: \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\") " pod="openshift-authentication/oauth-openshift-558db77b4-2k592" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.470365 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5825d5b3-a095-48d6-9365-d03d1faa63ca-audit-dir\") pod \"apiserver-76f77b778f-72n7g\" (UID: \"5825d5b3-a095-48d6-9365-d03d1faa63ca\") " pod="openshift-apiserver/apiserver-76f77b778f-72n7g" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.471264 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0725188c-6c61-4369-8247-ffdda7e830e8-client-ca\") pod \"controller-manager-879f6c89f-ptgv8\" (UID: \"0725188c-6c61-4369-8247-ffdda7e830e8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ptgv8" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.471396 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01d4de81-8b50-4231-912d-3a65797a9754-config\") pod 
\"route-controller-manager-6576b87f9c-npjwk\" (UID: \"01d4de81-8b50-4231-912d-3a65797a9754\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-npjwk" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.473622 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-2k592\" (UID: \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\") " pod="openshift-authentication/oauth-openshift-558db77b4-2k592" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.473917 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01d4de81-8b50-4231-912d-3a65797a9754-client-ca\") pod \"route-controller-manager-6576b87f9c-npjwk\" (UID: \"01d4de81-8b50-4231-912d-3a65797a9754\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-npjwk" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.474002 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5c9ab9a7-030c-4d45-8c75-950f457bb69c-audit-dir\") pod \"apiserver-7bbb656c7d-62rd9\" (UID: \"5c9ab9a7-030c-4d45-8c75-950f457bb69c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-62rd9" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.474610 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-2k592\" (UID: \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\") " pod="openshift-authentication/oauth-openshift-558db77b4-2k592" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.476641 4986 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"trusted-ca-bundle" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.478973 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-audit-dir\") pod \"oauth-openshift-558db77b4-2k592\" (UID: \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\") " pod="openshift-authentication/oauth-openshift-558db77b4-2k592" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.479122 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5825d5b3-a095-48d6-9365-d03d1faa63ca-config\") pod \"apiserver-76f77b778f-72n7g\" (UID: \"5825d5b3-a095-48d6-9365-d03d1faa63ca\") " pod="openshift-apiserver/apiserver-76f77b778f-72n7g" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.479677 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.479957 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.480135 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54da68a4-65a4-488f-a029-783cf51bdc04-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vz29n\" (UID: \"54da68a4-65a4-488f-a029-783cf51bdc04\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vz29n" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.480264 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-2k592\" (UID: \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-2k592" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.480565 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e7e7e0f-8493-47c5-859d-3e7046c7ddab-config\") pod \"machine-approver-56656f9798-pjgnh\" (UID: \"3e7e7e0f-8493-47c5-859d-3e7046c7ddab\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pjgnh" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.482740 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5c9ab9a7-030c-4d45-8c75-950f457bb69c-audit-policies\") pod \"apiserver-7bbb656c7d-62rd9\" (UID: \"5c9ab9a7-030c-4d45-8c75-950f457bb69c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-62rd9" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.453501 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.454580 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.456645 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.483577 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5825d5b3-a095-48d6-9365-d03d1faa63ca-audit\") pod \"apiserver-76f77b778f-72n7g\" (UID: \"5825d5b3-a095-48d6-9365-d03d1faa63ca\") " pod="openshift-apiserver/apiserver-76f77b778f-72n7g" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.484091 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-audit-policies\") pod \"oauth-openshift-558db77b4-2k592\" (UID: \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\") " pod="openshift-authentication/oauth-openshift-558db77b4-2k592" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.486456 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3e7e7e0f-8493-47c5-859d-3e7046c7ddab-auth-proxy-config\") pod \"machine-approver-56656f9798-pjgnh\" (UID: \"3e7e7e0f-8493-47c5-859d-3e7046c7ddab\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pjgnh" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.487005 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0725188c-6c61-4369-8247-ffdda7e830e8-serving-cert\") pod \"controller-manager-879f6c89f-ptgv8\" (UID: \"0725188c-6c61-4369-8247-ffdda7e830e8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ptgv8" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.487392 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5825d5b3-a095-48d6-9365-d03d1faa63ca-serving-cert\") pod \"apiserver-76f77b778f-72n7g\" (UID: \"5825d5b3-a095-48d6-9365-d03d1faa63ca\") " pod="openshift-apiserver/apiserver-76f77b778f-72n7g" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.487461 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.487562 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.487721 4986 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.487854 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.487914 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01d4de81-8b50-4231-912d-3a65797a9754-serving-cert\") pod \"route-controller-manager-6576b87f9c-npjwk\" (UID: \"01d4de81-8b50-4231-912d-3a65797a9754\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-npjwk" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.488022 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.488175 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.488474 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5825d5b3-a095-48d6-9365-d03d1faa63ca-trusted-ca-bundle\") pod \"apiserver-76f77b778f-72n7g\" (UID: \"5825d5b3-a095-48d6-9365-d03d1faa63ca\") " pod="openshift-apiserver/apiserver-76f77b778f-72n7g" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.488768 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.488989 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.489092 4986 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.489159 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.490344 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0725188c-6c61-4369-8247-ffdda7e830e8-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-ptgv8\" (UID: \"0725188c-6c61-4369-8247-ffdda7e830e8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ptgv8" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.490644 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5825d5b3-a095-48d6-9365-d03d1faa63ca-etcd-client\") pod \"apiserver-76f77b778f-72n7g\" (UID: \"5825d5b3-a095-48d6-9365-d03d1faa63ca\") " pod="openshift-apiserver/apiserver-76f77b778f-72n7g" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.490969 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.492750 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54da68a4-65a4-488f-a029-783cf51bdc04-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vz29n\" (UID: \"54da68a4-65a4-488f-a029-783cf51bdc04\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vz29n" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.493226 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-2k592\" (UID: \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\") " pod="openshift-authentication/oauth-openshift-558db77b4-2k592" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.496949 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5c9ab9a7-030c-4d45-8c75-950f457bb69c-encryption-config\") pod \"apiserver-7bbb656c7d-62rd9\" (UID: \"5c9ab9a7-030c-4d45-8c75-950f457bb69c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-62rd9" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.498834 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5825d5b3-a095-48d6-9365-d03d1faa63ca-etcd-serving-ca\") pod \"apiserver-76f77b778f-72n7g\" (UID: \"5825d5b3-a095-48d6-9365-d03d1faa63ca\") " pod="openshift-apiserver/apiserver-76f77b778f-72n7g" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.499444 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-2k592\" (UID: \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\") " pod="openshift-authentication/oauth-openshift-558db77b4-2k592" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.500201 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-2k592\" (UID: \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\") " pod="openshift-authentication/oauth-openshift-558db77b4-2k592" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 
12:58:15.500644 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.509126 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.509402 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-2k592\" (UID: \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\") " pod="openshift-authentication/oauth-openshift-558db77b4-2k592" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.512142 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-2k592\" (UID: \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\") " pod="openshift-authentication/oauth-openshift-558db77b4-2k592" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.512677 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5c9ab9a7-030c-4d45-8c75-950f457bb69c-etcd-client\") pod \"apiserver-7bbb656c7d-62rd9\" (UID: \"5c9ab9a7-030c-4d45-8c75-950f457bb69c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-62rd9" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.513147 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-2k592\" (UID: \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-2k592" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.513493 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggt4k\" (UniqueName: \"kubernetes.io/projected/5c9ab9a7-030c-4d45-8c75-950f457bb69c-kube-api-access-ggt4k\") pod \"apiserver-7bbb656c7d-62rd9\" (UID: \"5c9ab9a7-030c-4d45-8c75-950f457bb69c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-62rd9" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.515124 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnfjq\" (UniqueName: \"kubernetes.io/projected/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-kube-api-access-xnfjq\") pod \"oauth-openshift-558db77b4-2k592\" (UID: \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\") " pod="openshift-authentication/oauth-openshift-558db77b4-2k592" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.517376 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2glzg"] Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.518492 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrj84\" (UniqueName: \"kubernetes.io/projected/01d4de81-8b50-4231-912d-3a65797a9754-kube-api-access-vrj84\") pod \"route-controller-manager-6576b87f9c-npjwk\" (UID: \"01d4de81-8b50-4231-912d-3a65797a9754\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-npjwk" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.518612 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c246g"] Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.522863 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5825d5b3-a095-48d6-9365-d03d1faa63ca-encryption-config\") pod 
\"apiserver-76f77b778f-72n7g\" (UID: \"5825d5b3-a095-48d6-9365-d03d1faa63ca\") " pod="openshift-apiserver/apiserver-76f77b778f-72n7g" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.522902 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3e7e7e0f-8493-47c5-859d-3e7046c7ddab-machine-approver-tls\") pod \"machine-approver-56656f9798-pjgnh\" (UID: \"3e7e7e0f-8493-47c5-859d-3e7046c7ddab\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pjgnh" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.523046 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c9ab9a7-030c-4d45-8c75-950f457bb69c-serving-cert\") pod \"apiserver-7bbb656c7d-62rd9\" (UID: \"5c9ab9a7-030c-4d45-8c75-950f457bb69c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-62rd9" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.523129 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tp7f9\" (UniqueName: \"kubernetes.io/projected/54da68a4-65a4-488f-a029-783cf51bdc04-kube-api-access-tp7f9\") pod \"openshift-apiserver-operator-796bbdcf4f-vz29n\" (UID: \"54da68a4-65a4-488f-a029-783cf51bdc04\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vz29n" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.523452 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llkv9\" (UniqueName: \"kubernetes.io/projected/5825d5b3-a095-48d6-9365-d03d1faa63ca-kube-api-access-llkv9\") pod \"apiserver-76f77b778f-72n7g\" (UID: \"5825d5b3-a095-48d6-9365-d03d1faa63ca\") " pod="openshift-apiserver/apiserver-76f77b778f-72n7g" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.525970 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kd75\" (UniqueName: 
\"kubernetes.io/projected/3e7e7e0f-8493-47c5-859d-3e7046c7ddab-kube-api-access-7kd75\") pod \"machine-approver-56656f9798-pjgnh\" (UID: \"3e7e7e0f-8493-47c5-859d-3e7046c7ddab\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pjgnh" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.526018 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-md588"] Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.528696 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vz29n"] Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.528730 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2k592"] Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.535067 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-sk8ll"] Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.537065 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kr6m9\" (UniqueName: \"kubernetes.io/projected/0725188c-6c61-4369-8247-ffdda7e830e8-kube-api-access-kr6m9\") pod \"controller-manager-879f6c89f-ptgv8\" (UID: \"0725188c-6c61-4369-8247-ffdda7e830e8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ptgv8" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.538176 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-2k592\" (UID: \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\") " pod="openshift-authentication/oauth-openshift-558db77b4-2k592" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.540763 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-config-operator/openshift-config-operator-7777fb866f-hhmft"] Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.552343 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-xn84j"] Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.555350 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/2588cb3b-8139-4529-a6e1-c57532afdfa7-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-xn84j\" (UID: \"2588cb3b-8139-4529-a6e1-c57532afdfa7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xn84j" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.555381 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a069c6b7-9e3b-40fd-b830-0d2a82fadd9a-console-config\") pod \"console-f9d7485db-sk8ll\" (UID: \"a069c6b7-9e3b-40fd-b830-0d2a82fadd9a\") " pod="openshift-console/console-f9d7485db-sk8ll" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.555400 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a069c6b7-9e3b-40fd-b830-0d2a82fadd9a-console-oauth-config\") pod \"console-f9d7485db-sk8ll\" (UID: \"a069c6b7-9e3b-40fd-b830-0d2a82fadd9a\") " pod="openshift-console/console-f9d7485db-sk8ll" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.555418 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ae56075-fe7d-4f73-9cd9-ba58ff4c6bfb-service-ca-bundle\") pod \"authentication-operator-69f744f599-md588\" (UID: \"3ae56075-fe7d-4f73-9cd9-ba58ff4c6bfb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-md588" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.555434 4986 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ae56075-fe7d-4f73-9cd9-ba58ff4c6bfb-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-md588\" (UID: \"3ae56075-fe7d-4f73-9cd9-ba58ff4c6bfb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-md588" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.555457 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wt5qr\" (UniqueName: \"kubernetes.io/projected/1b4d169b-dd53-42aa-9b14-f9bd24e801e6-kube-api-access-wt5qr\") pod \"cluster-image-registry-operator-dc59b4c8b-c246g\" (UID: \"1b4d169b-dd53-42aa-9b14-f9bd24e801e6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c246g" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.555476 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a077eb6-dc4e-4b2c-88be-eb4fad075e7a-serving-cert\") pod \"openshift-config-operator-7777fb866f-hhmft\" (UID: \"6a077eb6-dc4e-4b2c-88be-eb4fad075e7a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hhmft" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.555508 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/1b4d169b-dd53-42aa-9b14-f9bd24e801e6-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-c246g\" (UID: \"1b4d169b-dd53-42aa-9b14-f9bd24e801e6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c246g" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.555526 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2crr2\" (UniqueName: \"kubernetes.io/projected/f51b3907-2cbf-4b80-a800-411404195052-kube-api-access-2crr2\") pod 
\"cluster-samples-operator-665b6dd947-2glzg\" (UID: \"f51b3907-2cbf-4b80-a800-411404195052\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2glzg" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.555550 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-df95s\" (UniqueName: \"kubernetes.io/projected/a069c6b7-9e3b-40fd-b830-0d2a82fadd9a-kube-api-access-df95s\") pod \"console-f9d7485db-sk8ll\" (UID: \"a069c6b7-9e3b-40fd-b830-0d2a82fadd9a\") " pod="openshift-console/console-f9d7485db-sk8ll" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.555567 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cee77bd5-92e1-4458-bdf2-49912954144d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-4c7tt\" (UID: \"cee77bd5-92e1-4458-bdf2-49912954144d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4c7tt" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.555599 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/6a077eb6-dc4e-4b2c-88be-eb4fad075e7a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-hhmft\" (UID: \"6a077eb6-dc4e-4b2c-88be-eb4fad075e7a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hhmft" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.555614 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-km2d9\" (UniqueName: \"kubernetes.io/projected/6a077eb6-dc4e-4b2c-88be-eb4fad075e7a-kube-api-access-km2d9\") pod \"openshift-config-operator-7777fb866f-hhmft\" (UID: \"6a077eb6-dc4e-4b2c-88be-eb4fad075e7a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hhmft" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.555644 4986 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv9bz\" (UniqueName: \"kubernetes.io/projected/2588cb3b-8139-4529-a6e1-c57532afdfa7-kube-api-access-xv9bz\") pod \"machine-api-operator-5694c8668f-xn84j\" (UID: \"2588cb3b-8139-4529-a6e1-c57532afdfa7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xn84j" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.555662 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2588cb3b-8139-4529-a6e1-c57532afdfa7-images\") pod \"machine-api-operator-5694c8668f-xn84j\" (UID: \"2588cb3b-8139-4529-a6e1-c57532afdfa7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xn84j" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.555680 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1b4d169b-dd53-42aa-9b14-f9bd24e801e6-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-c246g\" (UID: \"1b4d169b-dd53-42aa-9b14-f9bd24e801e6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c246g" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.555697 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j27sw\" (UniqueName: \"kubernetes.io/projected/cee77bd5-92e1-4458-bdf2-49912954144d-kube-api-access-j27sw\") pod \"openshift-controller-manager-operator-756b6f6bc6-4c7tt\" (UID: \"cee77bd5-92e1-4458-bdf2-49912954144d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4c7tt" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.555711 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a069c6b7-9e3b-40fd-b830-0d2a82fadd9a-trusted-ca-bundle\") pod \"console-f9d7485db-sk8ll\" (UID: 
\"a069c6b7-9e3b-40fd-b830-0d2a82fadd9a\") " pod="openshift-console/console-f9d7485db-sk8ll" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.555736 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1b4d169b-dd53-42aa-9b14-f9bd24e801e6-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-c246g\" (UID: \"1b4d169b-dd53-42aa-9b14-f9bd24e801e6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c246g" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.555764 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ae56075-fe7d-4f73-9cd9-ba58ff4c6bfb-serving-cert\") pod \"authentication-operator-69f744f599-md588\" (UID: \"3ae56075-fe7d-4f73-9cd9-ba58ff4c6bfb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-md588" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.555779 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cee77bd5-92e1-4458-bdf2-49912954144d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-4c7tt\" (UID: \"cee77bd5-92e1-4458-bdf2-49912954144d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4c7tt" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.555798 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ae56075-fe7d-4f73-9cd9-ba58ff4c6bfb-config\") pod \"authentication-operator-69f744f599-md588\" (UID: \"3ae56075-fe7d-4f73-9cd9-ba58ff4c6bfb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-md588" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.555814 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a069c6b7-9e3b-40fd-b830-0d2a82fadd9a-console-serving-cert\") pod \"console-f9d7485db-sk8ll\" (UID: \"a069c6b7-9e3b-40fd-b830-0d2a82fadd9a\") " pod="openshift-console/console-f9d7485db-sk8ll" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.555832 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2588cb3b-8139-4529-a6e1-c57532afdfa7-config\") pod \"machine-api-operator-5694c8668f-xn84j\" (UID: \"2588cb3b-8139-4529-a6e1-c57532afdfa7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xn84j" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.555849 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5h4n\" (UniqueName: \"kubernetes.io/projected/d366e2f7-22ef-46e5-855f-0f26e6a9186c-kube-api-access-f5h4n\") pod \"downloads-7954f5f757-wvzt8\" (UID: \"d366e2f7-22ef-46e5-855f-0f26e6a9186c\") " pod="openshift-console/downloads-7954f5f757-wvzt8" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.555868 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5tbr\" (UniqueName: \"kubernetes.io/projected/3ae56075-fe7d-4f73-9cd9-ba58ff4c6bfb-kube-api-access-g5tbr\") pod \"authentication-operator-69f744f599-md588\" (UID: \"3ae56075-fe7d-4f73-9cd9-ba58ff4c6bfb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-md588" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.555885 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a069c6b7-9e3b-40fd-b830-0d2a82fadd9a-service-ca\") pod \"console-f9d7485db-sk8ll\" (UID: \"a069c6b7-9e3b-40fd-b830-0d2a82fadd9a\") " pod="openshift-console/console-f9d7485db-sk8ll" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.555900 4986 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f51b3907-2cbf-4b80-a800-411404195052-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-2glzg\" (UID: \"f51b3907-2cbf-4b80-a800-411404195052\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2glzg" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.555934 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a069c6b7-9e3b-40fd-b830-0d2a82fadd9a-oauth-serving-cert\") pod \"console-f9d7485db-sk8ll\" (UID: \"a069c6b7-9e3b-40fd-b830-0d2a82fadd9a\") " pod="openshift-console/console-f9d7485db-sk8ll" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.556934 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a069c6b7-9e3b-40fd-b830-0d2a82fadd9a-oauth-serving-cert\") pod \"console-f9d7485db-sk8ll\" (UID: \"a069c6b7-9e3b-40fd-b830-0d2a82fadd9a\") " pod="openshift-console/console-f9d7485db-sk8ll" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.557986 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ae56075-fe7d-4f73-9cd9-ba58ff4c6bfb-config\") pod \"authentication-operator-69f744f599-md588\" (UID: \"3ae56075-fe7d-4f73-9cd9-ba58ff4c6bfb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-md588" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.559802 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ae56075-fe7d-4f73-9cd9-ba58ff4c6bfb-service-ca-bundle\") pod \"authentication-operator-69f744f599-md588\" (UID: \"3ae56075-fe7d-4f73-9cd9-ba58ff4c6bfb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-md588" Dec 03 
12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.573187 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/1b4d169b-dd53-42aa-9b14-f9bd24e801e6-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-c246g\" (UID: \"1b4d169b-dd53-42aa-9b14-f9bd24e801e6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c246g" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.573523 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2588cb3b-8139-4529-a6e1-c57532afdfa7-config\") pod \"machine-api-operator-5694c8668f-xn84j\" (UID: \"2588cb3b-8139-4529-a6e1-c57532afdfa7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xn84j" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.574446 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-wvzt8"] Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.576124 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cee77bd5-92e1-4458-bdf2-49912954144d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-4c7tt\" (UID: \"cee77bd5-92e1-4458-bdf2-49912954144d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4c7tt" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.582524 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2588cb3b-8139-4529-a6e1-c57532afdfa7-images\") pod \"machine-api-operator-5694c8668f-xn84j\" (UID: \"2588cb3b-8139-4529-a6e1-c57532afdfa7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xn84j" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.582710 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"service-ca\" (UniqueName: \"kubernetes.io/configmap/a069c6b7-9e3b-40fd-b830-0d2a82fadd9a-service-ca\") pod \"console-f9d7485db-sk8ll\" (UID: \"a069c6b7-9e3b-40fd-b830-0d2a82fadd9a\") " pod="openshift-console/console-f9d7485db-sk8ll" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.583355 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-glvvh"] Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.583369 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a069c6b7-9e3b-40fd-b830-0d2a82fadd9a-console-config\") pod \"console-f9d7485db-sk8ll\" (UID: \"a069c6b7-9e3b-40fd-b830-0d2a82fadd9a\") " pod="openshift-console/console-f9d7485db-sk8ll" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.598977 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/6a077eb6-dc4e-4b2c-88be-eb4fad075e7a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-hhmft\" (UID: \"6a077eb6-dc4e-4b2c-88be-eb4fad075e7a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hhmft" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.599094 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1b4d169b-dd53-42aa-9b14-f9bd24e801e6-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-c246g\" (UID: \"1b4d169b-dd53-42aa-9b14-f9bd24e801e6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c246g" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.590397 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a069c6b7-9e3b-40fd-b830-0d2a82fadd9a-console-oauth-config\") pod \"console-f9d7485db-sk8ll\" (UID: 
\"a069c6b7-9e3b-40fd-b830-0d2a82fadd9a\") " pod="openshift-console/console-f9d7485db-sk8ll"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.599819 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ae56075-fe7d-4f73-9cd9-ba58ff4c6bfb-serving-cert\") pod \"authentication-operator-69f744f599-md588\" (UID: \"3ae56075-fe7d-4f73-9cd9-ba58ff4c6bfb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-md588"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.601360 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4c7tt"]
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.601412 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cee77bd5-92e1-4458-bdf2-49912954144d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-4c7tt\" (UID: \"cee77bd5-92e1-4458-bdf2-49912954144d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4c7tt"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.601438 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-df95s\" (UniqueName: \"kubernetes.io/projected/a069c6b7-9e3b-40fd-b830-0d2a82fadd9a-kube-api-access-df95s\") pod \"console-f9d7485db-sk8ll\" (UID: \"a069c6b7-9e3b-40fd-b830-0d2a82fadd9a\") " pod="openshift-console/console-f9d7485db-sk8ll"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.602225 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f51b3907-2cbf-4b80-a800-411404195052-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-2glzg\" (UID: \"f51b3907-2cbf-4b80-a800-411404195052\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2glzg"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.602335 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-zn5lj"]
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.602652 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ae56075-fe7d-4f73-9cd9-ba58ff4c6bfb-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-md588\" (UID: \"3ae56075-fe7d-4f73-9cd9-ba58ff4c6bfb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-md588"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.602729 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/2588cb3b-8139-4529-a6e1-c57532afdfa7-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-xn84j\" (UID: \"2588cb3b-8139-4529-a6e1-c57532afdfa7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xn84j"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.603200 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xv9bz\" (UniqueName: \"kubernetes.io/projected/2588cb3b-8139-4529-a6e1-c57532afdfa7-kube-api-access-xv9bz\") pod \"machine-api-operator-5694c8668f-xn84j\" (UID: \"2588cb3b-8139-4529-a6e1-c57532afdfa7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xn84j"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.604209 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a069c6b7-9e3b-40fd-b830-0d2a82fadd9a-console-serving-cert\") pod \"console-f9d7485db-sk8ll\" (UID: \"a069c6b7-9e3b-40fd-b830-0d2a82fadd9a\") " pod="openshift-console/console-f9d7485db-sk8ll"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.604842 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a069c6b7-9e3b-40fd-b830-0d2a82fadd9a-trusted-ca-bundle\") pod \"console-f9d7485db-sk8ll\" (UID: \"a069c6b7-9e3b-40fd-b830-0d2a82fadd9a\") " pod="openshift-console/console-f9d7485db-sk8ll"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.606995 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5h4n\" (UniqueName: \"kubernetes.io/projected/d366e2f7-22ef-46e5-855f-0f26e6a9186c-kube-api-access-f5h4n\") pod \"downloads-7954f5f757-wvzt8\" (UID: \"d366e2f7-22ef-46e5-855f-0f26e6a9186c\") " pod="openshift-console/downloads-7954f5f757-wvzt8"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.607619 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vz29n"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.609897 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5tbr\" (UniqueName: \"kubernetes.io/projected/3ae56075-fe7d-4f73-9cd9-ba58ff4c6bfb-kube-api-access-g5tbr\") pod \"authentication-operator-69f744f599-md588\" (UID: \"3ae56075-fe7d-4f73-9cd9-ba58ff4c6bfb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-md588"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.609997 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1b4d169b-dd53-42aa-9b14-f9bd24e801e6-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-c246g\" (UID: \"1b4d169b-dd53-42aa-9b14-f9bd24e801e6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c246g"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.615585 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-hjt88"]
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.616156 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-jwlmm"]
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.616725 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kx8xd"]
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.617063 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-2k592"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.617108 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6k9lf"]
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.617270 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-km2d9\" (UniqueName: \"kubernetes.io/projected/6a077eb6-dc4e-4b2c-88be-eb4fad075e7a-kube-api-access-km2d9\") pod \"openshift-config-operator-7777fb866f-hhmft\" (UID: \"6a077eb6-dc4e-4b2c-88be-eb4fad075e7a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hhmft"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.617452 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-d7f6t"]
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.618063 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hjt88"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.618142 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-jwlmm"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.618228 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-85fg2"]
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.618326 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2crr2\" (UniqueName: \"kubernetes.io/projected/f51b3907-2cbf-4b80-a800-411404195052-kube-api-access-2crr2\") pod \"cluster-samples-operator-665b6dd947-2glzg\" (UID: \"f51b3907-2cbf-4b80-a800-411404195052\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2glzg"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.618552 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kx8xd"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.618602 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7xp9j"]
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.618781 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.618925 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-dqctr"]
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.619646 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-d7f6t"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.624480 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zzj7p"]
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.619674 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6k9lf"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.619872 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7xp9j"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.625263 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zzj7p"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.625443 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dqctr"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.620031 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a077eb6-dc4e-4b2c-88be-eb4fad075e7a-serving-cert\") pod \"openshift-config-operator-7777fb866f-hhmft\" (UID: \"6a077eb6-dc4e-4b2c-88be-eb4fad075e7a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hhmft"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.622343 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j27sw\" (UniqueName: \"kubernetes.io/projected/cee77bd5-92e1-4458-bdf2-49912954144d-kube-api-access-j27sw\") pod \"openshift-controller-manager-operator-756b6f6bc6-4c7tt\" (UID: \"cee77bd5-92e1-4458-bdf2-49912954144d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4c7tt"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.625190 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-2tq6k"]
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.621844 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.619701 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-85fg2"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.621903 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.621065 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wt5qr\" (UniqueName: \"kubernetes.io/projected/1b4d169b-dd53-42aa-9b14-f9bd24e801e6-kube-api-access-wt5qr\") pod \"cluster-image-registry-operator-dc59b4c8b-c246g\" (UID: \"1b4d169b-dd53-42aa-9b14-f9bd24e801e6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c246g"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.622586 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.622608 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.622941 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.628017 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.628119 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.624040 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.626052 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.626464 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.627956 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.627795 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-zn5lj"]
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.628241 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n6hjp"]
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.627960 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.627847 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2tq6k"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.628967 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.629011 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n6hjp"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.629932 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-sk8ll"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.631582 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s6wq8"]
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.632079 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s6wq8"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.632445 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412765-29wpf"]
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.633851 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.634244 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-575sd"]
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.634608 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412765-29wpf"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.635164 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-575sd"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.635196 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-9wnfz"]
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.635980 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-9wnfz"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.637922 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.640049 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-8hs4t"]
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.641673 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-8hs4t"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.641720 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kjctk"]
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.645350 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-rwrf2"]
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.645755 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2glzg"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.645901 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kjctk"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.645872 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rwrf2"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.645784 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-vlhbw"]
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.648027 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-vlhbw"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.657603 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.658761 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-rskz9"]
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.659291 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-szpmg"]
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.659564 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-qxpz7"]
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.659646 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-rskz9"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.659822 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-szpmg"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.671478 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-md588"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.672704 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-hjt88"]
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.672745 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-k58fj"]
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.673083 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qxpz7"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.676167 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-xn84j"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.679757 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-7wtlt"]
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.679988 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-k58fj"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.683770 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-r4wn7"]
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.684145 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-jwlmm"]
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.684176 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kx8xd"]
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.684189 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-dqctr"]
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.684271 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-7wtlt"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.684711 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-r4wn7"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.685435 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pjgnh"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.685792 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-d7f6t"]
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.688651 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c246g"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.689888 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-85fg2"]
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.691713 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6k9lf"]
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.691773 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7xp9j"]
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.693009 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s6wq8"]
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.693618 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.694643 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n6hjp"]
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.696994 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-rwrf2"]
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.698390 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-2tq6k"]
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.700939 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-vlhbw"]
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.703315 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412765-29wpf"]
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.709033 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-rskz9"]
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.709135 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-575sd"]
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.710848 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-62rd9"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.712503 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zzj7p"]
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.712549 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kjctk"]
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.713547 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-8hs4t"]
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.715098 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-k58fj"]
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.716574 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-7wtlt"]
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.716943 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.718067 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-szpmg"]
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.719004 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-wvzt8"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.719353 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-qxpz7"]
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.734468 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hhmft"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.737475 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.737525 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-ptgv8"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.748566 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4c7tt"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.753803 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.757819 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cdca175-2ec1-427b-8c68-99a065f9d5d7-config\") pod \"kube-controller-manager-operator-78b949d7b-85fg2\" (UID: \"2cdca175-2ec1-427b-8c68-99a065f9d5d7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-85fg2"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.757874 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcq64\" (UniqueName: \"kubernetes.io/projected/68f994ae-dec7-4be6-b4e7-7a1875b40f4f-kube-api-access-fcq64\") pod \"olm-operator-6b444d44fb-575sd\" (UID: \"68f994ae-dec7-4be6-b4e7-7a1875b40f4f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-575sd"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.757902 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6645982e-cae8-46d2-8b4d-e5d2a01ad127-config\") pod \"service-ca-operator-777779d784-rwrf2\" (UID: \"6645982e-cae8-46d2-8b4d-e5d2a01ad127\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rwrf2"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.757935 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/869ead05-05ad-4705-8193-0f5cf6987257-config\") pod \"kube-apiserver-operator-766d6c64bb-6k9lf\" (UID: \"869ead05-05ad-4705-8193-0f5cf6987257\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6k9lf"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.757952 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a5484631-f1bd-49f7-a0bd-46e783e44095-profile-collector-cert\") pod \"catalog-operator-68c6474976-n6hjp\" (UID: \"a5484631-f1bd-49f7-a0bd-46e783e44095\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n6hjp"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.757989 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fe999f2d-8f13-4ea6-8afb-ff99ff665d91-etcd-service-ca\") pod \"etcd-operator-b45778765-d7f6t\" (UID: \"fe999f2d-8f13-4ea6-8afb-ff99ff665d91\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d7f6t"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.758008 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqnrb\" (UniqueName: \"kubernetes.io/projected/d68d65c4-d762-426b-84bf-d1e05738d0ee-kube-api-access-zqnrb\") pod \"dns-operator-744455d44c-jwlmm\" (UID: \"d68d65c4-d762-426b-84bf-d1e05738d0ee\") " pod="openshift-dns-operator/dns-operator-744455d44c-jwlmm"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.758024 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/68f994ae-dec7-4be6-b4e7-7a1875b40f4f-srv-cert\") pod \"olm-operator-6b444d44fb-575sd\" (UID: \"68f994ae-dec7-4be6-b4e7-7a1875b40f4f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-575sd"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.758051 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4cd4de5c-8c29-4f8e-bde8-1c21bd833d80-proxy-tls\") pod \"machine-config-controller-84d6567774-2tq6k\" (UID: \"4cd4de5c-8c29-4f8e-bde8-1c21bd833d80\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2tq6k"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.758086 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cdca175-2ec1-427b-8c68-99a065f9d5d7-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-85fg2\" (UID: \"2cdca175-2ec1-427b-8c68-99a065f9d5d7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-85fg2"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.758114 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7cc7f044-be33-41ca-b4af-bdf5a0ca066f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-vlhbw\" (UID: \"7cc7f044-be33-41ca-b4af-bdf5a0ca066f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vlhbw"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.758129 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/d22bd794-105c-4c41-b1c3-46723ed0cb79-mountpoint-dir\") pod \"csi-hostpathplugin-8hs4t\" (UID: \"d22bd794-105c-4c41-b1c3-46723ed0cb79\") " pod="hostpath-provisioner/csi-hostpathplugin-8hs4t"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.758187 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/d22bd794-105c-4c41-b1c3-46723ed0cb79-plugins-dir\") pod \"csi-hostpathplugin-8hs4t\" (UID: \"d22bd794-105c-4c41-b1c3-46723ed0cb79\") " pod="hostpath-provisioner/csi-hostpathplugin-8hs4t"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.758212 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06ff5ccd-6dfa-4826-8a1d-929ac5deadf4-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-7xp9j\" (UID: \"06ff5ccd-6dfa-4826-8a1d-929ac5deadf4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7xp9j"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.758229 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/89c50427-14ae-409d-89d5-a56be0ff97d1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zzj7p\" (UID: \"89c50427-14ae-409d-89d5-a56be0ff97d1\") " pod="openshift-marketplace/marketplace-operator-79b997595-zzj7p"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.758247 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fe999f2d-8f13-4ea6-8afb-ff99ff665d91-etcd-ca\") pod \"etcd-operator-b45778765-d7f6t\" (UID: \"fe999f2d-8f13-4ea6-8afb-ff99ff665d91\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d7f6t"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.758366 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/89c50427-14ae-409d-89d5-a56be0ff97d1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zzj7p\" (UID: \"89c50427-14ae-409d-89d5-a56be0ff97d1\") " pod="openshift-marketplace/marketplace-operator-79b997595-zzj7p"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.758382 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5zdm\" (UniqueName: \"kubernetes.io/projected/d22bd794-105c-4c41-b1c3-46723ed0cb79-kube-api-access-t5zdm\") pod \"csi-hostpathplugin-8hs4t\" (UID: \"d22bd794-105c-4c41-b1c3-46723ed0cb79\") " pod="hostpath-provisioner/csi-hostpathplugin-8hs4t"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.758403 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcvf6\" (UniqueName: \"kubernetes.io/projected/89c50427-14ae-409d-89d5-a56be0ff97d1-kube-api-access-lcvf6\") pod \"marketplace-operator-79b997595-zzj7p\" (UID: \"89c50427-14ae-409d-89d5-a56be0ff97d1\") " pod="openshift-marketplace/marketplace-operator-79b997595-zzj7p"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.758443 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2cdca175-2ec1-427b-8c68-99a065f9d5d7-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-85fg2\" (UID: \"2cdca175-2ec1-427b-8c68-99a065f9d5d7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-85fg2"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.758466 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fe999f2d-8f13-4ea6-8afb-ff99ff665d91-etcd-client\") pod \"etcd-operator-b45778765-d7f6t\" (UID: \"fe999f2d-8f13-4ea6-8afb-ff99ff665d91\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d7f6t"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.758489 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d68d65c4-d762-426b-84bf-d1e05738d0ee-metrics-tls\") pod \"dns-operator-744455d44c-jwlmm\" (UID: \"d68d65c4-d762-426b-84bf-d1e05738d0ee\") " pod="openshift-dns-operator/dns-operator-744455d44c-jwlmm"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.758506 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gg4x\" (UniqueName: \"kubernetes.io/projected/7cc7f044-be33-41ca-b4af-bdf5a0ca066f-kube-api-access-6gg4x\") pod \"multus-admission-controller-857f4d67dd-vlhbw\" (UID: \"7cc7f044-be33-41ca-b4af-bdf5a0ca066f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vlhbw"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.758524 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/05a3b920-eb04-4864-81ac-924ba7c63d4e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-s6wq8\" (UID: \"05a3b920-eb04-4864-81ac-924ba7c63d4e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s6wq8"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.758541 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4rzv\" (UniqueName: \"kubernetes.io/projected/6645982e-cae8-46d2-8b4d-e5d2a01ad127-kube-api-access-f4rzv\") pod \"service-ca-operator-777779d784-rwrf2\" (UID: \"6645982e-cae8-46d2-8b4d-e5d2a01ad127\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rwrf2"
Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.758555 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03a67bcc-6366-4e73-a3d9-39dcdd4f2f82-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kx8xd\" (UID: \"03a67bcc-6366-4e73-a3d9-39dcdd4f2f82\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kx8xd" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.758590 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxkb8\" (UniqueName: \"kubernetes.io/projected/fe999f2d-8f13-4ea6-8afb-ff99ff665d91-kube-api-access-wxkb8\") pod \"etcd-operator-b45778765-d7f6t\" (UID: \"fe999f2d-8f13-4ea6-8afb-ff99ff665d91\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d7f6t" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.758606 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78pb6\" (UniqueName: \"kubernetes.io/projected/a5484631-f1bd-49f7-a0bd-46e783e44095-kube-api-access-78pb6\") pod \"catalog-operator-68c6474976-n6hjp\" (UID: \"a5484631-f1bd-49f7-a0bd-46e783e44095\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n6hjp" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.758620 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe999f2d-8f13-4ea6-8afb-ff99ff665d91-config\") pod \"etcd-operator-b45778765-d7f6t\" (UID: \"fe999f2d-8f13-4ea6-8afb-ff99ff665d91\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d7f6t" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.758653 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d22bd794-105c-4c41-b1c3-46723ed0cb79-registration-dir\") pod \"csi-hostpathplugin-8hs4t\" (UID: \"d22bd794-105c-4c41-b1c3-46723ed0cb79\") " pod="hostpath-provisioner/csi-hostpathplugin-8hs4t" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.758678 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9fjf\" 
(UniqueName: \"kubernetes.io/projected/4cd4de5c-8c29-4f8e-bde8-1c21bd833d80-kube-api-access-t9fjf\") pod \"machine-config-controller-84d6567774-2tq6k\" (UID: \"4cd4de5c-8c29-4f8e-bde8-1c21bd833d80\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2tq6k" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.758692 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/d22bd794-105c-4c41-b1c3-46723ed0cb79-csi-data-dir\") pod \"csi-hostpathplugin-8hs4t\" (UID: \"d22bd794-105c-4c41-b1c3-46723ed0cb79\") " pod="hostpath-provisioner/csi-hostpathplugin-8hs4t" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.758716 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/869ead05-05ad-4705-8193-0f5cf6987257-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-6k9lf\" (UID: \"869ead05-05ad-4705-8193-0f5cf6987257\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6k9lf" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.758756 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a5484631-f1bd-49f7-a0bd-46e783e44095-srv-cert\") pod \"catalog-operator-68c6474976-n6hjp\" (UID: \"a5484631-f1bd-49f7-a0bd-46e783e44095\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n6hjp" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.758771 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d22bd794-105c-4c41-b1c3-46723ed0cb79-socket-dir\") pod \"csi-hostpathplugin-8hs4t\" (UID: \"d22bd794-105c-4c41-b1c3-46723ed0cb79\") " pod="hostpath-provisioner/csi-hostpathplugin-8hs4t" Dec 03 12:58:15 crc 
kubenswrapper[4986]: I1203 12:58:15.758788 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rdpr\" (UniqueName: \"kubernetes.io/projected/05a3b920-eb04-4864-81ac-924ba7c63d4e-kube-api-access-7rdpr\") pod \"control-plane-machine-set-operator-78cbb6b69f-s6wq8\" (UID: \"05a3b920-eb04-4864-81ac-924ba7c63d4e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s6wq8" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.758803 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03a67bcc-6366-4e73-a3d9-39dcdd4f2f82-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kx8xd\" (UID: \"03a67bcc-6366-4e73-a3d9-39dcdd4f2f82\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kx8xd" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.758847 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4cd4de5c-8c29-4f8e-bde8-1c21bd833d80-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-2tq6k\" (UID: \"4cd4de5c-8c29-4f8e-bde8-1c21bd833d80\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2tq6k" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.758868 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/68f994ae-dec7-4be6-b4e7-7a1875b40f4f-profile-collector-cert\") pod \"olm-operator-6b444d44fb-575sd\" (UID: \"68f994ae-dec7-4be6-b4e7-7a1875b40f4f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-575sd" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.758901 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/03a67bcc-6366-4e73-a3d9-39dcdd4f2f82-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kx8xd\" (UID: \"03a67bcc-6366-4e73-a3d9-39dcdd4f2f82\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kx8xd" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.758975 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6645982e-cae8-46d2-8b4d-e5d2a01ad127-serving-cert\") pod \"service-ca-operator-777779d784-rwrf2\" (UID: \"6645982e-cae8-46d2-8b4d-e5d2a01ad127\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rwrf2" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.759029 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06ff5ccd-6dfa-4826-8a1d-929ac5deadf4-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-7xp9j\" (UID: \"06ff5ccd-6dfa-4826-8a1d-929ac5deadf4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7xp9j" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.759057 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb4m9\" (UniqueName: \"kubernetes.io/projected/2ad8791b-88af-4c86-8ed4-999a3357c3b7-kube-api-access-tb4m9\") pod \"migrator-59844c95c7-dqctr\" (UID: \"2ad8791b-88af-4c86-8ed4-999a3357c3b7\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dqctr" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.759073 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/869ead05-05ad-4705-8193-0f5cf6987257-kube-api-access\") pod 
\"kube-apiserver-operator-766d6c64bb-6k9lf\" (UID: \"869ead05-05ad-4705-8193-0f5cf6987257\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6k9lf" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.759110 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe999f2d-8f13-4ea6-8afb-ff99ff665d91-serving-cert\") pod \"etcd-operator-b45778765-d7f6t\" (UID: \"fe999f2d-8f13-4ea6-8afb-ff99ff665d91\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d7f6t" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.759137 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx5vt\" (UniqueName: \"kubernetes.io/projected/06ff5ccd-6dfa-4826-8a1d-929ac5deadf4-kube-api-access-fx5vt\") pod \"kube-storage-version-migrator-operator-b67b599dd-7xp9j\" (UID: \"06ff5ccd-6dfa-4826-8a1d-929ac5deadf4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7xp9j" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.779902 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.779958 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-npjwk" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.796848 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.815043 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.834593 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.844781 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vz29n"] Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.855840 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.860418 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/d22bd794-105c-4c41-b1c3-46723ed0cb79-plugins-dir\") pod \"csi-hostpathplugin-8hs4t\" (UID: \"d22bd794-105c-4c41-b1c3-46723ed0cb79\") " pod="hostpath-provisioner/csi-hostpathplugin-8hs4t" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.860454 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06ff5ccd-6dfa-4826-8a1d-929ac5deadf4-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-7xp9j\" (UID: \"06ff5ccd-6dfa-4826-8a1d-929ac5deadf4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7xp9j" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.860472 4986 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fe999f2d-8f13-4ea6-8afb-ff99ff665d91-etcd-ca\") pod \"etcd-operator-b45778765-d7f6t\" (UID: \"fe999f2d-8f13-4ea6-8afb-ff99ff665d91\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d7f6t" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.860488 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/89c50427-14ae-409d-89d5-a56be0ff97d1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zzj7p\" (UID: \"89c50427-14ae-409d-89d5-a56be0ff97d1\") " pod="openshift-marketplace/marketplace-operator-79b997595-zzj7p" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.860509 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/89c50427-14ae-409d-89d5-a56be0ff97d1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zzj7p\" (UID: \"89c50427-14ae-409d-89d5-a56be0ff97d1\") " pod="openshift-marketplace/marketplace-operator-79b997595-zzj7p" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.860533 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcvf6\" (UniqueName: \"kubernetes.io/projected/89c50427-14ae-409d-89d5-a56be0ff97d1-kube-api-access-lcvf6\") pod \"marketplace-operator-79b997595-zzj7p\" (UID: \"89c50427-14ae-409d-89d5-a56be0ff97d1\") " pod="openshift-marketplace/marketplace-operator-79b997595-zzj7p" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.860556 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5zdm\" (UniqueName: \"kubernetes.io/projected/d22bd794-105c-4c41-b1c3-46723ed0cb79-kube-api-access-t5zdm\") pod \"csi-hostpathplugin-8hs4t\" (UID: \"d22bd794-105c-4c41-b1c3-46723ed0cb79\") " 
pod="hostpath-provisioner/csi-hostpathplugin-8hs4t" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.861047 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2cdca175-2ec1-427b-8c68-99a065f9d5d7-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-85fg2\" (UID: \"2cdca175-2ec1-427b-8c68-99a065f9d5d7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-85fg2" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.861109 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fe999f2d-8f13-4ea6-8afb-ff99ff665d91-etcd-client\") pod \"etcd-operator-b45778765-d7f6t\" (UID: \"fe999f2d-8f13-4ea6-8afb-ff99ff665d91\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d7f6t" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.861135 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d68d65c4-d762-426b-84bf-d1e05738d0ee-metrics-tls\") pod \"dns-operator-744455d44c-jwlmm\" (UID: \"d68d65c4-d762-426b-84bf-d1e05738d0ee\") " pod="openshift-dns-operator/dns-operator-744455d44c-jwlmm" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.861158 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gg4x\" (UniqueName: \"kubernetes.io/projected/7cc7f044-be33-41ca-b4af-bdf5a0ca066f-kube-api-access-6gg4x\") pod \"multus-admission-controller-857f4d67dd-vlhbw\" (UID: \"7cc7f044-be33-41ca-b4af-bdf5a0ca066f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vlhbw" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.861181 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/05a3b920-eb04-4864-81ac-924ba7c63d4e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-s6wq8\" (UID: \"05a3b920-eb04-4864-81ac-924ba7c63d4e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s6wq8" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.861205 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4rzv\" (UniqueName: \"kubernetes.io/projected/6645982e-cae8-46d2-8b4d-e5d2a01ad127-kube-api-access-f4rzv\") pod \"service-ca-operator-777779d784-rwrf2\" (UID: \"6645982e-cae8-46d2-8b4d-e5d2a01ad127\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rwrf2" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.861230 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03a67bcc-6366-4e73-a3d9-39dcdd4f2f82-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kx8xd\" (UID: \"03a67bcc-6366-4e73-a3d9-39dcdd4f2f82\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kx8xd" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.861258 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxkb8\" (UniqueName: \"kubernetes.io/projected/fe999f2d-8f13-4ea6-8afb-ff99ff665d91-kube-api-access-wxkb8\") pod \"etcd-operator-b45778765-d7f6t\" (UID: \"fe999f2d-8f13-4ea6-8afb-ff99ff665d91\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d7f6t" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.861325 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78pb6\" (UniqueName: \"kubernetes.io/projected/a5484631-f1bd-49f7-a0bd-46e783e44095-kube-api-access-78pb6\") pod \"catalog-operator-68c6474976-n6hjp\" (UID: \"a5484631-f1bd-49f7-a0bd-46e783e44095\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n6hjp" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.861355 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe999f2d-8f13-4ea6-8afb-ff99ff665d91-config\") pod \"etcd-operator-b45778765-d7f6t\" (UID: \"fe999f2d-8f13-4ea6-8afb-ff99ff665d91\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d7f6t" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.861375 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d22bd794-105c-4c41-b1c3-46723ed0cb79-registration-dir\") pod \"csi-hostpathplugin-8hs4t\" (UID: \"d22bd794-105c-4c41-b1c3-46723ed0cb79\") " pod="hostpath-provisioner/csi-hostpathplugin-8hs4t" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.861399 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9fjf\" (UniqueName: \"kubernetes.io/projected/4cd4de5c-8c29-4f8e-bde8-1c21bd833d80-kube-api-access-t9fjf\") pod \"machine-config-controller-84d6567774-2tq6k\" (UID: \"4cd4de5c-8c29-4f8e-bde8-1c21bd833d80\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2tq6k" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.861423 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/d22bd794-105c-4c41-b1c3-46723ed0cb79-csi-data-dir\") pod \"csi-hostpathplugin-8hs4t\" (UID: \"d22bd794-105c-4c41-b1c3-46723ed0cb79\") " pod="hostpath-provisioner/csi-hostpathplugin-8hs4t" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.861448 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/869ead05-05ad-4705-8193-0f5cf6987257-serving-cert\") pod 
\"kube-apiserver-operator-766d6c64bb-6k9lf\" (UID: \"869ead05-05ad-4705-8193-0f5cf6987257\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6k9lf" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.861481 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a5484631-f1bd-49f7-a0bd-46e783e44095-srv-cert\") pod \"catalog-operator-68c6474976-n6hjp\" (UID: \"a5484631-f1bd-49f7-a0bd-46e783e44095\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n6hjp" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.861505 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d22bd794-105c-4c41-b1c3-46723ed0cb79-socket-dir\") pod \"csi-hostpathplugin-8hs4t\" (UID: \"d22bd794-105c-4c41-b1c3-46723ed0cb79\") " pod="hostpath-provisioner/csi-hostpathplugin-8hs4t" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.861528 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rdpr\" (UniqueName: \"kubernetes.io/projected/05a3b920-eb04-4864-81ac-924ba7c63d4e-kube-api-access-7rdpr\") pod \"control-plane-machine-set-operator-78cbb6b69f-s6wq8\" (UID: \"05a3b920-eb04-4864-81ac-924ba7c63d4e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s6wq8" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.861571 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03a67bcc-6366-4e73-a3d9-39dcdd4f2f82-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kx8xd\" (UID: \"03a67bcc-6366-4e73-a3d9-39dcdd4f2f82\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kx8xd" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.861596 4986 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4cd4de5c-8c29-4f8e-bde8-1c21bd833d80-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-2tq6k\" (UID: \"4cd4de5c-8c29-4f8e-bde8-1c21bd833d80\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2tq6k" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.861618 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/68f994ae-dec7-4be6-b4e7-7a1875b40f4f-profile-collector-cert\") pod \"olm-operator-6b444d44fb-575sd\" (UID: \"68f994ae-dec7-4be6-b4e7-7a1875b40f4f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-575sd" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.861639 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/03a67bcc-6366-4e73-a3d9-39dcdd4f2f82-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kx8xd\" (UID: \"03a67bcc-6366-4e73-a3d9-39dcdd4f2f82\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kx8xd" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.861669 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6645982e-cae8-46d2-8b4d-e5d2a01ad127-serving-cert\") pod \"service-ca-operator-777779d784-rwrf2\" (UID: \"6645982e-cae8-46d2-8b4d-e5d2a01ad127\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rwrf2" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.861706 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06ff5ccd-6dfa-4826-8a1d-929ac5deadf4-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-7xp9j\" (UID: 
\"06ff5ccd-6dfa-4826-8a1d-929ac5deadf4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7xp9j" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.861729 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb4m9\" (UniqueName: \"kubernetes.io/projected/2ad8791b-88af-4c86-8ed4-999a3357c3b7-kube-api-access-tb4m9\") pod \"migrator-59844c95c7-dqctr\" (UID: \"2ad8791b-88af-4c86-8ed4-999a3357c3b7\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dqctr" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.861751 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/869ead05-05ad-4705-8193-0f5cf6987257-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-6k9lf\" (UID: \"869ead05-05ad-4705-8193-0f5cf6987257\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6k9lf" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.861775 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe999f2d-8f13-4ea6-8afb-ff99ff665d91-serving-cert\") pod \"etcd-operator-b45778765-d7f6t\" (UID: \"fe999f2d-8f13-4ea6-8afb-ff99ff665d91\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d7f6t" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.861792 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/d22bd794-105c-4c41-b1c3-46723ed0cb79-plugins-dir\") pod \"csi-hostpathplugin-8hs4t\" (UID: \"d22bd794-105c-4c41-b1c3-46723ed0cb79\") " pod="hostpath-provisioner/csi-hostpathplugin-8hs4t" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.861797 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fx5vt\" (UniqueName: 
\"kubernetes.io/projected/06ff5ccd-6dfa-4826-8a1d-929ac5deadf4-kube-api-access-fx5vt\") pod \"kube-storage-version-migrator-operator-b67b599dd-7xp9j\" (UID: \"06ff5ccd-6dfa-4826-8a1d-929ac5deadf4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7xp9j" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.862021 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cdca175-2ec1-427b-8c68-99a065f9d5d7-config\") pod \"kube-controller-manager-operator-78b949d7b-85fg2\" (UID: \"2cdca175-2ec1-427b-8c68-99a065f9d5d7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-85fg2" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.862051 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcq64\" (UniqueName: \"kubernetes.io/projected/68f994ae-dec7-4be6-b4e7-7a1875b40f4f-kube-api-access-fcq64\") pod \"olm-operator-6b444d44fb-575sd\" (UID: \"68f994ae-dec7-4be6-b4e7-7a1875b40f4f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-575sd" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.862072 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6645982e-cae8-46d2-8b4d-e5d2a01ad127-config\") pod \"service-ca-operator-777779d784-rwrf2\" (UID: \"6645982e-cae8-46d2-8b4d-e5d2a01ad127\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rwrf2" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.862092 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/869ead05-05ad-4705-8193-0f5cf6987257-config\") pod \"kube-apiserver-operator-766d6c64bb-6k9lf\" (UID: \"869ead05-05ad-4705-8193-0f5cf6987257\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6k9lf" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.862113 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a5484631-f1bd-49f7-a0bd-46e783e44095-profile-collector-cert\") pod \"catalog-operator-68c6474976-n6hjp\" (UID: \"a5484631-f1bd-49f7-a0bd-46e783e44095\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n6hjp" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.862146 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fe999f2d-8f13-4ea6-8afb-ff99ff665d91-etcd-service-ca\") pod \"etcd-operator-b45778765-d7f6t\" (UID: \"fe999f2d-8f13-4ea6-8afb-ff99ff665d91\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d7f6t" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.862164 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqnrb\" (UniqueName: \"kubernetes.io/projected/d68d65c4-d762-426b-84bf-d1e05738d0ee-kube-api-access-zqnrb\") pod \"dns-operator-744455d44c-jwlmm\" (UID: \"d68d65c4-d762-426b-84bf-d1e05738d0ee\") " pod="openshift-dns-operator/dns-operator-744455d44c-jwlmm" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.862189 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4cd4de5c-8c29-4f8e-bde8-1c21bd833d80-proxy-tls\") pod \"machine-config-controller-84d6567774-2tq6k\" (UID: \"4cd4de5c-8c29-4f8e-bde8-1c21bd833d80\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2tq6k" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.862209 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/68f994ae-dec7-4be6-b4e7-7a1875b40f4f-srv-cert\") pod \"olm-operator-6b444d44fb-575sd\" (UID: \"68f994ae-dec7-4be6-b4e7-7a1875b40f4f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-575sd" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.862235 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cdca175-2ec1-427b-8c68-99a065f9d5d7-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-85fg2\" (UID: \"2cdca175-2ec1-427b-8c68-99a065f9d5d7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-85fg2" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.862256 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/d22bd794-105c-4c41-b1c3-46723ed0cb79-mountpoint-dir\") pod \"csi-hostpathplugin-8hs4t\" (UID: \"d22bd794-105c-4c41-b1c3-46723ed0cb79\") " pod="hostpath-provisioner/csi-hostpathplugin-8hs4t" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.862711 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe999f2d-8f13-4ea6-8afb-ff99ff665d91-config\") pod \"etcd-operator-b45778765-d7f6t\" (UID: \"fe999f2d-8f13-4ea6-8afb-ff99ff665d91\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d7f6t" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.862790 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d22bd794-105c-4c41-b1c3-46723ed0cb79-registration-dir\") pod \"csi-hostpathplugin-8hs4t\" (UID: \"d22bd794-105c-4c41-b1c3-46723ed0cb79\") " pod="hostpath-provisioner/csi-hostpathplugin-8hs4t" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.862898 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/d22bd794-105c-4c41-b1c3-46723ed0cb79-csi-data-dir\") pod \"csi-hostpathplugin-8hs4t\" (UID: \"d22bd794-105c-4c41-b1c3-46723ed0cb79\") " pod="hostpath-provisioner/csi-hostpathplugin-8hs4t" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.863014 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d22bd794-105c-4c41-b1c3-46723ed0cb79-socket-dir\") pod \"csi-hostpathplugin-8hs4t\" (UID: \"d22bd794-105c-4c41-b1c3-46723ed0cb79\") " pod="hostpath-provisioner/csi-hostpathplugin-8hs4t" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.864547 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fe999f2d-8f13-4ea6-8afb-ff99ff665d91-etcd-service-ca\") pod \"etcd-operator-b45778765-d7f6t\" (UID: \"fe999f2d-8f13-4ea6-8afb-ff99ff665d91\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d7f6t" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.864844 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4cd4de5c-8c29-4f8e-bde8-1c21bd833d80-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-2tq6k\" (UID: \"4cd4de5c-8c29-4f8e-bde8-1c21bd833d80\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2tq6k" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.866452 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/d22bd794-105c-4c41-b1c3-46723ed0cb79-mountpoint-dir\") pod \"csi-hostpathplugin-8hs4t\" (UID: \"d22bd794-105c-4c41-b1c3-46723ed0cb79\") " pod="hostpath-provisioner/csi-hostpathplugin-8hs4t" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.862275 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7cc7f044-be33-41ca-b4af-bdf5a0ca066f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-vlhbw\" (UID: \"7cc7f044-be33-41ca-b4af-bdf5a0ca066f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vlhbw" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.868589 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d68d65c4-d762-426b-84bf-d1e05738d0ee-metrics-tls\") pod \"dns-operator-744455d44c-jwlmm\" (UID: \"d68d65c4-d762-426b-84bf-d1e05738d0ee\") " pod="openshift-dns-operator/dns-operator-744455d44c-jwlmm" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.868887 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe999f2d-8f13-4ea6-8afb-ff99ff665d91-serving-cert\") pod \"etcd-operator-b45778765-d7f6t\" (UID: \"fe999f2d-8f13-4ea6-8afb-ff99ff665d91\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d7f6t" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.868917 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03a67bcc-6366-4e73-a3d9-39dcdd4f2f82-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kx8xd\" (UID: \"03a67bcc-6366-4e73-a3d9-39dcdd4f2f82\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kx8xd" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.869824 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fe999f2d-8f13-4ea6-8afb-ff99ff665d91-etcd-client\") pod \"etcd-operator-b45778765-d7f6t\" (UID: \"fe999f2d-8f13-4ea6-8afb-ff99ff665d91\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d7f6t" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.870814 4986 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fe999f2d-8f13-4ea6-8afb-ff99ff665d91-etcd-ca\") pod \"etcd-operator-b45778765-d7f6t\" (UID: \"fe999f2d-8f13-4ea6-8afb-ff99ff665d91\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d7f6t" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.872396 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03a67bcc-6366-4e73-a3d9-39dcdd4f2f82-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kx8xd\" (UID: \"03a67bcc-6366-4e73-a3d9-39dcdd4f2f82\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kx8xd" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.877911 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.895522 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/89c50427-14ae-409d-89d5-a56be0ff97d1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zzj7p\" (UID: \"89c50427-14ae-409d-89d5-a56be0ff97d1\") " pod="openshift-marketplace/marketplace-operator-79b997595-zzj7p" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.895933 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.918573 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.936431 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.947837 4986 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/869ead05-05ad-4705-8193-0f5cf6987257-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-6k9lf\" (UID: \"869ead05-05ad-4705-8193-0f5cf6987257\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6k9lf" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.954108 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.955496 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/869ead05-05ad-4705-8193-0f5cf6987257-config\") pod \"kube-apiserver-operator-766d6c64bb-6k9lf\" (UID: \"869ead05-05ad-4705-8193-0f5cf6987257\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6k9lf" Dec 03 12:58:15 crc kubenswrapper[4986]: I1203 12:58:15.977614 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:15.998543 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.007721 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-md588"] Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.019764 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.043148 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 03 12:58:16 
crc kubenswrapper[4986]: I1203 12:58:16.050596 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2k592"] Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.061667 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.079112 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.098163 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.106887 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06ff5ccd-6dfa-4826-8a1d-929ac5deadf4-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-7xp9j\" (UID: \"06ff5ccd-6dfa-4826-8a1d-929ac5deadf4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7xp9j" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.115542 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.124828 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-sk8ll"] Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.124871 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2glzg"] Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.136501 4986 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.139829 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06ff5ccd-6dfa-4826-8a1d-929ac5deadf4-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-7xp9j\" (UID: \"06ff5ccd-6dfa-4826-8a1d-929ac5deadf4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7xp9j" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.158716 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.165166 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/89c50427-14ae-409d-89d5-a56be0ff97d1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zzj7p\" (UID: \"89c50427-14ae-409d-89d5-a56be0ff97d1\") " pod="openshift-marketplace/marketplace-operator-79b997595-zzj7p" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.173544 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.193595 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.214604 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.234160 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 03 
12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.238604 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cdca175-2ec1-427b-8c68-99a065f9d5d7-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-85fg2\" (UID: \"2cdca175-2ec1-427b-8c68-99a065f9d5d7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-85fg2" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.255642 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.265565 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cdca175-2ec1-427b-8c68-99a065f9d5d7-config\") pod \"kube-controller-manager-operator-78b949d7b-85fg2\" (UID: \"2cdca175-2ec1-427b-8c68-99a065f9d5d7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-85fg2" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.275569 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.289424 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4cd4de5c-8c29-4f8e-bde8-1c21bd833d80-proxy-tls\") pod \"machine-config-controller-84d6567774-2tq6k\" (UID: \"4cd4de5c-8c29-4f8e-bde8-1c21bd833d80\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2tq6k" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.295011 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.314227 4986 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.326783 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a5484631-f1bd-49f7-a0bd-46e783e44095-srv-cert\") pod \"catalog-operator-68c6474976-n6hjp\" (UID: \"a5484631-f1bd-49f7-a0bd-46e783e44095\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n6hjp" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.336412 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.338716 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c246g"] Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.354673 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.357965 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a5484631-f1bd-49f7-a0bd-46e783e44095-profile-collector-cert\") pod \"catalog-operator-68c6474976-n6hjp\" (UID: \"a5484631-f1bd-49f7-a0bd-46e783e44095\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n6hjp" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.358330 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-wvzt8"] Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.359360 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/68f994ae-dec7-4be6-b4e7-7a1875b40f4f-profile-collector-cert\") pod 
\"olm-operator-6b444d44fb-575sd\" (UID: \"68f994ae-dec7-4be6-b4e7-7a1875b40f4f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-575sd" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.363173 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-xn84j"] Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.374378 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.393961 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.414129 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.428718 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/05a3b920-eb04-4864-81ac-924ba7c63d4e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-s6wq8\" (UID: \"05a3b920-eb04-4864-81ac-924ba7c63d4e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s6wq8" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.439588 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.449631 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-hhmft"] Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.450365 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4c7tt"] Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.452363 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-62rd9"] Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.454136 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.457040 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ptgv8"] Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.466687 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-npjwk"] Dec 03 12:58:16 crc kubenswrapper[4986]: E1203 12:58:16.476782 4986 configmap.go:193] Couldn't get configMap openshift-console-operator/trusted-ca: failed to sync configmap cache: timed out waiting for the condition Dec 03 12:58:16 crc kubenswrapper[4986]: E1203 12:58:16.476904 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/005be756-d1ff-452c-aa4e-e4df06f28839-trusted-ca podName:005be756-d1ff-452c-aa4e-e4df06f28839 nodeName:}" failed. No retries permitted until 2025-12-03 12:58:16.976876243 +0000 UTC m=+156.443307434 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/005be756-d1ff-452c-aa4e-e4df06f28839-trusted-ca") pod "console-operator-58897d9998-glvvh" (UID: "005be756-d1ff-452c-aa4e-e4df06f28839") : failed to sync configmap cache: timed out waiting for the condition Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.478138 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 03 12:58:16 crc kubenswrapper[4986]: E1203 12:58:16.481166 4986 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: failed to sync configmap cache: timed out waiting for the condition Dec 03 12:58:16 crc kubenswrapper[4986]: E1203 12:58:16.481219 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5825d5b3-a095-48d6-9365-d03d1faa63ca-image-import-ca podName:5825d5b3-a095-48d6-9365-d03d1faa63ca nodeName:}" failed. No retries permitted until 2025-12-03 12:58:16.981206256 +0000 UTC m=+156.447637677 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/5825d5b3-a095-48d6-9365-d03d1faa63ca-image-import-ca") pod "apiserver-76f77b778f-72n7g" (UID: "5825d5b3-a095-48d6-9365-d03d1faa63ca") : failed to sync configmap cache: timed out waiting for the condition Dec 03 12:58:16 crc kubenswrapper[4986]: E1203 12:58:16.482767 4986 projected.go:288] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 03 12:58:16 crc kubenswrapper[4986]: E1203 12:58:16.482767 4986 secret.go:188] Couldn't get secret openshift-console-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Dec 03 12:58:16 crc kubenswrapper[4986]: E1203 12:58:16.482862 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/005be756-d1ff-452c-aa4e-e4df06f28839-serving-cert podName:005be756-d1ff-452c-aa4e-e4df06f28839 nodeName:}" failed. No retries permitted until 2025-12-03 12:58:16.982841212 +0000 UTC m=+156.449272623 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/005be756-d1ff-452c-aa4e-e4df06f28839-serving-cert") pod "console-operator-58897d9998-glvvh" (UID: "005be756-d1ff-452c-aa4e-e4df06f28839") : failed to sync secret cache: timed out waiting for the condition Dec 03 12:58:16 crc kubenswrapper[4986]: E1203 12:58:16.482868 4986 configmap.go:193] Couldn't get configMap openshift-console-operator/console-operator-config: failed to sync configmap cache: timed out waiting for the condition Dec 03 12:58:16 crc kubenswrapper[4986]: E1203 12:58:16.483006 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/005be756-d1ff-452c-aa4e-e4df06f28839-config podName:005be756-d1ff-452c-aa4e-e4df06f28839 nodeName:}" failed. 
No retries permitted until 2025-12-03 12:58:16.982968885 +0000 UTC m=+156.449400277 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/005be756-d1ff-452c-aa4e-e4df06f28839-config") pod "console-operator-58897d9998-glvvh" (UID: "005be756-d1ff-452c-aa4e-e4df06f28839") : failed to sync configmap cache: timed out waiting for the condition Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.494420 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.499516 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/68f994ae-dec7-4be6-b4e7-7a1875b40f4f-srv-cert\") pod \"olm-operator-6b444d44fb-575sd\" (UID: \"68f994ae-dec7-4be6-b4e7-7a1875b40f4f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-575sd" Dec 03 12:58:16 crc kubenswrapper[4986]: W1203 12:58:16.510582 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcee77bd5_92e1_4458_bdf2_49912954144d.slice/crio-eba26e49839f2b0ad659c4407f354d6d2c5abe1bc21708a946d302442021d41e WatchSource:0}: Error finding container eba26e49839f2b0ad659c4407f354d6d2c5abe1bc21708a946d302442021d41e: Status 404 returned error can't find the container with id eba26e49839f2b0ad659c4407f354d6d2c5abe1bc21708a946d302442021d41e Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.514183 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.534006 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.553325 4986 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.573855 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.593421 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.614909 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.634258 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.652606 4986 request.go:700] Waited for 1.008314775s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.654880 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.674039 4986 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.693703 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.714621 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c246g" 
event={"ID":"1b4d169b-dd53-42aa-9b14-f9bd24e801e6","Type":"ContainerStarted","Data":"cc479625c4097ccb18e060eccc28182e899faa6c1c420892b0851610984d5d9e"} Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.716766 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-npjwk" event={"ID":"01d4de81-8b50-4231-912d-3a65797a9754","Type":"ContainerStarted","Data":"ccfefa315ff2d6812be0e1f814af932bef15b070c7eb3fcc742d7ad17bdf3865"} Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.717849 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.718293 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pjgnh" event={"ID":"3e7e7e0f-8493-47c5-859d-3e7046c7ddab","Type":"ContainerStarted","Data":"40fbde8651852178b15630b08466ede8f5f866d1cab8655211810a71af3a237e"} Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.719484 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-ptgv8" event={"ID":"0725188c-6c61-4369-8247-ffdda7e830e8","Type":"ContainerStarted","Data":"255fe330a4f361e0c59fbe19ee2be463a0f8ff38bfb19a24985f3c7c7d03a2ef"} Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.720440 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-62rd9" event={"ID":"5c9ab9a7-030c-4d45-8c75-950f457bb69c","Type":"ContainerStarted","Data":"a9af21ed83143e3ac01f8a3904561136e87005e5aa77d584cc8a3a8ea9160203"} Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.721254 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-md588" 
event={"ID":"3ae56075-fe7d-4f73-9cd9-ba58ff4c6bfb","Type":"ContainerStarted","Data":"3094dad7465e53f31047300633437f40747d7680f1526c01f626cfb73f860e2c"} Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.722397 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4c7tt" event={"ID":"cee77bd5-92e1-4458-bdf2-49912954144d","Type":"ContainerStarted","Data":"eba26e49839f2b0ad659c4407f354d6d2c5abe1bc21708a946d302442021d41e"} Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.723621 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"001e1de754f941322abd1d180a0048eaa40f52591f82fd1037cdcc4c68eda7de"} Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.724872 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-wvzt8" event={"ID":"d366e2f7-22ef-46e5-855f-0f26e6a9186c","Type":"ContainerStarted","Data":"ee091b9178d0834db570ae9bf5d41c851ab240acf758d28f9408abf9c94b16ea"} Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.725651 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-sk8ll" event={"ID":"a069c6b7-9e3b-40fd-b830-0d2a82fadd9a","Type":"ContainerStarted","Data":"98098d86768289e9c18d9532fc91ef320583cf5beb823415bd1b4310f79799f9"} Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.728157 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"c4575bea9b992504298d5e4b3266e1856d803f876219a64d250cbfc179e80467"} Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.728470 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.729506 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hhmft" event={"ID":"6a077eb6-dc4e-4b2c-88be-eb4fad075e7a","Type":"ContainerStarted","Data":"ffe126f2ce1e0467419535dd4e75ee6a39df00ead0dc1a622c98fc865654f84c"} Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.730462 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-xn84j" event={"ID":"2588cb3b-8139-4529-a6e1-c57532afdfa7","Type":"ContainerStarted","Data":"9abc94b1244ffd1b59075b67e37b072e43545697f9c7aa63bafa6515d1521648"} Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.732132 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vz29n" event={"ID":"54da68a4-65a4-488f-a029-783cf51bdc04","Type":"ContainerStarted","Data":"6223ecc2f877f3d1307d8889f91d2c26865dec5f8d72b4b0f0e237949c58c0de"} Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.733863 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.734066 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-2k592" event={"ID":"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88","Type":"ContainerStarted","Data":"c04830faa2b790bac276081c49f98734f272d64ecbd9fac160b26c2f16ad1470"} Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.755686 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.775496 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" 
Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.790497 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6645982e-cae8-46d2-8b4d-e5d2a01ad127-serving-cert\") pod \"service-ca-operator-777779d784-rwrf2\" (UID: \"6645982e-cae8-46d2-8b4d-e5d2a01ad127\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rwrf2" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.796449 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.798529 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6645982e-cae8-46d2-8b4d-e5d2a01ad127-config\") pod \"service-ca-operator-777779d784-rwrf2\" (UID: \"6645982e-cae8-46d2-8b4d-e5d2a01ad127\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rwrf2" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.814650 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.834236 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.840834 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7cc7f044-be33-41ca-b4af-bdf5a0ca066f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-vlhbw\" (UID: \"7cc7f044-be33-41ca-b4af-bdf5a0ca066f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vlhbw" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.854965 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 03 12:58:16 crc 
kubenswrapper[4986]: I1203 12:58:16.874538 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.894458 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.914326 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.934758 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.954886 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.974569 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.984754 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5825d5b3-a095-48d6-9365-d03d1faa63ca-image-import-ca\") pod \"apiserver-76f77b778f-72n7g\" (UID: \"5825d5b3-a095-48d6-9365-d03d1faa63ca\") " pod="openshift-apiserver/apiserver-76f77b778f-72n7g" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.984971 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/005be756-d1ff-452c-aa4e-e4df06f28839-config\") pod \"console-operator-58897d9998-glvvh\" (UID: \"005be756-d1ff-452c-aa4e-e4df06f28839\") " pod="openshift-console-operator/console-operator-58897d9998-glvvh" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.985031 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/005be756-d1ff-452c-aa4e-e4df06f28839-serving-cert\") pod \"console-operator-58897d9998-glvvh\" (UID: \"005be756-d1ff-452c-aa4e-e4df06f28839\") " pod="openshift-console-operator/console-operator-58897d9998-glvvh" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.985146 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/005be756-d1ff-452c-aa4e-e4df06f28839-trusted-ca\") pod \"console-operator-58897d9998-glvvh\" (UID: \"005be756-d1ff-452c-aa4e-e4df06f28839\") " pod="openshift-console-operator/console-operator-58897d9998-glvvh" Dec 03 12:58:16 crc kubenswrapper[4986]: I1203 12:58:16.998545 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.015099 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.035199 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.094189 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.114066 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.133227 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.152933 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 03 
12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.173885 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.193944 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.214535 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.233930 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.254553 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.274781 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.294885 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 03 12:58:17 crc kubenswrapper[4986]: E1203 12:58:17.303156 4986 projected.go:194] Error preparing data for projected volume kube-api-access-9nlqq for pod openshift-console-operator/console-operator-58897d9998-glvvh: failed to sync configmap cache: timed out waiting for the condition Dec 03 12:58:17 crc kubenswrapper[4986]: E1203 12:58:17.303266 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/005be756-d1ff-452c-aa4e-e4df06f28839-kube-api-access-9nlqq podName:005be756-d1ff-452c-aa4e-e4df06f28839 nodeName:}" failed. No retries permitted until 2025-12-03 12:58:17.803234257 +0000 UTC m=+157.269665478 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-9nlqq" (UniqueName: "kubernetes.io/projected/005be756-d1ff-452c-aa4e-e4df06f28839-kube-api-access-9nlqq") pod "console-operator-58897d9998-glvvh" (UID: "005be756-d1ff-452c-aa4e-e4df06f28839") : failed to sync configmap cache: timed out waiting for the condition Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.329783 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5zdm\" (UniqueName: \"kubernetes.io/projected/d22bd794-105c-4c41-b1c3-46723ed0cb79-kube-api-access-t5zdm\") pod \"csi-hostpathplugin-8hs4t\" (UID: \"d22bd794-105c-4c41-b1c3-46723ed0cb79\") " pod="hostpath-provisioner/csi-hostpathplugin-8hs4t" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.347111 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxkb8\" (UniqueName: \"kubernetes.io/projected/fe999f2d-8f13-4ea6-8afb-ff99ff665d91-kube-api-access-wxkb8\") pod \"etcd-operator-b45778765-d7f6t\" (UID: \"fe999f2d-8f13-4ea6-8afb-ff99ff665d91\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d7f6t" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.368153 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2cdca175-2ec1-427b-8c68-99a065f9d5d7-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-85fg2\" (UID: \"2cdca175-2ec1-427b-8c68-99a065f9d5d7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-85fg2" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.385756 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-85fg2" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.400655 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78pb6\" (UniqueName: \"kubernetes.io/projected/a5484631-f1bd-49f7-a0bd-46e783e44095-kube-api-access-78pb6\") pod \"catalog-operator-68c6474976-n6hjp\" (UID: \"a5484631-f1bd-49f7-a0bd-46e783e44095\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n6hjp" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.408422 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fx5vt\" (UniqueName: \"kubernetes.io/projected/06ff5ccd-6dfa-4826-8a1d-929ac5deadf4-kube-api-access-fx5vt\") pod \"kube-storage-version-migrator-operator-b67b599dd-7xp9j\" (UID: \"06ff5ccd-6dfa-4826-8a1d-929ac5deadf4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7xp9j" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.417593 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n6hjp" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.438819 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9fjf\" (UniqueName: \"kubernetes.io/projected/4cd4de5c-8c29-4f8e-bde8-1c21bd833d80-kube-api-access-t9fjf\") pod \"machine-config-controller-84d6567774-2tq6k\" (UID: \"4cd4de5c-8c29-4f8e-bde8-1c21bd833d80\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2tq6k" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.465011 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rdpr\" (UniqueName: \"kubernetes.io/projected/05a3b920-eb04-4864-81ac-924ba7c63d4e-kube-api-access-7rdpr\") pod \"control-plane-machine-set-operator-78cbb6b69f-s6wq8\" (UID: \"05a3b920-eb04-4864-81ac-924ba7c63d4e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s6wq8" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.474155 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4rzv\" (UniqueName: \"kubernetes.io/projected/6645982e-cae8-46d2-8b4d-e5d2a01ad127-kube-api-access-f4rzv\") pod \"service-ca-operator-777779d784-rwrf2\" (UID: \"6645982e-cae8-46d2-8b4d-e5d2a01ad127\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rwrf2" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.494266 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gg4x\" (UniqueName: \"kubernetes.io/projected/7cc7f044-be33-41ca-b4af-bdf5a0ca066f-kube-api-access-6gg4x\") pod \"multus-admission-controller-857f4d67dd-vlhbw\" (UID: \"7cc7f044-be33-41ca-b4af-bdf5a0ca066f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vlhbw" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.521566 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-lcvf6\" (UniqueName: \"kubernetes.io/projected/89c50427-14ae-409d-89d5-a56be0ff97d1-kube-api-access-lcvf6\") pod \"marketplace-operator-79b997595-zzj7p\" (UID: \"89c50427-14ae-409d-89d5-a56be0ff97d1\") " pod="openshift-marketplace/marketplace-operator-79b997595-zzj7p" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.542711 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcq64\" (UniqueName: \"kubernetes.io/projected/68f994ae-dec7-4be6-b4e7-7a1875b40f4f-kube-api-access-fcq64\") pod \"olm-operator-6b444d44fb-575sd\" (UID: \"68f994ae-dec7-4be6-b4e7-7a1875b40f4f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-575sd" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.549222 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqnrb\" (UniqueName: \"kubernetes.io/projected/d68d65c4-d762-426b-84bf-d1e05738d0ee-kube-api-access-zqnrb\") pod \"dns-operator-744455d44c-jwlmm\" (UID: \"d68d65c4-d762-426b-84bf-d1e05738d0ee\") " pod="openshift-dns-operator/dns-operator-744455d44c-jwlmm" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.563871 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-jwlmm" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.565465 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-8hs4t" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.569152 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/03a67bcc-6366-4e73-a3d9-39dcdd4f2f82-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kx8xd\" (UID: \"03a67bcc-6366-4e73-a3d9-39dcdd4f2f82\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kx8xd" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.575639 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kx8xd" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.593912 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb4m9\" (UniqueName: \"kubernetes.io/projected/2ad8791b-88af-4c86-8ed4-999a3357c3b7-kube-api-access-tb4m9\") pod \"migrator-59844c95c7-dqctr\" (UID: \"2ad8791b-88af-4c86-8ed4-999a3357c3b7\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dqctr" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.593954 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb4m9\" (UniqueName: \"kubernetes.io/projected/2ad8791b-88af-4c86-8ed4-999a3357c3b7-kube-api-access-tb4m9\") pod \"migrator-59844c95c7-dqctr\" (UID: \"2ad8791b-88af-4c86-8ed4-999a3357c3b7\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dqctr" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.594249 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb4m9\" (UniqueName: \"kubernetes.io/projected/2ad8791b-88af-4c86-8ed4-999a3357c3b7-kube-api-access-tb4m9\") pod \"migrator-59844c95c7-dqctr\" (UID: \"2ad8791b-88af-4c86-8ed4-999a3357c3b7\") " 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dqctr" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.596542 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rwrf2" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.607038 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-85fg2"] Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.607199 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-d7f6t" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.610559 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-vlhbw" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.611466 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/869ead05-05ad-4705-8193-0f5cf6987257-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-6k9lf\" (UID: \"869ead05-05ad-4705-8193-0f5cf6987257\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6k9lf" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.613440 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6k9lf" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.613708 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 03 12:58:17 crc kubenswrapper[4986]: W1203 12:58:17.614522 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2cdca175_2ec1_427b_8c68_99a065f9d5d7.slice/crio-8dd86b4cc2b19bc21bdac607776968e520765684b21ab6754c610f2fef60cd76 WatchSource:0}: Error finding container 8dd86b4cc2b19bc21bdac607776968e520765684b21ab6754c610f2fef60cd76: Status 404 returned error can't find the container with id 8dd86b4cc2b19bc21bdac607776968e520765684b21ab6754c610f2fef60cd76 Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.627312 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n6hjp"] Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.641000 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 03 12:58:17 crc kubenswrapper[4986]: W1203 12:58:17.641578 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5484631_f1bd_49f7_a0bd_46e783e44095.slice/crio-9d3f6a66427dd58daf94f0c089aa376881f75460fbaa72338c38d02da017b93b WatchSource:0}: Error finding container 9d3f6a66427dd58daf94f0c089aa376881f75460fbaa72338c38d02da017b93b: Status 404 returned error can't find the container with id 9d3f6a66427dd58daf94f0c089aa376881f75460fbaa72338c38d02da017b93b Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.648902 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/005be756-d1ff-452c-aa4e-e4df06f28839-trusted-ca\") pod 
\"console-operator-58897d9998-glvvh\" (UID: \"005be756-d1ff-452c-aa4e-e4df06f28839\") " pod="openshift-console-operator/console-operator-58897d9998-glvvh" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.650110 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7xp9j" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.655534 4986 request.go:700] Waited for 1.55916256s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dimage-import-ca&limit=500&resourceVersion=0 Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.661696 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.662899 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zzj7p" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.665634 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5825d5b3-a095-48d6-9365-d03d1faa63ca-image-import-ca\") pod \"apiserver-76f77b778f-72n7g\" (UID: \"5825d5b3-a095-48d6-9365-d03d1faa63ca\") " pod="openshift-apiserver/apiserver-76f77b778f-72n7g" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.671494 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dqctr" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.673952 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.681707 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/005be756-d1ff-452c-aa4e-e4df06f28839-serving-cert\") pod \"console-operator-58897d9998-glvvh\" (UID: \"005be756-d1ff-452c-aa4e-e4df06f28839\") " pod="openshift-console-operator/console-operator-58897d9998-glvvh" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.681986 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-72n7g" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.695021 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.700894 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2tq6k" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.715035 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.716010 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/005be756-d1ff-452c-aa4e-e4df06f28839-config\") pod \"console-operator-58897d9998-glvvh\" (UID: \"005be756-d1ff-452c-aa4e-e4df06f28839\") " pod="openshift-console-operator/console-operator-58897d9998-glvvh" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.731258 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s6wq8" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.745253 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vz29n" event={"ID":"54da68a4-65a4-488f-a029-783cf51bdc04","Type":"ContainerStarted","Data":"1f1af5925b9c8f699068a1a9d3162267cc61ed29a23c82aaa37e61bdc38a2c31"} Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.760726 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2glzg" event={"ID":"f51b3907-2cbf-4b80-a800-411404195052","Type":"ContainerStarted","Data":"3c4112bbfff98be1941e2e2b15ebbe5e82cd5a97f4b0b1b448909a40a95128f7"} Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.763761 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-85fg2" event={"ID":"2cdca175-2ec1-427b-8c68-99a065f9d5d7","Type":"ContainerStarted","Data":"8dd86b4cc2b19bc21bdac607776968e520765684b21ab6754c610f2fef60cd76"} Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.779093 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-575sd" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.788990 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n6hjp" event={"ID":"a5484631-f1bd-49f7-a0bd-46e783e44095","Type":"ContainerStarted","Data":"9d3f6a66427dd58daf94f0c089aa376881f75460fbaa72338c38d02da017b93b"} Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.797299 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7h7n\" (UniqueName: \"kubernetes.io/projected/69bf978d-34fa-4cfa-8c78-7d0bdfa3298f-kube-api-access-x7h7n\") pod \"ingress-operator-5b745b69d9-hjt88\" (UID: \"69bf978d-34fa-4cfa-8c78-7d0bdfa3298f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hjt88" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.797361 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d4e187c5-28e2-4881-8f59-214d93c767b1-registry-tls\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.797420 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vd7x\" (UniqueName: \"kubernetes.io/projected/051204e8-a3a7-47ae-8170-55f5776cfa1e-kube-api-access-4vd7x\") pod \"package-server-manager-789f6589d5-kjctk\" (UID: \"051204e8-a3a7-47ae-8170-55f5776cfa1e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kjctk" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.797454 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.797474 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d4e187c5-28e2-4881-8f59-214d93c767b1-bound-sa-token\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.797498 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lccjk\" (UniqueName: \"kubernetes.io/projected/872b07f2-d557-4c06-a432-18b9a46fe6cc-kube-api-access-lccjk\") pod \"router-default-5444994796-9wnfz\" (UID: \"872b07f2-d557-4c06-a432-18b9a46fe6cc\") " pod="openshift-ingress/router-default-5444994796-9wnfz" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.797536 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d4e187c5-28e2-4881-8f59-214d93c767b1-registry-certificates\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.797560 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/872b07f2-d557-4c06-a432-18b9a46fe6cc-stats-auth\") pod \"router-default-5444994796-9wnfz\" (UID: \"872b07f2-d557-4c06-a432-18b9a46fe6cc\") " pod="openshift-ingress/router-default-5444994796-9wnfz" Dec 03 12:58:17 crc 
kubenswrapper[4986]: I1203 12:58:17.797583 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0c102da0-9cf4-4521-97fe-3153aa47a43e-secret-volume\") pod \"collect-profiles-29412765-29wpf\" (UID: \"0c102da0-9cf4-4521-97fe-3153aa47a43e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412765-29wpf" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.797605 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/69bf978d-34fa-4cfa-8c78-7d0bdfa3298f-metrics-tls\") pod \"ingress-operator-5b745b69d9-hjt88\" (UID: \"69bf978d-34fa-4cfa-8c78-7d0bdfa3298f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hjt88" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.797623 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/69bf978d-34fa-4cfa-8c78-7d0bdfa3298f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-hjt88\" (UID: \"69bf978d-34fa-4cfa-8c78-7d0bdfa3298f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hjt88" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.797649 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c102da0-9cf4-4521-97fe-3153aa47a43e-config-volume\") pod \"collect-profiles-29412765-29wpf\" (UID: \"0c102da0-9cf4-4521-97fe-3153aa47a43e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412765-29wpf" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.797685 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngnr6\" (UniqueName: 
\"kubernetes.io/projected/0c102da0-9cf4-4521-97fe-3153aa47a43e-kube-api-access-ngnr6\") pod \"collect-profiles-29412765-29wpf\" (UID: \"0c102da0-9cf4-4521-97fe-3153aa47a43e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412765-29wpf" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.797703 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/872b07f2-d557-4c06-a432-18b9a46fe6cc-service-ca-bundle\") pod \"router-default-5444994796-9wnfz\" (UID: \"872b07f2-d557-4c06-a432-18b9a46fe6cc\") " pod="openshift-ingress/router-default-5444994796-9wnfz" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.797728 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp42m\" (UniqueName: \"kubernetes.io/projected/d4e187c5-28e2-4881-8f59-214d93c767b1-kube-api-access-mp42m\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.797751 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/69bf978d-34fa-4cfa-8c78-7d0bdfa3298f-trusted-ca\") pod \"ingress-operator-5b745b69d9-hjt88\" (UID: \"69bf978d-34fa-4cfa-8c78-7d0bdfa3298f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hjt88" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.797771 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/051204e8-a3a7-47ae-8170-55f5776cfa1e-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-kjctk\" (UID: \"051204e8-a3a7-47ae-8170-55f5776cfa1e\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kjctk" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.798776 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d4e187c5-28e2-4881-8f59-214d93c767b1-ca-trust-extracted\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:17 crc kubenswrapper[4986]: E1203 12:58:17.798836 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:18.298819327 +0000 UTC m=+157.765250518 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.798864 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/872b07f2-d557-4c06-a432-18b9a46fe6cc-metrics-certs\") pod \"router-default-5444994796-9wnfz\" (UID: \"872b07f2-d557-4c06-a432-18b9a46fe6cc\") " pod="openshift-ingress/router-default-5444994796-9wnfz" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.798904 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/d4e187c5-28e2-4881-8f59-214d93c767b1-installation-pull-secrets\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.798922 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/872b07f2-d557-4c06-a432-18b9a46fe6cc-default-certificate\") pod \"router-default-5444994796-9wnfz\" (UID: \"872b07f2-d557-4c06-a432-18b9a46fe6cc\") " pod="openshift-ingress/router-default-5444994796-9wnfz" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.798967 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d4e187c5-28e2-4881-8f59-214d93c767b1-trusted-ca\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.821457 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-jwlmm"] Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.909647 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.910061 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e6f57929-5eab-4a61-88dd-e91ca735bbd5-images\") pod \"machine-config-operator-74547568cd-qxpz7\" (UID: 
\"e6f57929-5eab-4a61-88dd-e91ca735bbd5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qxpz7" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.910115 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d4e187c5-28e2-4881-8f59-214d93c767b1-registry-certificates\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.910156 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/872b07f2-d557-4c06-a432-18b9a46fe6cc-stats-auth\") pod \"router-default-5444994796-9wnfz\" (UID: \"872b07f2-d557-4c06-a432-18b9a46fe6cc\") " pod="openshift-ingress/router-default-5444994796-9wnfz" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.910183 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0c102da0-9cf4-4521-97fe-3153aa47a43e-secret-volume\") pod \"collect-profiles-29412765-29wpf\" (UID: \"0c102da0-9cf4-4521-97fe-3153aa47a43e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412765-29wpf" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.910223 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/69bf978d-34fa-4cfa-8c78-7d0bdfa3298f-metrics-tls\") pod \"ingress-operator-5b745b69d9-hjt88\" (UID: \"69bf978d-34fa-4cfa-8c78-7d0bdfa3298f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hjt88" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.910239 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/69bf978d-34fa-4cfa-8c78-7d0bdfa3298f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-hjt88\" (UID: \"69bf978d-34fa-4cfa-8c78-7d0bdfa3298f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hjt88" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.910315 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dd374c2f-10af-47c4-b550-d5c03c39ca45-metrics-tls\") pod \"dns-default-7wtlt\" (UID: \"dd374c2f-10af-47c4-b550-d5c03c39ca45\") " pod="openshift-dns/dns-default-7wtlt" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.910342 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c102da0-9cf4-4521-97fe-3153aa47a43e-config-volume\") pod \"collect-profiles-29412765-29wpf\" (UID: \"0c102da0-9cf4-4521-97fe-3153aa47a43e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412765-29wpf" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.913063 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngnr6\" (UniqueName: \"kubernetes.io/projected/0c102da0-9cf4-4521-97fe-3153aa47a43e-kube-api-access-ngnr6\") pod \"collect-profiles-29412765-29wpf\" (UID: \"0c102da0-9cf4-4521-97fe-3153aa47a43e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412765-29wpf" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.913084 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/872b07f2-d557-4c06-a432-18b9a46fe6cc-service-ca-bundle\") pod \"router-default-5444994796-9wnfz\" (UID: \"872b07f2-d557-4c06-a432-18b9a46fe6cc\") " pod="openshift-ingress/router-default-5444994796-9wnfz" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.913103 4986 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dd374c2f-10af-47c4-b550-d5c03c39ca45-config-volume\") pod \"dns-default-7wtlt\" (UID: \"dd374c2f-10af-47c4-b550-d5c03c39ca45\") " pod="openshift-dns/dns-default-7wtlt" Dec 03 12:58:17 crc kubenswrapper[4986]: E1203 12:58:17.913150 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:18.413132404 +0000 UTC m=+157.879563595 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.913207 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mp42m\" (UniqueName: \"kubernetes.io/projected/d4e187c5-28e2-4881-8f59-214d93c767b1-kube-api-access-mp42m\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.913227 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/71392224-bf56-4c1b-9f0a-bcecd393660d-webhook-cert\") pod \"packageserver-d55dfcdfc-szpmg\" (UID: \"71392224-bf56-4c1b-9f0a-bcecd393660d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-szpmg" Dec 
03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.913244 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbvkk\" (UniqueName: \"kubernetes.io/projected/30903761-66a3-4874-b464-8e1127bd8ed5-kube-api-access-hbvkk\") pod \"machine-config-server-r4wn7\" (UID: \"30903761-66a3-4874-b464-8e1127bd8ed5\") " pod="openshift-machine-config-operator/machine-config-server-r4wn7" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.913262 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/69bf978d-34fa-4cfa-8c78-7d0bdfa3298f-trusted-ca\") pod \"ingress-operator-5b745b69d9-hjt88\" (UID: \"69bf978d-34fa-4cfa-8c78-7d0bdfa3298f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hjt88" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.913297 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/051204e8-a3a7-47ae-8170-55f5776cfa1e-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-kjctk\" (UID: \"051204e8-a3a7-47ae-8170-55f5776cfa1e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kjctk" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.913314 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/30903761-66a3-4874-b464-8e1127bd8ed5-certs\") pod \"machine-config-server-r4wn7\" (UID: \"30903761-66a3-4874-b464-8e1127bd8ed5\") " pod="openshift-machine-config-operator/machine-config-server-r4wn7" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.913340 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/71392224-bf56-4c1b-9f0a-bcecd393660d-apiservice-cert\") pod \"packageserver-d55dfcdfc-szpmg\" (UID: \"71392224-bf56-4c1b-9f0a-bcecd393660d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-szpmg" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.913380 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nlqq\" (UniqueName: \"kubernetes.io/projected/005be756-d1ff-452c-aa4e-e4df06f28839-kube-api-access-9nlqq\") pod \"console-operator-58897d9998-glvvh\" (UID: \"005be756-d1ff-452c-aa4e-e4df06f28839\") " pod="openshift-console-operator/console-operator-58897d9998-glvvh" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.913425 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e6f57929-5eab-4a61-88dd-e91ca735bbd5-proxy-tls\") pod \"machine-config-operator-74547568cd-qxpz7\" (UID: \"e6f57929-5eab-4a61-88dd-e91ca735bbd5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qxpz7" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.913477 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d4e187c5-28e2-4881-8f59-214d93c767b1-ca-trust-extracted\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.913502 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f465e01f-8cdd-424e-a617-ed0e4a4ae140-signing-cabundle\") pod \"service-ca-9c57cc56f-rskz9\" (UID: \"f465e01f-8cdd-424e-a617-ed0e4a4ae140\") " pod="openshift-service-ca/service-ca-9c57cc56f-rskz9" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 
12:58:17.913608 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/872b07f2-d557-4c06-a432-18b9a46fe6cc-metrics-certs\") pod \"router-default-5444994796-9wnfz\" (UID: \"872b07f2-d557-4c06-a432-18b9a46fe6cc\") " pod="openshift-ingress/router-default-5444994796-9wnfz" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.913635 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d4e187c5-28e2-4881-8f59-214d93c767b1-installation-pull-secrets\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.913650 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/30903761-66a3-4874-b464-8e1127bd8ed5-node-bootstrap-token\") pod \"machine-config-server-r4wn7\" (UID: \"30903761-66a3-4874-b464-8e1127bd8ed5\") " pod="openshift-machine-config-operator/machine-config-server-r4wn7" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.913666 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/872b07f2-d557-4c06-a432-18b9a46fe6cc-default-certificate\") pod \"router-default-5444994796-9wnfz\" (UID: \"872b07f2-d557-4c06-a432-18b9a46fe6cc\") " pod="openshift-ingress/router-default-5444994796-9wnfz" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.913686 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqpvp\" (UniqueName: \"kubernetes.io/projected/f465e01f-8cdd-424e-a617-ed0e4a4ae140-kube-api-access-rqpvp\") pod \"service-ca-9c57cc56f-rskz9\" (UID: 
\"f465e01f-8cdd-424e-a617-ed0e4a4ae140\") " pod="openshift-service-ca/service-ca-9c57cc56f-rskz9" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.913716 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d4e187c5-28e2-4881-8f59-214d93c767b1-trusted-ca\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.913759 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f23fc85f-9e00-4046-8c6b-b3f10deb822b-cert\") pod \"ingress-canary-k58fj\" (UID: \"f23fc85f-9e00-4046-8c6b-b3f10deb822b\") " pod="openshift-ingress-canary/ingress-canary-k58fj" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.913783 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e6f57929-5eab-4a61-88dd-e91ca735bbd5-auth-proxy-config\") pod \"machine-config-operator-74547568cd-qxpz7\" (UID: \"e6f57929-5eab-4a61-88dd-e91ca735bbd5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qxpz7" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.913800 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7h7n\" (UniqueName: \"kubernetes.io/projected/69bf978d-34fa-4cfa-8c78-7d0bdfa3298f-kube-api-access-x7h7n\") pod \"ingress-operator-5b745b69d9-hjt88\" (UID: \"69bf978d-34fa-4cfa-8c78-7d0bdfa3298f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hjt88" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.913865 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdw2t\" (UniqueName: 
\"kubernetes.io/projected/dd374c2f-10af-47c4-b550-d5c03c39ca45-kube-api-access-hdw2t\") pod \"dns-default-7wtlt\" (UID: \"dd374c2f-10af-47c4-b550-d5c03c39ca45\") " pod="openshift-dns/dns-default-7wtlt" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.913919 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jwt7\" (UniqueName: \"kubernetes.io/projected/71392224-bf56-4c1b-9f0a-bcecd393660d-kube-api-access-7jwt7\") pod \"packageserver-d55dfcdfc-szpmg\" (UID: \"71392224-bf56-4c1b-9f0a-bcecd393660d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-szpmg" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.913963 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d4e187c5-28e2-4881-8f59-214d93c767b1-registry-tls\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.913978 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/71392224-bf56-4c1b-9f0a-bcecd393660d-tmpfs\") pod \"packageserver-d55dfcdfc-szpmg\" (UID: \"71392224-bf56-4c1b-9f0a-bcecd393660d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-szpmg" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.915573 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c102da0-9cf4-4521-97fe-3153aa47a43e-config-volume\") pod \"collect-profiles-29412765-29wpf\" (UID: \"0c102da0-9cf4-4521-97fe-3153aa47a43e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412765-29wpf" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.915936 4986 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/872b07f2-d557-4c06-a432-18b9a46fe6cc-service-ca-bundle\") pod \"router-default-5444994796-9wnfz\" (UID: \"872b07f2-d557-4c06-a432-18b9a46fe6cc\") " pod="openshift-ingress/router-default-5444994796-9wnfz" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.918875 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0c102da0-9cf4-4521-97fe-3153aa47a43e-secret-volume\") pod \"collect-profiles-29412765-29wpf\" (UID: \"0c102da0-9cf4-4521-97fe-3153aa47a43e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412765-29wpf" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.958884 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/69bf978d-34fa-4cfa-8c78-7d0bdfa3298f-trusted-ca\") pod \"ingress-operator-5b745b69d9-hjt88\" (UID: \"69bf978d-34fa-4cfa-8c78-7d0bdfa3298f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hjt88" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.959056 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/69bf978d-34fa-4cfa-8c78-7d0bdfa3298f-metrics-tls\") pod \"ingress-operator-5b745b69d9-hjt88\" (UID: \"69bf978d-34fa-4cfa-8c78-7d0bdfa3298f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hjt88" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.959702 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/872b07f2-d557-4c06-a432-18b9a46fe6cc-default-certificate\") pod \"router-default-5444994796-9wnfz\" (UID: \"872b07f2-d557-4c06-a432-18b9a46fe6cc\") " pod="openshift-ingress/router-default-5444994796-9wnfz" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.960327 4986 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d4e187c5-28e2-4881-8f59-214d93c767b1-registry-tls\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.964766 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vd7x\" (UniqueName: \"kubernetes.io/projected/051204e8-a3a7-47ae-8170-55f5776cfa1e-kube-api-access-4vd7x\") pod \"package-server-manager-789f6589d5-kjctk\" (UID: \"051204e8-a3a7-47ae-8170-55f5776cfa1e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kjctk" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.966618 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d4e187c5-28e2-4881-8f59-214d93c767b1-ca-trust-extracted\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.969666 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/051204e8-a3a7-47ae-8170-55f5776cfa1e-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-kjctk\" (UID: \"051204e8-a3a7-47ae-8170-55f5776cfa1e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kjctk" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.974008 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nlqq\" (UniqueName: \"kubernetes.io/projected/005be756-d1ff-452c-aa4e-e4df06f28839-kube-api-access-9nlqq\") pod \"console-operator-58897d9998-glvvh\" (UID: 
\"005be756-d1ff-452c-aa4e-e4df06f28839\") " pod="openshift-console-operator/console-operator-58897d9998-glvvh" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.974530 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d4e187c5-28e2-4881-8f59-214d93c767b1-trusted-ca\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.975199 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbmk6\" (UniqueName: \"kubernetes.io/projected/f23fc85f-9e00-4046-8c6b-b3f10deb822b-kube-api-access-tbmk6\") pod \"ingress-canary-k58fj\" (UID: \"f23fc85f-9e00-4046-8c6b-b3f10deb822b\") " pod="openshift-ingress-canary/ingress-canary-k58fj" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.976712 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rxml\" (UniqueName: \"kubernetes.io/projected/e6f57929-5eab-4a61-88dd-e91ca735bbd5-kube-api-access-2rxml\") pod \"machine-config-operator-74547568cd-qxpz7\" (UID: \"e6f57929-5eab-4a61-88dd-e91ca735bbd5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qxpz7" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.976829 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f465e01f-8cdd-424e-a617-ed0e4a4ae140-signing-key\") pod \"service-ca-9c57cc56f-rskz9\" (UID: \"f465e01f-8cdd-424e-a617-ed0e4a4ae140\") " pod="openshift-service-ca/service-ca-9c57cc56f-rskz9" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.976873 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lccjk\" (UniqueName: 
\"kubernetes.io/projected/872b07f2-d557-4c06-a432-18b9a46fe6cc-kube-api-access-lccjk\") pod \"router-default-5444994796-9wnfz\" (UID: \"872b07f2-d557-4c06-a432-18b9a46fe6cc\") " pod="openshift-ingress/router-default-5444994796-9wnfz" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.976951 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.976981 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d4e187c5-28e2-4881-8f59-214d93c767b1-bound-sa-token\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.977150 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/872b07f2-d557-4c06-a432-18b9a46fe6cc-metrics-certs\") pod \"router-default-5444994796-9wnfz\" (UID: \"872b07f2-d557-4c06-a432-18b9a46fe6cc\") " pod="openshift-ingress/router-default-5444994796-9wnfz" Dec 03 12:58:17 crc kubenswrapper[4986]: E1203 12:58:17.977627 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:18.477611079 +0000 UTC m=+157.944042270 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.983887 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d4e187c5-28e2-4881-8f59-214d93c767b1-registry-certificates\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.997402 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/872b07f2-d557-4c06-a432-18b9a46fe6cc-stats-auth\") pod \"router-default-5444994796-9wnfz\" (UID: \"872b07f2-d557-4c06-a432-18b9a46fe6cc\") " pod="openshift-ingress/router-default-5444994796-9wnfz" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.998244 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7h7n\" (UniqueName: \"kubernetes.io/projected/69bf978d-34fa-4cfa-8c78-7d0bdfa3298f-kube-api-access-x7h7n\") pod \"ingress-operator-5b745b69d9-hjt88\" (UID: \"69bf978d-34fa-4cfa-8c78-7d0bdfa3298f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hjt88" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.998325 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d4e187c5-28e2-4881-8f59-214d93c767b1-installation-pull-secrets\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: 
\"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:17 crc kubenswrapper[4986]: I1203 12:58:17.998638 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngnr6\" (UniqueName: \"kubernetes.io/projected/0c102da0-9cf4-4521-97fe-3153aa47a43e-kube-api-access-ngnr6\") pod \"collect-profiles-29412765-29wpf\" (UID: \"0c102da0-9cf4-4521-97fe-3153aa47a43e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412765-29wpf" Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.008765 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mp42m\" (UniqueName: \"kubernetes.io/projected/d4e187c5-28e2-4881-8f59-214d93c767b1-kube-api-access-mp42m\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.029162 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vd7x\" (UniqueName: \"kubernetes.io/projected/051204e8-a3a7-47ae-8170-55f5776cfa1e-kube-api-access-4vd7x\") pod \"package-server-manager-789f6589d5-kjctk\" (UID: \"051204e8-a3a7-47ae-8170-55f5776cfa1e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kjctk" Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.030404 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/69bf978d-34fa-4cfa-8c78-7d0bdfa3298f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-hjt88\" (UID: \"69bf978d-34fa-4cfa-8c78-7d0bdfa3298f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hjt88" Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.066357 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/d4e187c5-28e2-4881-8f59-214d93c767b1-bound-sa-token\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.066621 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412765-29wpf" Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.077842 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kx8xd"] Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.078716 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.078936 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dd374c2f-10af-47c4-b550-d5c03c39ca45-config-volume\") pod \"dns-default-7wtlt\" (UID: \"dd374c2f-10af-47c4-b550-d5c03c39ca45\") " pod="openshift-dns/dns-default-7wtlt" Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.078961 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/71392224-bf56-4c1b-9f0a-bcecd393660d-webhook-cert\") pod \"packageserver-d55dfcdfc-szpmg\" (UID: \"71392224-bf56-4c1b-9f0a-bcecd393660d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-szpmg" Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.078977 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-hbvkk\" (UniqueName: \"kubernetes.io/projected/30903761-66a3-4874-b464-8e1127bd8ed5-kube-api-access-hbvkk\") pod \"machine-config-server-r4wn7\" (UID: \"30903761-66a3-4874-b464-8e1127bd8ed5\") " pod="openshift-machine-config-operator/machine-config-server-r4wn7" Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.078994 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/30903761-66a3-4874-b464-8e1127bd8ed5-certs\") pod \"machine-config-server-r4wn7\" (UID: \"30903761-66a3-4874-b464-8e1127bd8ed5\") " pod="openshift-machine-config-operator/machine-config-server-r4wn7" Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.079019 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/71392224-bf56-4c1b-9f0a-bcecd393660d-apiservice-cert\") pod \"packageserver-d55dfcdfc-szpmg\" (UID: \"71392224-bf56-4c1b-9f0a-bcecd393660d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-szpmg" Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.079035 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e6f57929-5eab-4a61-88dd-e91ca735bbd5-proxy-tls\") pod \"machine-config-operator-74547568cd-qxpz7\" (UID: \"e6f57929-5eab-4a61-88dd-e91ca735bbd5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qxpz7" Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.079050 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f465e01f-8cdd-424e-a617-ed0e4a4ae140-signing-cabundle\") pod \"service-ca-9c57cc56f-rskz9\" (UID: \"f465e01f-8cdd-424e-a617-ed0e4a4ae140\") " pod="openshift-service-ca/service-ca-9c57cc56f-rskz9" Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.079071 4986 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/30903761-66a3-4874-b464-8e1127bd8ed5-node-bootstrap-token\") pod \"machine-config-server-r4wn7\" (UID: \"30903761-66a3-4874-b464-8e1127bd8ed5\") " pod="openshift-machine-config-operator/machine-config-server-r4wn7" Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.079088 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqpvp\" (UniqueName: \"kubernetes.io/projected/f465e01f-8cdd-424e-a617-ed0e4a4ae140-kube-api-access-rqpvp\") pod \"service-ca-9c57cc56f-rskz9\" (UID: \"f465e01f-8cdd-424e-a617-ed0e4a4ae140\") " pod="openshift-service-ca/service-ca-9c57cc56f-rskz9" Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.079111 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f23fc85f-9e00-4046-8c6b-b3f10deb822b-cert\") pod \"ingress-canary-k58fj\" (UID: \"f23fc85f-9e00-4046-8c6b-b3f10deb822b\") " pod="openshift-ingress-canary/ingress-canary-k58fj" Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.079130 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e6f57929-5eab-4a61-88dd-e91ca735bbd5-auth-proxy-config\") pod \"machine-config-operator-74547568cd-qxpz7\" (UID: \"e6f57929-5eab-4a61-88dd-e91ca735bbd5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qxpz7" Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.079159 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdw2t\" (UniqueName: \"kubernetes.io/projected/dd374c2f-10af-47c4-b550-d5c03c39ca45-kube-api-access-hdw2t\") pod \"dns-default-7wtlt\" (UID: \"dd374c2f-10af-47c4-b550-d5c03c39ca45\") " pod="openshift-dns/dns-default-7wtlt" Dec 03 12:58:18 crc kubenswrapper[4986]: 
I1203 12:58:18.079191 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jwt7\" (UniqueName: \"kubernetes.io/projected/71392224-bf56-4c1b-9f0a-bcecd393660d-kube-api-access-7jwt7\") pod \"packageserver-d55dfcdfc-szpmg\" (UID: \"71392224-bf56-4c1b-9f0a-bcecd393660d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-szpmg" Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.079210 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/71392224-bf56-4c1b-9f0a-bcecd393660d-tmpfs\") pod \"packageserver-d55dfcdfc-szpmg\" (UID: \"71392224-bf56-4c1b-9f0a-bcecd393660d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-szpmg" Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.079247 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbmk6\" (UniqueName: \"kubernetes.io/projected/f23fc85f-9e00-4046-8c6b-b3f10deb822b-kube-api-access-tbmk6\") pod \"ingress-canary-k58fj\" (UID: \"f23fc85f-9e00-4046-8c6b-b3f10deb822b\") " pod="openshift-ingress-canary/ingress-canary-k58fj" Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.079264 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rxml\" (UniqueName: \"kubernetes.io/projected/e6f57929-5eab-4a61-88dd-e91ca735bbd5-kube-api-access-2rxml\") pod \"machine-config-operator-74547568cd-qxpz7\" (UID: \"e6f57929-5eab-4a61-88dd-e91ca735bbd5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qxpz7" Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.079294 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f465e01f-8cdd-424e-a617-ed0e4a4ae140-signing-key\") pod \"service-ca-9c57cc56f-rskz9\" (UID: \"f465e01f-8cdd-424e-a617-ed0e4a4ae140\") " 
pod="openshift-service-ca/service-ca-9c57cc56f-rskz9" Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.079333 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e6f57929-5eab-4a61-88dd-e91ca735bbd5-images\") pod \"machine-config-operator-74547568cd-qxpz7\" (UID: \"e6f57929-5eab-4a61-88dd-e91ca735bbd5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qxpz7" Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.079371 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dd374c2f-10af-47c4-b550-d5c03c39ca45-metrics-tls\") pod \"dns-default-7wtlt\" (UID: \"dd374c2f-10af-47c4-b550-d5c03c39ca45\") " pod="openshift-dns/dns-default-7wtlt" Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.080906 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-8hs4t"] Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.081667 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e6f57929-5eab-4a61-88dd-e91ca735bbd5-auth-proxy-config\") pod \"machine-config-operator-74547568cd-qxpz7\" (UID: \"e6f57929-5eab-4a61-88dd-e91ca735bbd5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qxpz7" Dec 03 12:58:18 crc kubenswrapper[4986]: E1203 12:58:18.081866 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:18.581741387 +0000 UTC m=+158.048172638 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.082534 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dd374c2f-10af-47c4-b550-d5c03c39ca45-config-volume\") pod \"dns-default-7wtlt\" (UID: \"dd374c2f-10af-47c4-b550-d5c03c39ca45\") " pod="openshift-dns/dns-default-7wtlt" Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.088968 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f465e01f-8cdd-424e-a617-ed0e4a4ae140-signing-cabundle\") pod \"service-ca-9c57cc56f-rskz9\" (UID: \"f465e01f-8cdd-424e-a617-ed0e4a4ae140\") " pod="openshift-service-ca/service-ca-9c57cc56f-rskz9" Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.089216 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e6f57929-5eab-4a61-88dd-e91ca735bbd5-proxy-tls\") pod \"machine-config-operator-74547568cd-qxpz7\" (UID: \"e6f57929-5eab-4a61-88dd-e91ca735bbd5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qxpz7" Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.089533 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/71392224-bf56-4c1b-9f0a-bcecd393660d-tmpfs\") pod \"packageserver-d55dfcdfc-szpmg\" (UID: \"71392224-bf56-4c1b-9f0a-bcecd393660d\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-szpmg" Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.089904 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lccjk\" (UniqueName: \"kubernetes.io/projected/872b07f2-d557-4c06-a432-18b9a46fe6cc-kube-api-access-lccjk\") pod \"router-default-5444994796-9wnfz\" (UID: \"872b07f2-d557-4c06-a432-18b9a46fe6cc\") " pod="openshift-ingress/router-default-5444994796-9wnfz" Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.092945 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e6f57929-5eab-4a61-88dd-e91ca735bbd5-images\") pod \"machine-config-operator-74547568cd-qxpz7\" (UID: \"e6f57929-5eab-4a61-88dd-e91ca735bbd5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qxpz7" Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.093821 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/71392224-bf56-4c1b-9f0a-bcecd393660d-webhook-cert\") pod \"packageserver-d55dfcdfc-szpmg\" (UID: \"71392224-bf56-4c1b-9f0a-bcecd393660d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-szpmg" Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.095808 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/30903761-66a3-4874-b464-8e1127bd8ed5-certs\") pod \"machine-config-server-r4wn7\" (UID: \"30903761-66a3-4874-b464-8e1127bd8ed5\") " pod="openshift-machine-config-operator/machine-config-server-r4wn7" Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.102393 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f23fc85f-9e00-4046-8c6b-b3f10deb822b-cert\") pod \"ingress-canary-k58fj\" (UID: \"f23fc85f-9e00-4046-8c6b-b3f10deb822b\") " 
pod="openshift-ingress-canary/ingress-canary-k58fj" Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.104227 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/30903761-66a3-4874-b464-8e1127bd8ed5-node-bootstrap-token\") pod \"machine-config-server-r4wn7\" (UID: \"30903761-66a3-4874-b464-8e1127bd8ed5\") " pod="openshift-machine-config-operator/machine-config-server-r4wn7" Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.106822 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dd374c2f-10af-47c4-b550-d5c03c39ca45-metrics-tls\") pod \"dns-default-7wtlt\" (UID: \"dd374c2f-10af-47c4-b550-d5c03c39ca45\") " pod="openshift-dns/dns-default-7wtlt" Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.107129 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f465e01f-8cdd-424e-a617-ed0e4a4ae140-signing-key\") pod \"service-ca-9c57cc56f-rskz9\" (UID: \"f465e01f-8cdd-424e-a617-ed0e4a4ae140\") " pod="openshift-service-ca/service-ca-9c57cc56f-rskz9" Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.109738 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/71392224-bf56-4c1b-9f0a-bcecd393660d-apiservice-cert\") pod \"packageserver-d55dfcdfc-szpmg\" (UID: \"71392224-bf56-4c1b-9f0a-bcecd393660d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-szpmg" Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.122977 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jwt7\" (UniqueName: \"kubernetes.io/projected/71392224-bf56-4c1b-9f0a-bcecd393660d-kube-api-access-7jwt7\") pod \"packageserver-d55dfcdfc-szpmg\" (UID: \"71392224-bf56-4c1b-9f0a-bcecd393660d\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-szpmg" Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.124415 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-9wnfz" Dec 03 12:58:18 crc kubenswrapper[4986]: W1203 12:58:18.128511 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd22bd794_105c_4c41_b1c3_46723ed0cb79.slice/crio-5d56823d31329bb2230777b9fe0f9bb5c9923f1c6b0514e47fa4c8269a632ec7 WatchSource:0}: Error finding container 5d56823d31329bb2230777b9fe0f9bb5c9923f1c6b0514e47fa4c8269a632ec7: Status 404 returned error can't find the container with id 5d56823d31329bb2230777b9fe0f9bb5c9923f1c6b0514e47fa4c8269a632ec7 Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.145647 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdw2t\" (UniqueName: \"kubernetes.io/projected/dd374c2f-10af-47c4-b550-d5c03c39ca45-kube-api-access-hdw2t\") pod \"dns-default-7wtlt\" (UID: \"dd374c2f-10af-47c4-b550-d5c03c39ca45\") " pod="openshift-dns/dns-default-7wtlt" Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.154774 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hjt88" Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.159165 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbmk6\" (UniqueName: \"kubernetes.io/projected/f23fc85f-9e00-4046-8c6b-b3f10deb822b-kube-api-access-tbmk6\") pod \"ingress-canary-k58fj\" (UID: \"f23fc85f-9e00-4046-8c6b-b3f10deb822b\") " pod="openshift-ingress-canary/ingress-canary-k58fj" Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.171376 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqpvp\" (UniqueName: \"kubernetes.io/projected/f465e01f-8cdd-424e-a617-ed0e4a4ae140-kube-api-access-rqpvp\") pod \"service-ca-9c57cc56f-rskz9\" (UID: \"f465e01f-8cdd-424e-a617-ed0e4a4ae140\") " pod="openshift-service-ca/service-ca-9c57cc56f-rskz9" Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.180696 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:18 crc kubenswrapper[4986]: E1203 12:58:18.181183 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:18.681161761 +0000 UTC m=+158.147592952 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.181235 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kjctk" Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.185102 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zzj7p"] Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.193013 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rxml\" (UniqueName: \"kubernetes.io/projected/e6f57929-5eab-4a61-88dd-e91ca735bbd5-kube-api-access-2rxml\") pod \"machine-config-operator-74547568cd-qxpz7\" (UID: \"e6f57929-5eab-4a61-88dd-e91ca735bbd5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qxpz7" Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.216105 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-vlhbw"] Dec 03 12:58:18 crc kubenswrapper[4986]: W1203 12:58:18.218366 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod872b07f2_d557_4c06_a432_18b9a46fe6cc.slice/crio-675695fceb85e7dbaf84e4dd650c10cdacf97a8929c09c17be76f86c394fa8d1 WatchSource:0}: Error finding container 675695fceb85e7dbaf84e4dd650c10cdacf97a8929c09c17be76f86c394fa8d1: Status 404 returned error can't find the container with id 
675695fceb85e7dbaf84e4dd650c10cdacf97a8929c09c17be76f86c394fa8d1 Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.222840 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6k9lf"] Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.228033 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-szpmg" Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.236385 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-rskz9" Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.238016 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbvkk\" (UniqueName: \"kubernetes.io/projected/30903761-66a3-4874-b464-8e1127bd8ed5-kube-api-access-hbvkk\") pod \"machine-config-server-r4wn7\" (UID: \"30903761-66a3-4874-b464-8e1127bd8ed5\") " pod="openshift-machine-config-operator/machine-config-server-r4wn7" Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.252365 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7xp9j"] Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.252434 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-575sd"] Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.260315 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-glvvh" Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.268422 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qxpz7" Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.273626 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-d7f6t"] Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.281767 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:58:18 crc kubenswrapper[4986]: E1203 12:58:18.282143 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:18.7821285 +0000 UTC m=+158.248559691 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.282909 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-k58fj" Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.298663 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-rwrf2"] Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.298821 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-7wtlt" Dec 03 12:58:18 crc kubenswrapper[4986]: W1203 12:58:18.301077 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89c50427_14ae_409d_89d5_a56be0ff97d1.slice/crio-262b154a2a06696d6dfd69537f654aa2cc7785b261d0518d2549541125e0a72b WatchSource:0}: Error finding container 262b154a2a06696d6dfd69537f654aa2cc7785b261d0518d2549541125e0a72b: Status 404 returned error can't find the container with id 262b154a2a06696d6dfd69537f654aa2cc7785b261d0518d2549541125e0a72b Dec 03 12:58:18 crc kubenswrapper[4986]: W1203 12:58:18.302139 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod869ead05_05ad_4705_8193_0f5cf6987257.slice/crio-b3db94587e562678d1246bd1791f653475020863bfc846904f2975208126a069 WatchSource:0}: Error finding container b3db94587e562678d1246bd1791f653475020863bfc846904f2975208126a069: Status 404 returned error can't find the container with id b3db94587e562678d1246bd1791f653475020863bfc846904f2975208126a069 Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.312385 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-r4wn7" Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.319135 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-2tq6k"] Dec 03 12:58:18 crc kubenswrapper[4986]: W1203 12:58:18.347640 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68f994ae_dec7_4be6_b4e7_7a1875b40f4f.slice/crio-266920daa5d76addec274e7df29677128c1d18068256711d3204a007cb500661 WatchSource:0}: Error finding container 266920daa5d76addec274e7df29677128c1d18068256711d3204a007cb500661: Status 404 returned error can't find the container with id 266920daa5d76addec274e7df29677128c1d18068256711d3204a007cb500661 Dec 03 12:58:18 crc kubenswrapper[4986]: W1203 12:58:18.358527 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06ff5ccd_6dfa_4826_8a1d_929ac5deadf4.slice/crio-e89f54c3722a6f612b6f6a75fca0edf463d68c0a0532fee425c0a323be382e25 WatchSource:0}: Error finding container e89f54c3722a6f612b6f6a75fca0edf463d68c0a0532fee425c0a323be382e25: Status 404 returned error can't find the container with id e89f54c3722a6f612b6f6a75fca0edf463d68c0a0532fee425c0a323be382e25 Dec 03 12:58:18 crc kubenswrapper[4986]: W1203 12:58:18.366674 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cd4de5c_8c29_4f8e_bde8_1c21bd833d80.slice/crio-d6fd6d408c1fa100c07d79244ffa768752b5e3522cee0312f6c829bce49688c7 WatchSource:0}: Error finding container d6fd6d408c1fa100c07d79244ffa768752b5e3522cee0312f6c829bce49688c7: Status 404 returned error can't find the container with id d6fd6d408c1fa100c07d79244ffa768752b5e3522cee0312f6c829bce49688c7 Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.382906 4986 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:18 crc kubenswrapper[4986]: E1203 12:58:18.383346 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:18.883332685 +0000 UTC m=+158.349763876 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.406524 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s6wq8"] Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.452451 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412765-29wpf"] Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.484906 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 
12:58:18 crc kubenswrapper[4986]: E1203 12:58:18.485159 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:18.985107276 +0000 UTC m=+158.451538467 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.495100 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:18 crc kubenswrapper[4986]: E1203 12:58:18.495540 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:18.995523651 +0000 UTC m=+158.461954842 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.519093 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-72n7g"] Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.521929 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-dqctr"] Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.581275 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-hjt88"] Dec 03 12:58:18 crc kubenswrapper[4986]: E1203 12:58:18.596146 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:19.096055848 +0000 UTC m=+158.562487039 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.596256 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.596593 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:18 crc kubenswrapper[4986]: E1203 12:58:18.597024 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:19.096997074 +0000 UTC m=+158.563428265 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.697981 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:58:18 crc kubenswrapper[4986]: E1203 12:58:18.698140 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:19.198100476 +0000 UTC m=+158.664531667 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.698309 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:18 crc kubenswrapper[4986]: E1203 12:58:18.698655 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:19.198647442 +0000 UTC m=+158.665078633 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.795761 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-2k592" event={"ID":"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88","Type":"ContainerStarted","Data":"60fb09eaea08025ec0c51955536e27a9b7699d4e13d804c5329d1b3e5991b8ae"} Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.796857 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-vlhbw" event={"ID":"7cc7f044-be33-41ca-b4af-bdf5a0ca066f","Type":"ContainerStarted","Data":"94f1e929c5b90c3523a0432b80d71dd2f15dc6fabf1f9f7f792a3682a854e5bf"} Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.798301 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8hs4t" event={"ID":"d22bd794-105c-4c41-b1c3-46723ed0cb79","Type":"ContainerStarted","Data":"5d56823d31329bb2230777b9fe0f9bb5c9923f1c6b0514e47fa4c8269a632ec7"} Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.799203 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.799328 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd-operator/etcd-operator-b45778765-d7f6t" event={"ID":"fe999f2d-8f13-4ea6-8afb-ff99ff665d91","Type":"ContainerStarted","Data":"411fa04d0228fe0203f187438708564af68f55161423a4d2988950665d76f578"} Dec 03 12:58:18 crc kubenswrapper[4986]: E1203 12:58:18.799403 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:19.299377563 +0000 UTC m=+158.765808754 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.799470 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:18 crc kubenswrapper[4986]: E1203 12:58:18.799977 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:19.299960349 +0000 UTC m=+158.766391540 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.800134 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-jwlmm" event={"ID":"d68d65c4-d762-426b-84bf-d1e05738d0ee","Type":"ContainerStarted","Data":"c07907b391b45c428d36b9e9ab95554c18e66dcad2c5dc6dff2e9ac262ca83ed"} Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.801404 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-575sd" event={"ID":"68f994ae-dec7-4be6-b4e7-7a1875b40f4f","Type":"ContainerStarted","Data":"266920daa5d76addec274e7df29677128c1d18068256711d3204a007cb500661"} Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.806744 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rwrf2" event={"ID":"6645982e-cae8-46d2-8b4d-e5d2a01ad127","Type":"ContainerStarted","Data":"e371782de332bea653efa2bc0b66c0c23be902b93498eb3594bbef35e1f2a6a6"} Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.808137 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c246g" event={"ID":"1b4d169b-dd53-42aa-9b14-f9bd24e801e6","Type":"ContainerStarted","Data":"e41ecdccae7ffc0ce8930c7588fe3c0417b734689835c4ff5ef75725b821c792"} Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.809048 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s6wq8" event={"ID":"05a3b920-eb04-4864-81ac-924ba7c63d4e","Type":"ContainerStarted","Data":"e00c5903e42a69103b913839fcee29bcbd162b08877efb66b2d3dbfb1c0c929a"} Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.809976 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zzj7p" event={"ID":"89c50427-14ae-409d-89d5-a56be0ff97d1","Type":"ContainerStarted","Data":"262b154a2a06696d6dfd69537f654aa2cc7785b261d0518d2549541125e0a72b"} Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.811124 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4c7tt" event={"ID":"cee77bd5-92e1-4458-bdf2-49912954144d","Type":"ContainerStarted","Data":"c6255b4ba18f4a0667df7c555eeeaa89d17b5f91a81ff2937b54a0804ace1d22"} Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.812233 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6k9lf" event={"ID":"869ead05-05ad-4705-8193-0f5cf6987257","Type":"ContainerStarted","Data":"b3db94587e562678d1246bd1791f653475020863bfc846904f2975208126a069"} Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.813218 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-9wnfz" event={"ID":"872b07f2-d557-4c06-a432-18b9a46fe6cc","Type":"ContainerStarted","Data":"675695fceb85e7dbaf84e4dd650c10cdacf97a8929c09c17be76f86c394fa8d1"} Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.814763 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pjgnh" event={"ID":"3e7e7e0f-8493-47c5-859d-3e7046c7ddab","Type":"ContainerStarted","Data":"5b440cd121b1714dc76c8a53a3bfee20301e88bc48adc64ef5310623ec9ec961"} Dec 03 12:58:18 crc kubenswrapper[4986]: 
I1203 12:58:18.816328 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hhmft" event={"ID":"6a077eb6-dc4e-4b2c-88be-eb4fad075e7a","Type":"ContainerStarted","Data":"356d6215ecbc40427f59b8fb9337bcfbbc37b785929ab139c5fd11f57c13ac69"} Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.817330 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2tq6k" event={"ID":"4cd4de5c-8c29-4f8e-bde8-1c21bd833d80","Type":"ContainerStarted","Data":"d6fd6d408c1fa100c07d79244ffa768752b5e3522cee0312f6c829bce49688c7"} Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.818844 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-md588" event={"ID":"3ae56075-fe7d-4f73-9cd9-ba58ff4c6bfb","Type":"ContainerStarted","Data":"9e3edfe89323a8789785cf5fe2e96fb03f0aff3abf5db75799f9e4be7c3b2b19"} Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.820060 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7xp9j" event={"ID":"06ff5ccd-6dfa-4826-8a1d-929ac5deadf4","Type":"ContainerStarted","Data":"e89f54c3722a6f612b6f6a75fca0edf463d68c0a0532fee425c0a323be382e25"} Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.821554 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kx8xd" event={"ID":"03a67bcc-6366-4e73-a3d9-39dcdd4f2f82","Type":"ContainerStarted","Data":"ad9cf3e32349bd6ce7cb34e599f6c63d65e02f92fc3e449db99dbab5227ec4e0"} Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.823103 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-sk8ll" 
event={"ID":"a069c6b7-9e3b-40fd-b830-0d2a82fadd9a","Type":"ContainerStarted","Data":"252f9f7b4a1580301afe8e1df466608770a731f0677c52e1af77c465dbe25ffe"} Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.919899 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:58:18 crc kubenswrapper[4986]: E1203 12:58:18.920982 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:19.420952195 +0000 UTC m=+158.887383396 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:18 crc kubenswrapper[4986]: I1203 12:58:18.921186 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:18 crc kubenswrapper[4986]: E1203 12:58:18.921548 4986 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:19.421539561 +0000 UTC m=+158.887970752 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:19 crc kubenswrapper[4986]: I1203 12:58:19.022798 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:58:19 crc kubenswrapper[4986]: E1203 12:58:19.022996 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:19.522971652 +0000 UTC m=+158.989402843 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:19 crc kubenswrapper[4986]: I1203 12:58:19.023078 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:19 crc kubenswrapper[4986]: E1203 12:58:19.023429 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:19.523422486 +0000 UTC m=+158.989853677 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:19 crc kubenswrapper[4986]: I1203 12:58:19.124024 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:58:19 crc kubenswrapper[4986]: E1203 12:58:19.124389 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:19.624362413 +0000 UTC m=+159.090793604 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:19 crc kubenswrapper[4986]: I1203 12:58:19.124479 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:19 crc kubenswrapper[4986]: E1203 12:58:19.124781 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:19.624769565 +0000 UTC m=+159.091200756 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:19 crc kubenswrapper[4986]: I1203 12:58:19.225545 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:58:19 crc kubenswrapper[4986]: E1203 12:58:19.225738 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:19.725694802 +0000 UTC m=+159.192126013 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:19 crc kubenswrapper[4986]: I1203 12:58:19.225814 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:19 crc kubenswrapper[4986]: E1203 12:58:19.226157 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:19.726144065 +0000 UTC m=+159.192575256 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:19 crc kubenswrapper[4986]: I1203 12:58:19.326916 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:58:19 crc kubenswrapper[4986]: E1203 12:58:19.327449 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:19.82736583 +0000 UTC m=+159.293797051 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:19 crc kubenswrapper[4986]: I1203 12:58:19.428391 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:19 crc kubenswrapper[4986]: E1203 12:58:19.428930 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:19.928898754 +0000 UTC m=+159.395330015 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:19 crc kubenswrapper[4986]: I1203 12:58:19.529891 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:58:19 crc kubenswrapper[4986]: E1203 12:58:19.530111 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:20.030066279 +0000 UTC m=+159.496497540 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:19 crc kubenswrapper[4986]: I1203 12:58:19.530481 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:19 crc kubenswrapper[4986]: E1203 12:58:19.530854 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:20.03084212 +0000 UTC m=+159.497273311 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:19 crc kubenswrapper[4986]: I1203 12:58:19.631886 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:58:19 crc kubenswrapper[4986]: E1203 12:58:19.632092 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:20.132065006 +0000 UTC m=+159.598496187 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:19 crc kubenswrapper[4986]: I1203 12:58:19.632232 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:19 crc kubenswrapper[4986]: E1203 12:58:19.632722 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:20.132701134 +0000 UTC m=+159.599132355 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:19 crc kubenswrapper[4986]: I1203 12:58:19.733018 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:58:19 crc kubenswrapper[4986]: E1203 12:58:19.734006 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:20.233970091 +0000 UTC m=+159.700401292 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:21 crc kubenswrapper[4986]: I1203 12:58:21.129099 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 12:58:21 crc kubenswrapper[4986]: I1203 12:58:21.159063 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:21 crc kubenswrapper[4986]: E1203 12:58:21.159667 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:21.65963696 +0000 UTC m=+161.126068181 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:21 crc kubenswrapper[4986]: I1203 12:58:21.668046 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:21 crc kubenswrapper[4986]: E1203 12:58:21.668642 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:22.1686167 +0000 UTC m=+161.635047931 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:21 crc kubenswrapper[4986]: I1203 12:58:21.743804 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kjctk"] Dec 03 12:58:21 crc kubenswrapper[4986]: I1203 12:58:21.745988 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-rskz9"] Dec 03 12:58:21 crc kubenswrapper[4986]: I1203 12:58:21.769528 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:58:21 crc kubenswrapper[4986]: E1203 12:58:21.769681 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:22.269662181 +0000 UTC m=+161.736093372 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:21 crc kubenswrapper[4986]: I1203 12:58:21.769870 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:21 crc kubenswrapper[4986]: E1203 12:58:21.770181 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:22.270172925 +0000 UTC m=+161.736604126 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:21 crc kubenswrapper[4986]: W1203 12:58:21.782989 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5825d5b3_a095_48d6_9365_d03d1faa63ca.slice/crio-67a6788b6cc7ee6ad7da489ce70899428b14b194023339499c943a28ae301902 WatchSource:0}: Error finding container 67a6788b6cc7ee6ad7da489ce70899428b14b194023339499c943a28ae301902: Status 404 returned error can't find the container with id 67a6788b6cc7ee6ad7da489ce70899428b14b194023339499c943a28ae301902 Dec 03 12:58:21 crc kubenswrapper[4986]: W1203 12:58:21.784888 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ad8791b_88af_4c86_8ed4_999a3357c3b7.slice/crio-0871bf545ab51ddcea0d0b3b0ea7d7e4ee8d1a1c78f6dffe68d983369f87c085 WatchSource:0}: Error finding container 0871bf545ab51ddcea0d0b3b0ea7d7e4ee8d1a1c78f6dffe68d983369f87c085: Status 404 returned error can't find the container with id 0871bf545ab51ddcea0d0b3b0ea7d7e4ee8d1a1c78f6dffe68d983369f87c085 Dec 03 12:58:21 crc kubenswrapper[4986]: W1203 12:58:21.785749 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69bf978d_34fa_4cfa_8c78_7d0bdfa3298f.slice/crio-28b064631b73166d703ef8780d7d4618e23831f756a0b67368fe51e5e16f6c9a WatchSource:0}: Error finding container 28b064631b73166d703ef8780d7d4618e23831f756a0b67368fe51e5e16f6c9a: Status 404 returned error can't find the container 
with id 28b064631b73166d703ef8780d7d4618e23831f756a0b67368fe51e5e16f6c9a Dec 03 12:58:21 crc kubenswrapper[4986]: W1203 12:58:21.805845 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod051204e8_a3a7_47ae_8170_55f5776cfa1e.slice/crio-eb29d7b75823761680881de088017138d403294604275b00d36eb55efe9f3b03 WatchSource:0}: Error finding container eb29d7b75823761680881de088017138d403294604275b00d36eb55efe9f3b03: Status 404 returned error can't find the container with id eb29d7b75823761680881de088017138d403294604275b00d36eb55efe9f3b03 Dec 03 12:58:21 crc kubenswrapper[4986]: W1203 12:58:21.806646 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf465e01f_8cdd_424e_a617_ed0e4a4ae140.slice/crio-12ab3bb6c80a0943fd101752215772eb4834b61f705984293adcedc00e765415 WatchSource:0}: Error finding container 12ab3bb6c80a0943fd101752215772eb4834b61f705984293adcedc00e765415: Status 404 returned error can't find the container with id 12ab3bb6c80a0943fd101752215772eb4834b61f705984293adcedc00e765415 Dec 03 12:58:21 crc kubenswrapper[4986]: I1203 12:58:21.842994 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-ptgv8" event={"ID":"0725188c-6c61-4369-8247-ffdda7e830e8","Type":"ContainerStarted","Data":"154e1be646011d38a9254d5e57fdc5c8c3dbc6398f950700f4a0ce4d2893f756"} Dec 03 12:58:21 crc kubenswrapper[4986]: I1203 12:58:21.844367 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-xn84j" event={"ID":"2588cb3b-8139-4529-a6e1-c57532afdfa7","Type":"ContainerStarted","Data":"a2d40c19fb9a8ad419d3727f8d53d6c5104f091ecf31c494a8dceae770fda9a2"} Dec 03 12:58:21 crc kubenswrapper[4986]: I1203 12:58:21.846019 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-npjwk" event={"ID":"01d4de81-8b50-4231-912d-3a65797a9754","Type":"ContainerStarted","Data":"f48c2e9e3f9e0aeb1d7e95a20bdfa24181b8279264127fa3af63b75e655a036c"} Dec 03 12:58:21 crc kubenswrapper[4986]: I1203 12:58:21.847628 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-rskz9" event={"ID":"f465e01f-8cdd-424e-a617-ed0e4a4ae140","Type":"ContainerStarted","Data":"12ab3bb6c80a0943fd101752215772eb4834b61f705984293adcedc00e765415"} Dec 03 12:58:21 crc kubenswrapper[4986]: I1203 12:58:21.849612 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-wvzt8" event={"ID":"d366e2f7-22ef-46e5-855f-0f26e6a9186c","Type":"ContainerStarted","Data":"c50d6c484e3640704adca08d1c9ef44b619f7aa9240715007501a204250a522b"} Dec 03 12:58:21 crc kubenswrapper[4986]: I1203 12:58:21.856528 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dqctr" event={"ID":"2ad8791b-88af-4c86-8ed4-999a3357c3b7","Type":"ContainerStarted","Data":"0871bf545ab51ddcea0d0b3b0ea7d7e4ee8d1a1c78f6dffe68d983369f87c085"} Dec 03 12:58:21 crc kubenswrapper[4986]: I1203 12:58:21.857774 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412765-29wpf" event={"ID":"0c102da0-9cf4-4521-97fe-3153aa47a43e","Type":"ContainerStarted","Data":"9f17bc4cdd557a950075fb69821633cce4e34d2b2178e80ba4f7c9e95d01259b"} Dec 03 12:58:21 crc kubenswrapper[4986]: I1203 12:58:21.858806 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hjt88" event={"ID":"69bf978d-34fa-4cfa-8c78-7d0bdfa3298f","Type":"ContainerStarted","Data":"28b064631b73166d703ef8780d7d4618e23831f756a0b67368fe51e5e16f6c9a"} Dec 03 12:58:21 crc kubenswrapper[4986]: I1203 12:58:21.862137 4986 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kjctk" event={"ID":"051204e8-a3a7-47ae-8170-55f5776cfa1e","Type":"ContainerStarted","Data":"eb29d7b75823761680881de088017138d403294604275b00d36eb55efe9f3b03"} Dec 03 12:58:21 crc kubenswrapper[4986]: I1203 12:58:21.874824 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:58:21 crc kubenswrapper[4986]: E1203 12:58:21.874988 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:22.374964212 +0000 UTC m=+161.841395413 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:21 crc kubenswrapper[4986]: I1203 12:58:21.907274 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:21 crc kubenswrapper[4986]: E1203 12:58:21.907693 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:22.407671248 +0000 UTC m=+161.874102439 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:22 crc kubenswrapper[4986]: I1203 12:58:22.013606 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:58:22 crc kubenswrapper[4986]: E1203 12:58:22.013787 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:22.513754121 +0000 UTC m=+161.980185312 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:22 crc kubenswrapper[4986]: I1203 12:58:22.013845 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:22 crc kubenswrapper[4986]: E1203 12:58:22.014153 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:22.514144152 +0000 UTC m=+161.980575343 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:22 crc kubenswrapper[4986]: I1203 12:58:22.018843 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-szpmg"] Dec 03 12:58:22 crc kubenswrapper[4986]: I1203 12:58:22.050307 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-qxpz7"] Dec 03 12:58:22 crc kubenswrapper[4986]: I1203 12:58:22.087191 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-glvvh"] Dec 03 12:58:22 crc kubenswrapper[4986]: I1203 12:58:22.115793 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:58:22 crc kubenswrapper[4986]: E1203 12:58:22.116226 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:22.616206502 +0000 UTC m=+162.082637693 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:22 crc kubenswrapper[4986]: I1203 12:58:22.120629 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-7wtlt"] Dec 03 12:58:22 crc kubenswrapper[4986]: I1203 12:58:22.167244 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-k58fj"] Dec 03 12:58:22 crc kubenswrapper[4986]: I1203 12:58:22.217960 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:22 crc kubenswrapper[4986]: E1203 12:58:22.218573 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:22.718558679 +0000 UTC m=+162.184989870 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:22 crc kubenswrapper[4986]: I1203 12:58:22.319390 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:58:22 crc kubenswrapper[4986]: E1203 12:58:22.320119 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:22.820083673 +0000 UTC m=+162.286514864 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:22 crc kubenswrapper[4986]: I1203 12:58:22.421102 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:22 crc kubenswrapper[4986]: E1203 12:58:22.421464 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:22.921448543 +0000 UTC m=+162.387879734 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:22 crc kubenswrapper[4986]: I1203 12:58:22.522271 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:58:22 crc kubenswrapper[4986]: E1203 12:58:22.522529 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:23.022503524 +0000 UTC m=+162.488934715 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:22 crc kubenswrapper[4986]: I1203 12:58:22.623500 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:22 crc kubenswrapper[4986]: E1203 12:58:22.623956 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:23.123935065 +0000 UTC m=+162.590366296 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:22 crc kubenswrapper[4986]: I1203 12:58:22.725055 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:58:22 crc kubenswrapper[4986]: E1203 12:58:22.725240 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:23.225207753 +0000 UTC m=+162.691638984 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:22 crc kubenswrapper[4986]: I1203 12:58:22.725504 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:22 crc kubenswrapper[4986]: E1203 12:58:22.726512 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:23.226454617 +0000 UTC m=+162.692885848 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:22 crc kubenswrapper[4986]: I1203 12:58:22.826793 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:58:22 crc kubenswrapper[4986]: E1203 12:58:22.826941 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:23.326912031 +0000 UTC m=+162.793343222 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 12:58:22 crc kubenswrapper[4986]: I1203 12:58:22.827013 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj"
Dec 03 12:58:22 crc kubenswrapper[4986]: E1203 12:58:22.827391 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:23.327377195 +0000 UTC m=+162.793808396 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 12:58:22 crc kubenswrapper[4986]: I1203 12:58:22.868967 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-k58fj" event={"ID":"f23fc85f-9e00-4046-8c6b-b3f10deb822b","Type":"ContainerStarted","Data":"24f26efa1f254b8c18d80ef22bc3d231df3a56a4045ed9343c31f5b01e902d73"}
Dec 03 12:58:22 crc kubenswrapper[4986]: I1203 12:58:22.870340 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-glvvh" event={"ID":"005be756-d1ff-452c-aa4e-e4df06f28839","Type":"ContainerStarted","Data":"a323caccf220d4e066132ce6aa648540052802e2cdff142d3df2d6c51a5b19cf"}
Dec 03 12:58:22 crc kubenswrapper[4986]: I1203 12:58:22.871638 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-72n7g" event={"ID":"5825d5b3-a095-48d6-9365-d03d1faa63ca","Type":"ContainerStarted","Data":"67a6788b6cc7ee6ad7da489ce70899428b14b194023339499c943a28ae301902"}
Dec 03 12:58:22 crc kubenswrapper[4986]: I1203 12:58:22.879798 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-r4wn7" event={"ID":"30903761-66a3-4874-b464-8e1127bd8ed5","Type":"ContainerStarted","Data":"d20a52b0cbde9909f9df42b975a1f3a01c5b39b7692bc251cec631e2eef600df"}
Dec 03 12:58:22 crc kubenswrapper[4986]: I1203 12:58:22.882443 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-7wtlt" event={"ID":"dd374c2f-10af-47c4-b550-d5c03c39ca45","Type":"ContainerStarted","Data":"4ae2b1e6551736d69aad8c9dd8bd8eef742e7273b555597b0aac322b40430c92"}
Dec 03 12:58:22 crc kubenswrapper[4986]: I1203 12:58:22.883933 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-szpmg" event={"ID":"71392224-bf56-4c1b-9f0a-bcecd393660d","Type":"ContainerStarted","Data":"4d72ecff3dd5fc3d529fabf9bf8d5eec7393a2ca653e6e251b90ffa4da1fe8df"}
Dec 03 12:58:22 crc kubenswrapper[4986]: I1203 12:58:22.886119 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qxpz7" event={"ID":"e6f57929-5eab-4a61-88dd-e91ca735bbd5","Type":"ContainerStarted","Data":"f43823c8441a4242af0c1446a194958d3a9f3ade99aff282e08c0ee8de82b341"}
Dec 03 12:58:22 crc kubenswrapper[4986]: I1203 12:58:22.918353 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vz29n" podStartSLOduration=140.91832642 podStartE2EDuration="2m20.91832642s" podCreationTimestamp="2025-12-03 12:56:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:58:22.907025609 +0000 UTC m=+162.373456820" watchObservedRunningTime="2025-12-03 12:58:22.91832642 +0000 UTC m=+162.384757651"
Dec 03 12:58:22 crc kubenswrapper[4986]: I1203 12:58:22.928688 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 03 12:58:22 crc kubenswrapper[4986]: E1203 12:58:22.930027 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:23.42999984 +0000 UTC m=+162.896431061 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 12:58:23 crc kubenswrapper[4986]: I1203 12:58:23.029908 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj"
Dec 03 12:58:23 crc kubenswrapper[4986]: E1203 12:58:23.030340 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:23.53032132 +0000 UTC m=+162.996752521 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 12:58:23 crc kubenswrapper[4986]: I1203 12:58:23.131081 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 03 12:58:23 crc kubenswrapper[4986]: E1203 12:58:23.131247 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:23.631226667 +0000 UTC m=+163.097657858 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 12:58:23 crc kubenswrapper[4986]: I1203 12:58:23.131830 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj"
Dec 03 12:58:23 crc kubenswrapper[4986]: E1203 12:58:23.132219 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:23.632210414 +0000 UTC m=+163.098641605 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 12:58:23 crc kubenswrapper[4986]: I1203 12:58:23.233786 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 03 12:58:23 crc kubenswrapper[4986]: E1203 12:58:23.234159 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:23.734082929 +0000 UTC m=+163.200514160 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 12:58:23 crc kubenswrapper[4986]: I1203 12:58:23.234597 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj"
Dec 03 12:58:23 crc kubenswrapper[4986]: E1203 12:58:23.235124 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:23.735102068 +0000 UTC m=+163.201533269 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 12:58:23 crc kubenswrapper[4986]: I1203 12:58:23.336060 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 03 12:58:23 crc kubenswrapper[4986]: E1203 12:58:23.336179 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:23.836153848 +0000 UTC m=+163.302585049 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 12:58:23 crc kubenswrapper[4986]: I1203 12:58:23.336254 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj"
Dec 03 12:58:23 crc kubenswrapper[4986]: E1203 12:58:23.336583 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:23.83657262 +0000 UTC m=+163.303003821 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 12:58:23 crc kubenswrapper[4986]: I1203 12:58:23.438001 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 03 12:58:23 crc kubenswrapper[4986]: E1203 12:58:23.438364 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:23.938339101 +0000 UTC m=+163.404770292 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 12:58:23 crc kubenswrapper[4986]: I1203 12:58:23.540165 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj"
Dec 03 12:58:23 crc kubenswrapper[4986]: E1203 12:58:23.540524 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:24.040509294 +0000 UTC m=+163.506940485 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 12:58:23 crc kubenswrapper[4986]: I1203 12:58:23.643517 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 03 12:58:23 crc kubenswrapper[4986]: E1203 12:58:23.645569 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:24.143936062 +0000 UTC m=+163.610367263 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 12:58:23 crc kubenswrapper[4986]: I1203 12:58:23.745354 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj"
Dec 03 12:58:23 crc kubenswrapper[4986]: E1203 12:58:23.745799 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:24.245779944 +0000 UTC m=+163.712211175 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 12:58:23 crc kubenswrapper[4986]: I1203 12:58:23.846353 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 03 12:58:23 crc kubenswrapper[4986]: E1203 12:58:23.846674 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:24.34664007 +0000 UTC m=+163.813071291 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 12:58:23 crc kubenswrapper[4986]: I1203 12:58:23.894001 4986 generic.go:334] "Generic (PLEG): container finished" podID="5c9ab9a7-030c-4d45-8c75-950f457bb69c" containerID="f470a304a2858fd3ecc8508d69eed90ebe492f8da2d46145596bc8ea68f39a5c" exitCode=0
Dec 03 12:58:23 crc kubenswrapper[4986]: I1203 12:58:23.894072 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-62rd9" event={"ID":"5c9ab9a7-030c-4d45-8c75-950f457bb69c","Type":"ContainerDied","Data":"f470a304a2858fd3ecc8508d69eed90ebe492f8da2d46145596bc8ea68f39a5c"}
Dec 03 12:58:23 crc kubenswrapper[4986]: I1203 12:58:23.896208 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n6hjp" event={"ID":"a5484631-f1bd-49f7-a0bd-46e783e44095","Type":"ContainerStarted","Data":"15ee341a54a2242dfd2fac5dd8e17df131578a9e99be5e302066421417971d31"}
Dec 03 12:58:23 crc kubenswrapper[4986]: I1203 12:58:23.897997 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-85fg2" event={"ID":"2cdca175-2ec1-427b-8c68-99a065f9d5d7","Type":"ContainerStarted","Data":"404fb908319fbda63dff3b1d1666d29171b132b3a60635b380e4196e4b753a0a"}
Dec 03 12:58:23 crc kubenswrapper[4986]: I1203 12:58:23.899472 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2glzg" event={"ID":"f51b3907-2cbf-4b80-a800-411404195052","Type":"ContainerStarted","Data":"0b7bc899d425ebff6822ebc70f58b7231506af98dc7bd0560bc96008d8e449a1"}
Dec 03 12:58:23 crc kubenswrapper[4986]: I1203 12:58:23.902377 4986 generic.go:334] "Generic (PLEG): container finished" podID="6a077eb6-dc4e-4b2c-88be-eb4fad075e7a" containerID="356d6215ecbc40427f59b8fb9337bcfbbc37b785929ab139c5fd11f57c13ac69" exitCode=0
Dec 03 12:58:23 crc kubenswrapper[4986]: I1203 12:58:23.902519 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hhmft" event={"ID":"6a077eb6-dc4e-4b2c-88be-eb4fad075e7a","Type":"ContainerDied","Data":"356d6215ecbc40427f59b8fb9337bcfbbc37b785929ab139c5fd11f57c13ac69"}
Dec 03 12:58:23 crc kubenswrapper[4986]: I1203 12:58:23.904276 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-2k592"
Dec 03 12:58:23 crc kubenswrapper[4986]: I1203 12:58:23.947979 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj"
Dec 03 12:58:23 crc kubenswrapper[4986]: E1203 12:58:23.948392 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:24.448332429 +0000 UTC m=+163.914763630 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 12:58:23 crc kubenswrapper[4986]: I1203 12:58:23.948638 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ea24f625-ded4-4e37-a23b-f96fe691b0dd-metrics-certs\") pod \"network-metrics-daemon-rl2mt\" (UID: \"ea24f625-ded4-4e37-a23b-f96fe691b0dd\") " pod="openshift-multus/network-metrics-daemon-rl2mt"
Dec 03 12:58:23 crc kubenswrapper[4986]: I1203 12:58:23.957568 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-sk8ll" podStartSLOduration=141.95755048 podStartE2EDuration="2m21.95755048s" podCreationTimestamp="2025-12-03 12:56:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:58:23.951949072 +0000 UTC m=+163.418380273" watchObservedRunningTime="2025-12-03 12:58:23.95755048 +0000 UTC m=+163.423981671"
Dec 03 12:58:23 crc kubenswrapper[4986]: I1203 12:58:23.957994 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-md588" podStartSLOduration=141.957987852 podStartE2EDuration="2m21.957987852s" podCreationTimestamp="2025-12-03 12:56:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:58:23.926519092 +0000 UTC m=+163.392950363" watchObservedRunningTime="2025-12-03 12:58:23.957987852 +0000 UTC m=+163.424419043"
Dec 03 12:58:23 crc kubenswrapper[4986]: I1203 12:58:23.971503 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c246g" podStartSLOduration=141.971482685 podStartE2EDuration="2m21.971482685s" podCreationTimestamp="2025-12-03 12:56:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:58:23.970777854 +0000 UTC m=+163.437209055" watchObservedRunningTime="2025-12-03 12:58:23.971482685 +0000 UTC m=+163.437913876"
Dec 03 12:58:23 crc kubenswrapper[4986]: I1203 12:58:23.975148 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ea24f625-ded4-4e37-a23b-f96fe691b0dd-metrics-certs\") pod \"network-metrics-daemon-rl2mt\" (UID: \"ea24f625-ded4-4e37-a23b-f96fe691b0dd\") " pod="openshift-multus/network-metrics-daemon-rl2mt"
Dec 03 12:58:24 crc kubenswrapper[4986]: I1203 12:58:24.011580 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-2k592" podStartSLOduration=142.011561719 podStartE2EDuration="2m22.011561719s" podCreationTimestamp="2025-12-03 12:56:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:58:24.010381065 +0000 UTC m=+163.476812266" watchObservedRunningTime="2025-12-03 12:58:24.011561719 +0000 UTC m=+163.477992920"
Dec 03 12:58:24 crc kubenswrapper[4986]: I1203 12:58:24.050075 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 03 12:58:24 crc kubenswrapper[4986]: E1203 12:58:24.050184 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:24.550166152 +0000 UTC m=+164.016597343 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 12:58:24 crc kubenswrapper[4986]: I1203 12:58:24.050444 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj"
Dec 03 12:58:24 crc kubenswrapper[4986]: E1203 12:58:24.051758 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:24.551740696 +0000 UTC m=+164.018171897 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 12:58:24 crc kubenswrapper[4986]: I1203 12:58:24.064940 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl2mt"
Dec 03 12:58:24 crc kubenswrapper[4986]: I1203 12:58:24.152351 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 03 12:58:24 crc kubenswrapper[4986]: E1203 12:58:24.152543 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:24.652508649 +0000 UTC m=+164.118939850 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 12:58:24 crc kubenswrapper[4986]: I1203 12:58:24.152585 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj"
Dec 03 12:58:24 crc kubenswrapper[4986]: E1203 12:58:24.152958 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:24.652947371 +0000 UTC m=+164.119378572 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 12:58:24 crc kubenswrapper[4986]: I1203 12:58:24.253635 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 03 12:58:24 crc kubenswrapper[4986]: E1203 12:58:24.253817 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:24.753779176 +0000 UTC m=+164.220210397 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 12:58:24 crc kubenswrapper[4986]: I1203 12:58:24.254108 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj"
Dec 03 12:58:24 crc kubenswrapper[4986]: E1203 12:58:24.254549 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:24.754529627 +0000 UTC m=+164.220960858 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 12:58:24 crc kubenswrapper[4986]: I1203 12:58:24.355957 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 03 12:58:24 crc kubenswrapper[4986]: E1203 12:58:24.356497 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:24.856472284 +0000 UTC m=+164.322903505 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 12:58:24 crc kubenswrapper[4986]: I1203 12:58:24.413088 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-2k592"
Dec 03 12:58:24 crc kubenswrapper[4986]: I1203 12:58:24.470073 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj"
Dec 03 12:58:24 crc kubenswrapper[4986]: E1203 12:58:24.470585 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:24.970573083 +0000 UTC m=+164.437004274 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 12:58:24 crc kubenswrapper[4986]: I1203 12:58:24.572478 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 03 12:58:24 crc kubenswrapper[4986]: E1203 12:58:24.572654 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:25.072629963 +0000 UTC m=+164.539061154 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:24 crc kubenswrapper[4986]: I1203 12:58:24.572912 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:24 crc kubenswrapper[4986]: E1203 12:58:24.573230 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:25.073217579 +0000 UTC m=+164.539648770 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:24 crc kubenswrapper[4986]: I1203 12:58:24.638277 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-rl2mt"] Dec 03 12:58:24 crc kubenswrapper[4986]: W1203 12:58:24.649967 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea24f625_ded4_4e37_a23b_f96fe691b0dd.slice/crio-6d1fbf56884933d9c6d157c5a15d1fcebaf978c6096b21877947a20732365eab WatchSource:0}: Error finding container 6d1fbf56884933d9c6d157c5a15d1fcebaf978c6096b21877947a20732365eab: Status 404 returned error can't find the container with id 6d1fbf56884933d9c6d157c5a15d1fcebaf978c6096b21877947a20732365eab Dec 03 12:58:24 crc kubenswrapper[4986]: I1203 12:58:24.677256 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:58:24 crc kubenswrapper[4986]: E1203 12:58:24.677531 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:25.177503612 +0000 UTC m=+164.643934803 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:24 crc kubenswrapper[4986]: I1203 12:58:24.677786 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:24 crc kubenswrapper[4986]: E1203 12:58:24.678124 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:25.178118039 +0000 UTC m=+164.644549220 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:24 crc kubenswrapper[4986]: I1203 12:58:24.778731 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:58:24 crc kubenswrapper[4986]: E1203 12:58:24.778989 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:25.278975514 +0000 UTC m=+164.745406705 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:24 crc kubenswrapper[4986]: I1203 12:58:24.880056 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:24 crc kubenswrapper[4986]: E1203 12:58:24.880438 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:25.380416966 +0000 UTC m=+164.846848157 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:24 crc kubenswrapper[4986]: I1203 12:58:24.921331 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qxpz7" event={"ID":"e6f57929-5eab-4a61-88dd-e91ca735bbd5","Type":"ContainerStarted","Data":"306688a4b29302fc881f7199fdc1d984bad1c4773898a9d1638aa302cda3414e"} Dec 03 12:58:24 crc kubenswrapper[4986]: I1203 12:58:24.927804 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rwrf2" event={"ID":"6645982e-cae8-46d2-8b4d-e5d2a01ad127","Type":"ContainerStarted","Data":"39ec293c5cd274a5f45b681ed04ef3c52bd708ec9e11d4fe5061b1ae4390a48e"} Dec 03 12:58:24 crc kubenswrapper[4986]: I1203 12:58:24.931115 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6k9lf" event={"ID":"869ead05-05ad-4705-8193-0f5cf6987257","Type":"ContainerStarted","Data":"9ba6dc0f640e25a41cc332bcf8e0f97cb622568fd3f6568405cf4761e4331ca0"} Dec 03 12:58:24 crc kubenswrapper[4986]: I1203 12:58:24.941792 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2tq6k" event={"ID":"4cd4de5c-8c29-4f8e-bde8-1c21bd833d80","Type":"ContainerStarted","Data":"b0a456d29f5435c303899dca08e10a5ecb28cc692334bbd6b0026c540142da08"} Dec 03 12:58:24 crc kubenswrapper[4986]: I1203 12:58:24.951527 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd-operator/etcd-operator-b45778765-d7f6t" event={"ID":"fe999f2d-8f13-4ea6-8afb-ff99ff665d91","Type":"ContainerStarted","Data":"64e42c2bce4b1d880a54bcadc731c304ebff86ea7f057ae5f6fe5bfe692dd36f"} Dec 03 12:58:24 crc kubenswrapper[4986]: I1203 12:58:24.952076 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-rl2mt" event={"ID":"ea24f625-ded4-4e37-a23b-f96fe691b0dd","Type":"ContainerStarted","Data":"6d1fbf56884933d9c6d157c5a15d1fcebaf978c6096b21877947a20732365eab"} Dec 03 12:58:24 crc kubenswrapper[4986]: I1203 12:58:24.957206 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412765-29wpf" event={"ID":"0c102da0-9cf4-4521-97fe-3153aa47a43e","Type":"ContainerStarted","Data":"cb33e8b1845714e4103b5ef1ded54eac8e97493881d1aa910f7df5593c874962"} Dec 03 12:58:24 crc kubenswrapper[4986]: I1203 12:58:24.974456 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s6wq8" event={"ID":"05a3b920-eb04-4864-81ac-924ba7c63d4e","Type":"ContainerStarted","Data":"965164f703fefb2e8cc98f5eebbf856dfe044580897a839d8df04aec7e8524cb"} Dec 03 12:58:24 crc kubenswrapper[4986]: I1203 12:58:24.980884 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kx8xd" event={"ID":"03a67bcc-6366-4e73-a3d9-39dcdd4f2f82","Type":"ContainerStarted","Data":"0b308d55539498bf5d6385e43b9451650c20ca2fa896cba7c9e37875c330b664"} Dec 03 12:58:24 crc kubenswrapper[4986]: I1203 12:58:24.982678 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kjctk" event={"ID":"051204e8-a3a7-47ae-8170-55f5776cfa1e","Type":"ContainerStarted","Data":"813e37c512ffd7816f070f38c608f353c3df7fec0e66e3da1606fc9f74f938ed"} Dec 03 12:58:24 crc kubenswrapper[4986]: I1203 
12:58:24.996663 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:58:24 crc kubenswrapper[4986]: E1203 12:58:24.997119 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:25.497103229 +0000 UTC m=+164.963534420 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:24 crc kubenswrapper[4986]: I1203 12:58:24.998156 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s6wq8" podStartSLOduration=142.998134499 podStartE2EDuration="2m22.998134499s" podCreationTimestamp="2025-12-03 12:56:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:58:24.996228185 +0000 UTC m=+164.462659386" watchObservedRunningTime="2025-12-03 12:58:24.998134499 +0000 UTC m=+164.464565690" Dec 03 12:58:25 crc kubenswrapper[4986]: I1203 12:58:25.000135 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dqctr" 
event={"ID":"2ad8791b-88af-4c86-8ed4-999a3357c3b7","Type":"ContainerStarted","Data":"5cece73b84e40faaa8258648d020855f6f0364e6cabeb72b33744044123859c3"} Dec 03 12:58:25 crc kubenswrapper[4986]: I1203 12:58:25.001527 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-glvvh" event={"ID":"005be756-d1ff-452c-aa4e-e4df06f28839","Type":"ContainerStarted","Data":"abbdc93395de27a511bde95f023df9bb5ed7b0d90d6ecfc0f72aa8474e2fe94b"} Dec 03 12:58:25 crc kubenswrapper[4986]: I1203 12:58:25.002753 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-szpmg" event={"ID":"71392224-bf56-4c1b-9f0a-bcecd393660d","Type":"ContainerStarted","Data":"566299c4712eeefdbbfdabdf664d6562a29489e719744d5afce19e155215bed7"} Dec 03 12:58:25 crc kubenswrapper[4986]: I1203 12:58:25.003768 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-vlhbw" event={"ID":"7cc7f044-be33-41ca-b4af-bdf5a0ca066f","Type":"ContainerStarted","Data":"61a0327d8fcd61c7351fd6860aa4fde7ea4b6a031350ae8d363c202b5fcfc015"} Dec 03 12:58:25 crc kubenswrapper[4986]: I1203 12:58:25.004808 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zzj7p" event={"ID":"89c50427-14ae-409d-89d5-a56be0ff97d1","Type":"ContainerStarted","Data":"cbe598634a3b4496336e84f38190312fc9b1e457486521a5b8162e4810fe4b5b"} Dec 03 12:58:25 crc kubenswrapper[4986]: I1203 12:58:25.005768 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7xp9j" event={"ID":"06ff5ccd-6dfa-4826-8a1d-929ac5deadf4","Type":"ContainerStarted","Data":"f584cfb5384b0fcad4693cc87e4288f46b1327155a12926e735da2e6a2fc6494"} Dec 03 12:58:25 crc kubenswrapper[4986]: I1203 12:58:25.007086 4986 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-575sd" event={"ID":"68f994ae-dec7-4be6-b4e7-7a1875b40f4f","Type":"ContainerStarted","Data":"f5ee2e0634e78046ef32722f94641cf3125cd767ac9a7857fac2c7cc4903ecde"} Dec 03 12:58:25 crc kubenswrapper[4986]: I1203 12:58:25.008068 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-jwlmm" event={"ID":"d68d65c4-d762-426b-84bf-d1e05738d0ee","Type":"ContainerStarted","Data":"447520e6ad0483a5ba4b6c25cfe39377cab454fc7ec0c8c073122ced7ff6cd96"} Dec 03 12:58:25 crc kubenswrapper[4986]: I1203 12:58:25.009002 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-9wnfz" event={"ID":"872b07f2-d557-4c06-a432-18b9a46fe6cc","Type":"ContainerStarted","Data":"499cccb84f7c2c596e62fb6f2a8f2dd25b239cf1848f1cf07a42cac9360e9e03"} Dec 03 12:58:25 crc kubenswrapper[4986]: I1203 12:58:25.013300 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-rskz9" event={"ID":"f465e01f-8cdd-424e-a617-ed0e4a4ae140","Type":"ContainerStarted","Data":"1b09a1538b3354b9cebf0402908ef11d3a0dc50681d41be2817c626c48905917"} Dec 03 12:58:25 crc kubenswrapper[4986]: I1203 12:58:25.020710 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-wvzt8" Dec 03 12:58:25 crc kubenswrapper[4986]: I1203 12:58:25.020747 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-npjwk" Dec 03 12:58:25 crc kubenswrapper[4986]: I1203 12:58:25.020758 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-ptgv8" Dec 03 12:58:25 crc kubenswrapper[4986]: I1203 12:58:25.020768 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n6hjp" Dec 03 12:58:25 crc kubenswrapper[4986]: I1203 12:58:25.024968 4986 patch_prober.go:28] interesting pod/downloads-7954f5f757-wvzt8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Dec 03 12:58:25 crc kubenswrapper[4986]: I1203 12:58:25.025139 4986 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wvzt8" podUID="d366e2f7-22ef-46e5-855f-0f26e6a9186c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Dec 03 12:58:25 crc kubenswrapper[4986]: I1203 12:58:25.025525 4986 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-n6hjp container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Dec 03 12:58:25 crc kubenswrapper[4986]: I1203 12:58:25.025637 4986 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n6hjp" podUID="a5484631-f1bd-49f7-a0bd-46e783e44095" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Dec 03 12:58:25 crc kubenswrapper[4986]: I1203 12:58:25.030648 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-ptgv8" Dec 03 12:58:25 crc kubenswrapper[4986]: I1203 12:58:25.034905 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n6hjp" podStartSLOduration=142.034885289 
podStartE2EDuration="2m22.034885289s" podCreationTimestamp="2025-12-03 12:56:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:58:25.032402658 +0000 UTC m=+164.498833849" watchObservedRunningTime="2025-12-03 12:58:25.034885289 +0000 UTC m=+164.501316480" Dec 03 12:58:25 crc kubenswrapper[4986]: I1203 12:58:25.051642 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-npjwk" podStartSLOduration=142.051626603 podStartE2EDuration="2m22.051626603s" podCreationTimestamp="2025-12-03 12:56:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:58:25.051493159 +0000 UTC m=+164.517924340" watchObservedRunningTime="2025-12-03 12:58:25.051626603 +0000 UTC m=+164.518057794" Dec 03 12:58:25 crc kubenswrapper[4986]: I1203 12:58:25.062489 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-85fg2" podStartSLOduration=143.06247794 podStartE2EDuration="2m23.06247794s" podCreationTimestamp="2025-12-03 12:56:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:58:25.061268156 +0000 UTC m=+164.527699347" watchObservedRunningTime="2025-12-03 12:58:25.06247794 +0000 UTC m=+164.528909131" Dec 03 12:58:25 crc kubenswrapper[4986]: I1203 12:58:25.077214 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4c7tt" podStartSLOduration=143.077200296 podStartE2EDuration="2m23.077200296s" podCreationTimestamp="2025-12-03 12:56:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:58:25.076857587 +0000 UTC m=+164.543288778" watchObservedRunningTime="2025-12-03 12:58:25.077200296 +0000 UTC m=+164.543631487" Dec 03 12:58:25 crc kubenswrapper[4986]: I1203 12:58:25.098189 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:25 crc kubenswrapper[4986]: I1203 12:58:25.099877 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-ptgv8" podStartSLOduration=143.099858269 podStartE2EDuration="2m23.099858269s" podCreationTimestamp="2025-12-03 12:56:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:58:25.097156832 +0000 UTC m=+164.563588023" watchObservedRunningTime="2025-12-03 12:58:25.099858269 +0000 UTC m=+164.566289460" Dec 03 12:58:25 crc kubenswrapper[4986]: E1203 12:58:25.100528 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:25.600517767 +0000 UTC m=+165.066948958 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:25 crc kubenswrapper[4986]: I1203 12:58:25.130698 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-wvzt8" podStartSLOduration=143.130678051 podStartE2EDuration="2m23.130678051s" podCreationTimestamp="2025-12-03 12:56:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:58:25.111440706 +0000 UTC m=+164.577871897" watchObservedRunningTime="2025-12-03 12:58:25.130678051 +0000 UTC m=+164.597109252" Dec 03 12:58:25 crc kubenswrapper[4986]: I1203 12:58:25.200712 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:58:25 crc kubenswrapper[4986]: E1203 12:58:25.201123 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:25.701107425 +0000 UTC m=+165.167538616 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:25 crc kubenswrapper[4986]: I1203 12:58:25.301749 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:25 crc kubenswrapper[4986]: E1203 12:58:25.302159 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:25.802142115 +0000 UTC m=+165.268573306 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:25 crc kubenswrapper[4986]: I1203 12:58:25.403869 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:58:25 crc kubenswrapper[4986]: E1203 12:58:25.404595 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:25.904573845 +0000 UTC m=+165.371005036 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:25 crc kubenswrapper[4986]: I1203 12:58:25.505705 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:25 crc kubenswrapper[4986]: E1203 12:58:25.506098 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:26.006083849 +0000 UTC m=+165.472515040 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:25 crc kubenswrapper[4986]: I1203 12:58:25.606631 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:58:25 crc kubenswrapper[4986]: E1203 12:58:25.607372 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:26.107352685 +0000 UTC m=+165.573783876 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:25 crc kubenswrapper[4986]: I1203 12:58:25.631637 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-sk8ll" Dec 03 12:58:25 crc kubenswrapper[4986]: I1203 12:58:25.631718 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-sk8ll" Dec 03 12:58:25 crc kubenswrapper[4986]: I1203 12:58:25.635886 4986 patch_prober.go:28] interesting pod/console-f9d7485db-sk8ll container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Dec 03 12:58:25 crc kubenswrapper[4986]: I1203 12:58:25.635958 4986 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-sk8ll" podUID="a069c6b7-9e3b-40fd-b830-0d2a82fadd9a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" Dec 03 12:58:25 crc kubenswrapper[4986]: I1203 12:58:25.708702 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:25 crc kubenswrapper[4986]: 
E1203 12:58:25.709420 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:26.209400224 +0000 UTC m=+165.675831475 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:25 crc kubenswrapper[4986]: I1203 12:58:25.720797 4986 patch_prober.go:28] interesting pod/downloads-7954f5f757-wvzt8 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Dec 03 12:58:25 crc kubenswrapper[4986]: I1203 12:58:25.720847 4986 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-wvzt8" podUID="d366e2f7-22ef-46e5-855f-0f26e6a9186c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Dec 03 12:58:25 crc kubenswrapper[4986]: I1203 12:58:25.721014 4986 patch_prober.go:28] interesting pod/downloads-7954f5f757-wvzt8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Dec 03 12:58:25 crc kubenswrapper[4986]: I1203 12:58:25.721077 4986 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wvzt8" 
podUID="d366e2f7-22ef-46e5-855f-0f26e6a9186c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Dec 03 12:58:25 crc kubenswrapper[4986]: I1203 12:58:25.810054 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:58:25 crc kubenswrapper[4986]: E1203 12:58:25.810738 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:26.310717082 +0000 UTC m=+165.777148273 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:25 crc kubenswrapper[4986]: I1203 12:58:25.913222 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:25 crc kubenswrapper[4986]: E1203 12:58:25.913539 4986 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:26.413527383 +0000 UTC m=+165.879958574 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:25 crc kubenswrapper[4986]: I1203 12:58:25.957122 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-npjwk" Dec 03 12:58:26 crc kubenswrapper[4986]: I1203 12:58:26.018660 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:58:26 crc kubenswrapper[4986]: E1203 12:58:26.019639 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:26.519618516 +0000 UTC m=+165.986049707 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:26 crc kubenswrapper[4986]: I1203 12:58:26.098246 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-62rd9" event={"ID":"5c9ab9a7-030c-4d45-8c75-950f457bb69c","Type":"ContainerStarted","Data":"9fc39b0d975fc2360c2112c5a738e0cab73d9c96c91ae6e57359210112be097c"} Dec 03 12:58:26 crc kubenswrapper[4986]: I1203 12:58:26.109200 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pjgnh" event={"ID":"3e7e7e0f-8493-47c5-859d-3e7046c7ddab","Type":"ContainerStarted","Data":"99abd5933637f659bd48cdd83a78b52eb7ad1e5afa60ad71ff8247490fbe60be"} Dec 03 12:58:26 crc kubenswrapper[4986]: I1203 12:58:26.115796 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hhmft" event={"ID":"6a077eb6-dc4e-4b2c-88be-eb4fad075e7a","Type":"ContainerStarted","Data":"d80ddb5f8adee5fc7ff36f633eba6b761c761062f64fe5bc9605a8f16dc8d56c"} Dec 03 12:58:26 crc kubenswrapper[4986]: I1203 12:58:26.116507 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hhmft" Dec 03 12:58:26 crc kubenswrapper[4986]: I1203 12:58:26.120089 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:26 crc kubenswrapper[4986]: E1203 12:58:26.120549 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:26.620528884 +0000 UTC m=+166.086960075 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:26 crc kubenswrapper[4986]: I1203 12:58:26.161917 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-rl2mt" event={"ID":"ea24f625-ded4-4e37-a23b-f96fe691b0dd","Type":"ContainerStarted","Data":"bab9d5d723acba55c81cf9833e325ef8b4117851bb0db82c4225f75358706e3b"} Dec 03 12:58:26 crc kubenswrapper[4986]: I1203 12:58:26.222729 4986 generic.go:334] "Generic (PLEG): container finished" podID="5825d5b3-a095-48d6-9365-d03d1faa63ca" containerID="ff89b4253395e709c82da821d83850dac38eda5dab9141482bee63effe03f882" exitCode=0 Dec 03 12:58:26 crc kubenswrapper[4986]: I1203 12:58:26.222823 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-72n7g" event={"ID":"5825d5b3-a095-48d6-9365-d03d1faa63ca","Type":"ContainerDied","Data":"ff89b4253395e709c82da821d83850dac38eda5dab9141482bee63effe03f882"} Dec 03 12:58:26 crc kubenswrapper[4986]: I1203 12:58:26.224254 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:58:26 crc kubenswrapper[4986]: E1203 12:58:26.225122 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:26.725109244 +0000 UTC m=+166.191540435 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:26 crc kubenswrapper[4986]: I1203 12:58:26.251544 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-62rd9" podStartSLOduration=143.251521742 podStartE2EDuration="2m23.251521742s" podCreationTimestamp="2025-12-03 12:56:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:58:26.251392998 +0000 UTC m=+165.717824189" watchObservedRunningTime="2025-12-03 12:58:26.251521742 +0000 UTC m=+165.717952933" Dec 03 12:58:26 crc kubenswrapper[4986]: I1203 12:58:26.254014 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-r4wn7" 
event={"ID":"30903761-66a3-4874-b464-8e1127bd8ed5","Type":"ContainerStarted","Data":"1b95ccb1d1f85d2c26512509506efdfe6509ecabca25c0844e6b358b3c5b8c58"} Dec 03 12:58:26 crc kubenswrapper[4986]: I1203 12:58:26.318569 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qxpz7" event={"ID":"e6f57929-5eab-4a61-88dd-e91ca735bbd5","Type":"ContainerStarted","Data":"c362f6190734bc0501d017d9d71ae9afe412c9e958fa87ee82cbf7bf8830810b"} Dec 03 12:58:26 crc kubenswrapper[4986]: I1203 12:58:26.327506 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:26 crc kubenswrapper[4986]: E1203 12:58:26.329425 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:26.829410947 +0000 UTC m=+166.295842138 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:26 crc kubenswrapper[4986]: I1203 12:58:26.334180 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kjctk" event={"ID":"051204e8-a3a7-47ae-8170-55f5776cfa1e","Type":"ContainerStarted","Data":"2e36ec5946971f47b52fd942cb4cd8430548999d5016e87e819ef06c283a8a70"} Dec 03 12:58:26 crc kubenswrapper[4986]: I1203 12:58:26.335382 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kjctk" Dec 03 12:58:26 crc kubenswrapper[4986]: I1203 12:58:26.348813 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2glzg" event={"ID":"f51b3907-2cbf-4b80-a800-411404195052","Type":"ContainerStarted","Data":"91269780d0e17dcd912833f5109fec1538147a75cc86df4dcb3a2c1a6243d1de"} Dec 03 12:58:26 crc kubenswrapper[4986]: I1203 12:58:26.362926 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-7wtlt" event={"ID":"dd374c2f-10af-47c4-b550-d5c03c39ca45","Type":"ContainerStarted","Data":"6c3fc646b2d1b082b26e12ad5bfed69fc232ac8bc06ae6b58d46004338fde19a"} Dec 03 12:58:26 crc kubenswrapper[4986]: I1203 12:58:26.383200 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8hs4t" 
event={"ID":"d22bd794-105c-4c41-b1c3-46723ed0cb79","Type":"ContainerStarted","Data":"7f5173348145be8978fbbd20a9d88a1240033298fbf123637a3a877e108812da"} Dec 03 12:58:26 crc kubenswrapper[4986]: I1203 12:58:26.404633 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-xn84j" event={"ID":"2588cb3b-8139-4529-a6e1-c57532afdfa7","Type":"ContainerStarted","Data":"7f0ac6289812eaaf60d7897c382866edc1f779d44606dc255f1c375a49819241"} Dec 03 12:58:26 crc kubenswrapper[4986]: I1203 12:58:26.411004 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hjt88" event={"ID":"69bf978d-34fa-4cfa-8c78-7d0bdfa3298f","Type":"ContainerStarted","Data":"dc2753e17a6ee8352010ca8f1f855c16c8bcaae2b1420bd2afc7d48cb5e25fd2"} Dec 03 12:58:26 crc kubenswrapper[4986]: I1203 12:58:26.436781 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:58:26 crc kubenswrapper[4986]: E1203 12:58:26.437137 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:26.937116916 +0000 UTC m=+166.403548107 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:26 crc kubenswrapper[4986]: I1203 12:58:26.446228 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2tq6k" podStartSLOduration=144.446209953 podStartE2EDuration="2m24.446209953s" podCreationTimestamp="2025-12-03 12:56:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:58:26.445256566 +0000 UTC m=+165.911687757" watchObservedRunningTime="2025-12-03 12:58:26.446209953 +0000 UTC m=+165.912641144" Dec 03 12:58:26 crc kubenswrapper[4986]: I1203 12:58:26.451581 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-k58fj" event={"ID":"f23fc85f-9e00-4046-8c6b-b3f10deb822b","Type":"ContainerStarted","Data":"380c045a798ca3ed4671b50bf959762e5113e0e8ed61df20e5afcba145dfde22"} Dec 03 12:58:26 crc kubenswrapper[4986]: I1203 12:58:26.451632 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-575sd" Dec 03 12:58:26 crc kubenswrapper[4986]: I1203 12:58:26.452633 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-szpmg" Dec 03 12:58:26 crc kubenswrapper[4986]: I1203 12:58:26.453219 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-zzj7p" Dec 03 
12:58:26 crc kubenswrapper[4986]: I1203 12:58:26.453439 4986 patch_prober.go:28] interesting pod/downloads-7954f5f757-wvzt8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Dec 03 12:58:26 crc kubenswrapper[4986]: I1203 12:58:26.453464 4986 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wvzt8" podUID="d366e2f7-22ef-46e5-855f-0f26e6a9186c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Dec 03 12:58:26 crc kubenswrapper[4986]: I1203 12:58:26.484709 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-575sd" Dec 03 12:58:26 crc kubenswrapper[4986]: I1203 12:58:26.484798 4986 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zzj7p container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Dec 03 12:58:26 crc kubenswrapper[4986]: I1203 12:58:26.484829 4986 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zzj7p" podUID="89c50427-14ae-409d-89d5-a56be0ff97d1" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" Dec 03 12:58:26 crc kubenswrapper[4986]: I1203 12:58:26.487738 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pjgnh" podStartSLOduration=145.487717679 podStartE2EDuration="2m25.487717679s" podCreationTimestamp="2025-12-03 12:56:01 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:58:26.485386252 +0000 UTC m=+165.951817443" watchObservedRunningTime="2025-12-03 12:58:26.487717679 +0000 UTC m=+165.954148870" Dec 03 12:58:26 crc kubenswrapper[4986]: I1203 12:58:26.501325 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n6hjp" Dec 03 12:58:26 crc kubenswrapper[4986]: I1203 12:58:26.541992 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:26 crc kubenswrapper[4986]: E1203 12:58:26.550838 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:27.050824045 +0000 UTC m=+166.517255236 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:26 crc kubenswrapper[4986]: I1203 12:58:26.645123 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:58:26 crc kubenswrapper[4986]: E1203 12:58:26.645539 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:27.145520096 +0000 UTC m=+166.611951287 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:26 crc kubenswrapper[4986]: I1203 12:58:26.645800 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hhmft" podStartSLOduration=144.645781233 podStartE2EDuration="2m24.645781233s" podCreationTimestamp="2025-12-03 12:56:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:58:26.568491525 +0000 UTC m=+166.034922736" watchObservedRunningTime="2025-12-03 12:58:26.645781233 +0000 UTC m=+166.112212424" Dec 03 12:58:26 crc kubenswrapper[4986]: I1203 12:58:26.747981 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:26 crc kubenswrapper[4986]: E1203 12:58:26.748273 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:27.248259934 +0000 UTC m=+166.714691125 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:26 crc kubenswrapper[4986]: I1203 12:58:26.749156 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2glzg" podStartSLOduration=144.74914115 podStartE2EDuration="2m24.74914115s" podCreationTimestamp="2025-12-03 12:56:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:58:26.74775638 +0000 UTC m=+166.214187571" watchObservedRunningTime="2025-12-03 12:58:26.74914115 +0000 UTC m=+166.215572341" Dec 03 12:58:26 crc kubenswrapper[4986]: I1203 12:58:26.750408 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rwrf2" podStartSLOduration=143.750402335 podStartE2EDuration="2m23.750402335s" podCreationTimestamp="2025-12-03 12:56:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:58:26.644658661 +0000 UTC m=+166.111089852" watchObservedRunningTime="2025-12-03 12:58:26.750402335 +0000 UTC m=+166.216833526" Dec 03 12:58:26 crc kubenswrapper[4986]: I1203 12:58:26.856790 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:58:26 crc kubenswrapper[4986]: E1203 12:58:26.857392 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:27.357376904 +0000 UTC m=+166.823808095 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:26 crc kubenswrapper[4986]: I1203 12:58:26.864779 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6k9lf" podStartSLOduration=144.864764372 podStartE2EDuration="2m24.864764372s" podCreationTimestamp="2025-12-03 12:56:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:58:26.812689078 +0000 UTC m=+166.279120269" watchObservedRunningTime="2025-12-03 12:58:26.864764372 +0000 UTC m=+166.331195563" Dec 03 12:58:26 crc kubenswrapper[4986]: I1203 12:58:26.906184 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-glvvh" podStartSLOduration=144.906166225 podStartE2EDuration="2m24.906166225s" podCreationTimestamp="2025-12-03 12:56:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-03 12:58:26.905478695 +0000 UTC m=+166.371909896" watchObservedRunningTime="2025-12-03 12:58:26.906166225 +0000 UTC m=+166.372597416" Dec 03 12:58:26 crc kubenswrapper[4986]: I1203 12:58:26.907211 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kjctk" podStartSLOduration=143.907202563 podStartE2EDuration="2m23.907202563s" podCreationTimestamp="2025-12-03 12:56:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:58:26.86574531 +0000 UTC m=+166.332176501" watchObservedRunningTime="2025-12-03 12:58:26.907202563 +0000 UTC m=+166.373633754" Dec 03 12:58:26 crc kubenswrapper[4986]: I1203 12:58:26.959276 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:26 crc kubenswrapper[4986]: E1203 12:58:26.959670 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:27.459658009 +0000 UTC m=+166.926089190 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:26 crc kubenswrapper[4986]: I1203 12:58:26.982646 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-d7f6t" podStartSLOduration=144.982630769 podStartE2EDuration="2m24.982630769s" podCreationTimestamp="2025-12-03 12:56:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:58:26.980989193 +0000 UTC m=+166.447420384" watchObservedRunningTime="2025-12-03 12:58:26.982630769 +0000 UTC m=+166.449061960" Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.035429 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7xp9j" podStartSLOduration=145.035412814 podStartE2EDuration="2m25.035412814s" podCreationTimestamp="2025-12-03 12:56:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:58:27.034153768 +0000 UTC m=+166.500584959" watchObservedRunningTime="2025-12-03 12:58:27.035412814 +0000 UTC m=+166.501844005" Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.060164 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:58:27 crc kubenswrapper[4986]: E1203 12:58:27.060392 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:27.560363 +0000 UTC m=+167.026794201 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.060524 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:27 crc kubenswrapper[4986]: E1203 12:58:27.060849 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:27.560836833 +0000 UTC m=+167.027268024 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.080405 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-9wnfz" podStartSLOduration=145.080380966 podStartE2EDuration="2m25.080380966s" podCreationTimestamp="2025-12-03 12:56:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:58:27.077727661 +0000 UTC m=+166.544158842" watchObservedRunningTime="2025-12-03 12:58:27.080380966 +0000 UTC m=+166.546812157" Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.102894 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qxpz7" podStartSLOduration=145.102876603 podStartE2EDuration="2m25.102876603s" podCreationTimestamp="2025-12-03 12:56:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:58:27.101842344 +0000 UTC m=+166.568273545" watchObservedRunningTime="2025-12-03 12:58:27.102876603 +0000 UTC m=+166.569307784" Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.134969 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-9wnfz" Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.137953 4986 patch_prober.go:28] interesting pod/router-default-5444994796-9wnfz container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 12:58:27 crc kubenswrapper[4986]: [-]has-synced failed: reason withheld Dec 03 12:58:27 crc kubenswrapper[4986]: [+]process-running ok Dec 03 12:58:27 crc kubenswrapper[4986]: healthz check failed Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.138038 4986 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9wnfz" podUID="872b07f2-d557-4c06-a432-18b9a46fe6cc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.163338 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:58:27 crc kubenswrapper[4986]: E1203 12:58:27.163693 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:27.663671955 +0000 UTC m=+167.130103146 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.178248 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hjt88" podStartSLOduration=145.178230687 podStartE2EDuration="2m25.178230687s" podCreationTimestamp="2025-12-03 12:56:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:58:27.133775828 +0000 UTC m=+166.600207019" watchObservedRunningTime="2025-12-03 12:58:27.178230687 +0000 UTC m=+166.644661878" Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.182049 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-k58fj" podStartSLOduration=13.182030055 podStartE2EDuration="13.182030055s" podCreationTimestamp="2025-12-03 12:58:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:58:27.177537006 +0000 UTC m=+166.643968207" watchObservedRunningTime="2025-12-03 12:58:27.182030055 +0000 UTC m=+166.648461256" Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.212339 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kx8xd" podStartSLOduration=145.212318271 podStartE2EDuration="2m25.212318271s" podCreationTimestamp="2025-12-03 12:56:02 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:58:27.211778686 +0000 UTC m=+166.678209877" watchObservedRunningTime="2025-12-03 12:58:27.212318271 +0000 UTC m=+166.678749462" Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.252160 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-zzj7p" podStartSLOduration=145.252147189 podStartE2EDuration="2m25.252147189s" podCreationTimestamp="2025-12-03 12:56:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:58:27.249959697 +0000 UTC m=+166.716390888" watchObservedRunningTime="2025-12-03 12:58:27.252147189 +0000 UTC m=+166.718578380" Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.264493 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:27 crc kubenswrapper[4986]: E1203 12:58:27.264847 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:27.764832318 +0000 UTC m=+167.231263509 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.294709 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-r4wn7" podStartSLOduration=13.294690513 podStartE2EDuration="13.294690513s" podCreationTimestamp="2025-12-03 12:58:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:58:27.275654815 +0000 UTC m=+166.742086006" watchObservedRunningTime="2025-12-03 12:58:27.294690513 +0000 UTC m=+166.761121704" Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.295830 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-xn84j" podStartSLOduration=145.295821586 podStartE2EDuration="2m25.295821586s" podCreationTimestamp="2025-12-03 12:56:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:58:27.291139973 +0000 UTC m=+166.757571164" watchObservedRunningTime="2025-12-03 12:58:27.295821586 +0000 UTC m=+166.762252777" Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.357317 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-b8hxm"] Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.358438 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b8hxm" Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.365196 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:58:27 crc kubenswrapper[4986]: E1203 12:58:27.365720 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:27.865700904 +0000 UTC m=+167.332132095 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.378138 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.385454 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-rskz9" podStartSLOduration=144.385434002 podStartE2EDuration="2m24.385434002s" podCreationTimestamp="2025-12-03 12:56:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:58:27.378471725 +0000 UTC 
m=+166.844902946" watchObservedRunningTime="2025-12-03 12:58:27.385434002 +0000 UTC m=+166.851865193" Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.385965 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b8hxm"] Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.450343 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29412765-29wpf" podStartSLOduration=144.450323019 podStartE2EDuration="2m24.450323019s" podCreationTimestamp="2025-12-03 12:56:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:58:27.441585662 +0000 UTC m=+166.908016863" watchObservedRunningTime="2025-12-03 12:58:27.450323019 +0000 UTC m=+166.916754220" Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.452903 4986 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-szpmg container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.453115 4986 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-szpmg" podUID="71392224-bf56-4c1b-9f0a-bcecd393660d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.41:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.467394 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/7aceb259-65a5-45a6-acd1-8f5cac430ef7-catalog-content\") pod \"community-operators-b8hxm\" (UID: \"7aceb259-65a5-45a6-acd1-8f5cac430ef7\") " pod="openshift-marketplace/community-operators-b8hxm" Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.467486 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lfms\" (UniqueName: \"kubernetes.io/projected/7aceb259-65a5-45a6-acd1-8f5cac430ef7-kube-api-access-6lfms\") pod \"community-operators-b8hxm\" (UID: \"7aceb259-65a5-45a6-acd1-8f5cac430ef7\") " pod="openshift-marketplace/community-operators-b8hxm" Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.467513 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7aceb259-65a5-45a6-acd1-8f5cac430ef7-utilities\") pod \"community-operators-b8hxm\" (UID: \"7aceb259-65a5-45a6-acd1-8f5cac430ef7\") " pod="openshift-marketplace/community-operators-b8hxm" Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.467566 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:27 crc kubenswrapper[4986]: E1203 12:58:27.467853 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:27.967834605 +0000 UTC m=+167.434265856 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.495960 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2tq6k" event={"ID":"4cd4de5c-8c29-4f8e-bde8-1c21bd833d80","Type":"ContainerStarted","Data":"a84a0ec11c45d1c5b09f3fdf7acb040937325527a3673a59e53af8123f4d1497"} Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.513870 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-szpmg" podStartSLOduration=144.513855008 podStartE2EDuration="2m24.513855008s" podCreationTimestamp="2025-12-03 12:56:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:58:27.511705417 +0000 UTC m=+166.978136608" watchObservedRunningTime="2025-12-03 12:58:27.513855008 +0000 UTC m=+166.980286199" Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.533547 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-jwlmm" event={"ID":"d68d65c4-d762-426b-84bf-d1e05738d0ee","Type":"ContainerStarted","Data":"77a5d6b068faed2f9dfc55573c4575806c381f35c0f33be64728a36cb1d13f74"} Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.545778 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-rl2mt" 
event={"ID":"ea24f625-ded4-4e37-a23b-f96fe691b0dd","Type":"ContainerStarted","Data":"988dea4c6b024d2bbe0d7baef543383b58960ef0606cb7713abde92d6cdf1db1"} Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.570516 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.570763 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lfms\" (UniqueName: \"kubernetes.io/projected/7aceb259-65a5-45a6-acd1-8f5cac430ef7-kube-api-access-6lfms\") pod \"community-operators-b8hxm\" (UID: \"7aceb259-65a5-45a6-acd1-8f5cac430ef7\") " pod="openshift-marketplace/community-operators-b8hxm" Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.570801 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7aceb259-65a5-45a6-acd1-8f5cac430ef7-utilities\") pod \"community-operators-b8hxm\" (UID: \"7aceb259-65a5-45a6-acd1-8f5cac430ef7\") " pod="openshift-marketplace/community-operators-b8hxm" Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.570890 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7aceb259-65a5-45a6-acd1-8f5cac430ef7-catalog-content\") pod \"community-operators-b8hxm\" (UID: \"7aceb259-65a5-45a6-acd1-8f5cac430ef7\") " pod="openshift-marketplace/community-operators-b8hxm" Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.571421 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7aceb259-65a5-45a6-acd1-8f5cac430ef7-catalog-content\") pod 
\"community-operators-b8hxm\" (UID: \"7aceb259-65a5-45a6-acd1-8f5cac430ef7\") " pod="openshift-marketplace/community-operators-b8hxm" Dec 03 12:58:27 crc kubenswrapper[4986]: E1203 12:58:27.571645 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:28.071601863 +0000 UTC m=+167.538033094 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.571741 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7aceb259-65a5-45a6-acd1-8f5cac430ef7-utilities\") pod \"community-operators-b8hxm\" (UID: \"7aceb259-65a5-45a6-acd1-8f5cac430ef7\") " pod="openshift-marketplace/community-operators-b8hxm" Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.572939 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-575sd" podStartSLOduration=144.57292288 podStartE2EDuration="2m24.57292288s" podCreationTimestamp="2025-12-03 12:56:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:58:27.569862384 +0000 UTC m=+167.036293595" watchObservedRunningTime="2025-12-03 12:58:27.57292288 +0000 UTC m=+167.039354061" Dec 03 12:58:27 crc kubenswrapper[4986]: 
I1203 12:58:27.587600 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dqctr" event={"ID":"2ad8791b-88af-4c86-8ed4-999a3357c3b7","Type":"ContainerStarted","Data":"548a36bd8bd331c41bd9a110816cc078c3912a0f52114a0aefd13dfe40e7d53a"} Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.590234 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vzt5m"] Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.595623 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vzt5m" Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.609061 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.627061 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-7wtlt" event={"ID":"dd374c2f-10af-47c4-b550-d5c03c39ca45","Type":"ContainerStarted","Data":"a1296c0a459d00cda771d302351a0877a3574efd2eb05b70ce54566adcf0178a"} Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.627698 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-7wtlt" Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.627922 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lfms\" (UniqueName: \"kubernetes.io/projected/7aceb259-65a5-45a6-acd1-8f5cac430ef7-kube-api-access-6lfms\") pod \"community-operators-b8hxm\" (UID: \"7aceb259-65a5-45a6-acd1-8f5cac430ef7\") " pod="openshift-marketplace/community-operators-b8hxm" Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.675517 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:27 crc kubenswrapper[4986]: E1203 12:58:27.676563 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:28.176550614 +0000 UTC m=+167.642981805 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.677208 4986 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zzj7p container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.677258 4986 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zzj7p" podUID="89c50427-14ae-409d-89d5-a56be0ff97d1" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.677465 4986 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zzj7p container/marketplace-operator 
namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.677529 4986 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-zzj7p" podUID="89c50427-14ae-409d-89d5-a56be0ff97d1" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.677708 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-vlhbw" event={"ID":"7cc7f044-be33-41ca-b4af-bdf5a0ca066f","Type":"ContainerStarted","Data":"5ebdf717492f5ed31653aef0d3d38dd80c8705a1111c336865095b99319d1fee"} Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.678549 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-jwlmm" podStartSLOduration=145.67853088 podStartE2EDuration="2m25.67853088s" podCreationTimestamp="2025-12-03 12:56:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:58:27.656427375 +0000 UTC m=+167.122858586" watchObservedRunningTime="2025-12-03 12:58:27.67853088 +0000 UTC m=+167.144962081" Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.683384 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vzt5m"] Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.683651 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b8hxm" Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.766530 4986 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zzj7p container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.766596 4986 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zzj7p" podUID="89c50427-14ae-409d-89d5-a56be0ff97d1" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.750266 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bzs74"] Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.775216 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hjt88" event={"ID":"69bf978d-34fa-4cfa-8c78-7d0bdfa3298f","Type":"ContainerStarted","Data":"6679fc933725b86b58fd56845790c86f2b41fdd38f5123e06dc8f3aa6e95e96d"} Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.775371 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bzs74" Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.776186 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.776471 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vzb6\" (UniqueName: \"kubernetes.io/projected/d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f-kube-api-access-7vzb6\") pod \"certified-operators-vzt5m\" (UID: \"d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f\") " pod="openshift-marketplace/certified-operators-vzt5m" Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.776576 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f-utilities\") pod \"certified-operators-vzt5m\" (UID: \"d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f\") " pod="openshift-marketplace/certified-operators-vzt5m" Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.776601 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f-catalog-content\") pod \"certified-operators-vzt5m\" (UID: \"d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f\") " pod="openshift-marketplace/certified-operators-vzt5m" Dec 03 12:58:27 crc kubenswrapper[4986]: E1203 12:58:27.777600 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2025-12-03 12:58:28.277581584 +0000 UTC m=+167.744012775 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.791072 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-rl2mt" podStartSLOduration=145.791053656 podStartE2EDuration="2m25.791053656s" podCreationTimestamp="2025-12-03 12:56:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:58:27.79051323 +0000 UTC m=+167.256944431" watchObservedRunningTime="2025-12-03 12:58:27.791053656 +0000 UTC m=+167.257484847" Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.812514 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bzs74"] Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.879375 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f-utilities\") pod \"certified-operators-vzt5m\" (UID: \"d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f\") " pod="openshift-marketplace/certified-operators-vzt5m" Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.879475 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f-catalog-content\") pod \"certified-operators-vzt5m\" 
(UID: \"d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f\") " pod="openshift-marketplace/certified-operators-vzt5m" Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.879636 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98pc4\" (UniqueName: \"kubernetes.io/projected/37f6dc70-6f91-4162-a225-239a999e4320-kube-api-access-98pc4\") pod \"community-operators-bzs74\" (UID: \"37f6dc70-6f91-4162-a225-239a999e4320\") " pod="openshift-marketplace/community-operators-bzs74" Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.879692 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37f6dc70-6f91-4162-a225-239a999e4320-utilities\") pod \"community-operators-bzs74\" (UID: \"37f6dc70-6f91-4162-a225-239a999e4320\") " pod="openshift-marketplace/community-operators-bzs74" Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.879934 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.880097 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vzb6\" (UniqueName: \"kubernetes.io/projected/d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f-kube-api-access-7vzb6\") pod \"certified-operators-vzt5m\" (UID: \"d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f\") " pod="openshift-marketplace/certified-operators-vzt5m" Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.880412 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/37f6dc70-6f91-4162-a225-239a999e4320-catalog-content\") pod \"community-operators-bzs74\" (UID: \"37f6dc70-6f91-4162-a225-239a999e4320\") " pod="openshift-marketplace/community-operators-bzs74" Dec 03 12:58:27 crc kubenswrapper[4986]: E1203 12:58:27.886319 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:28.386303332 +0000 UTC m=+167.852734523 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.887650 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f-catalog-content\") pod \"certified-operators-vzt5m\" (UID: \"d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f\") " pod="openshift-marketplace/certified-operators-vzt5m" Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.924463 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f-utilities\") pod \"certified-operators-vzt5m\" (UID: \"d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f\") " pod="openshift-marketplace/certified-operators-vzt5m" Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.933707 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-7wtlt" 
podStartSLOduration=13.933679034 podStartE2EDuration="13.933679034s" podCreationTimestamp="2025-12-03 12:58:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:58:27.860659926 +0000 UTC m=+167.327091127" watchObservedRunningTime="2025-12-03 12:58:27.933679034 +0000 UTC m=+167.400110225" Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.980500 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-vlhbw" podStartSLOduration=144.980478918 podStartE2EDuration="2m24.980478918s" podCreationTimestamp="2025-12-03 12:56:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:58:27.937762318 +0000 UTC m=+167.404193509" watchObservedRunningTime="2025-12-03 12:58:27.980478918 +0000 UTC m=+167.446910109" Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.982902 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.983299 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37f6dc70-6f91-4162-a225-239a999e4320-catalog-content\") pod \"community-operators-bzs74\" (UID: \"37f6dc70-6f91-4162-a225-239a999e4320\") " pod="openshift-marketplace/community-operators-bzs74" Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.983347 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98pc4\" (UniqueName: 
\"kubernetes.io/projected/37f6dc70-6f91-4162-a225-239a999e4320-kube-api-access-98pc4\") pod \"community-operators-bzs74\" (UID: \"37f6dc70-6f91-4162-a225-239a999e4320\") " pod="openshift-marketplace/community-operators-bzs74" Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.983371 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37f6dc70-6f91-4162-a225-239a999e4320-utilities\") pod \"community-operators-bzs74\" (UID: \"37f6dc70-6f91-4162-a225-239a999e4320\") " pod="openshift-marketplace/community-operators-bzs74" Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.983885 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37f6dc70-6f91-4162-a225-239a999e4320-utilities\") pod \"community-operators-bzs74\" (UID: \"37f6dc70-6f91-4162-a225-239a999e4320\") " pod="openshift-marketplace/community-operators-bzs74" Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.984046 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37f6dc70-6f91-4162-a225-239a999e4320-catalog-content\") pod \"community-operators-bzs74\" (UID: \"37f6dc70-6f91-4162-a225-239a999e4320\") " pod="openshift-marketplace/community-operators-bzs74" Dec 03 12:58:27 crc kubenswrapper[4986]: E1203 12:58:27.984845 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:28.484822431 +0000 UTC m=+167.951253672 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:27 crc kubenswrapper[4986]: I1203 12:58:27.991960 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9w6ts"] Dec 03 12:58:28 crc kubenswrapper[4986]: I1203 12:58:28.009631 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vzb6\" (UniqueName: \"kubernetes.io/projected/d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f-kube-api-access-7vzb6\") pod \"certified-operators-vzt5m\" (UID: \"d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f\") " pod="openshift-marketplace/certified-operators-vzt5m" Dec 03 12:58:28 crc kubenswrapper[4986]: I1203 12:58:28.012381 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9w6ts"] Dec 03 12:58:28 crc kubenswrapper[4986]: I1203 12:58:28.012503 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9w6ts" Dec 03 12:58:28 crc kubenswrapper[4986]: I1203 12:58:28.066477 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98pc4\" (UniqueName: \"kubernetes.io/projected/37f6dc70-6f91-4162-a225-239a999e4320-kube-api-access-98pc4\") pod \"community-operators-bzs74\" (UID: \"37f6dc70-6f91-4162-a225-239a999e4320\") " pod="openshift-marketplace/community-operators-bzs74" Dec 03 12:58:28 crc kubenswrapper[4986]: I1203 12:58:28.105936 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb0a57df-0313-4698-9e73-373c97e2fb72-utilities\") pod \"certified-operators-9w6ts\" (UID: \"bb0a57df-0313-4698-9e73-373c97e2fb72\") " pod="openshift-marketplace/certified-operators-9w6ts" Dec 03 12:58:28 crc kubenswrapper[4986]: I1203 12:58:28.105990 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzx5s\" (UniqueName: \"kubernetes.io/projected/bb0a57df-0313-4698-9e73-373c97e2fb72-kube-api-access-wzx5s\") pod \"certified-operators-9w6ts\" (UID: \"bb0a57df-0313-4698-9e73-373c97e2fb72\") " pod="openshift-marketplace/certified-operators-9w6ts" Dec 03 12:58:28 crc kubenswrapper[4986]: I1203 12:58:28.106064 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb0a57df-0313-4698-9e73-373c97e2fb72-catalog-content\") pod \"certified-operators-9w6ts\" (UID: \"bb0a57df-0313-4698-9e73-373c97e2fb72\") " pod="openshift-marketplace/certified-operators-9w6ts" Dec 03 12:58:28 crc kubenswrapper[4986]: I1203 12:58:28.106109 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:28 crc kubenswrapper[4986]: E1203 12:58:28.106491 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:28.606476515 +0000 UTC m=+168.072907706 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:28 crc kubenswrapper[4986]: I1203 12:58:28.129391 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-9wnfz" Dec 03 12:58:28 crc kubenswrapper[4986]: I1203 12:58:28.129572 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bzs74" Dec 03 12:58:28 crc kubenswrapper[4986]: I1203 12:58:28.144434 4986 patch_prober.go:28] interesting pod/router-default-5444994796-9wnfz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 12:58:28 crc kubenswrapper[4986]: [-]has-synced failed: reason withheld Dec 03 12:58:28 crc kubenswrapper[4986]: [+]process-running ok Dec 03 12:58:28 crc kubenswrapper[4986]: healthz check failed Dec 03 12:58:28 crc kubenswrapper[4986]: I1203 12:58:28.144488 4986 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9wnfz" podUID="872b07f2-d557-4c06-a432-18b9a46fe6cc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 12:58:28 crc kubenswrapper[4986]: I1203 12:58:28.174888 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dqctr" podStartSLOduration=146.174872642 podStartE2EDuration="2m26.174872642s" podCreationTimestamp="2025-12-03 12:56:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:58:28.173020448 +0000 UTC m=+167.639451659" watchObservedRunningTime="2025-12-03 12:58:28.174872642 +0000 UTC m=+167.641303833" Dec 03 12:58:28 crc kubenswrapper[4986]: I1203 12:58:28.211034 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:58:28 crc kubenswrapper[4986]: I1203 12:58:28.211331 4986 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-wzx5s\" (UniqueName: \"kubernetes.io/projected/bb0a57df-0313-4698-9e73-373c97e2fb72-kube-api-access-wzx5s\") pod \"certified-operators-9w6ts\" (UID: \"bb0a57df-0313-4698-9e73-373c97e2fb72\") " pod="openshift-marketplace/certified-operators-9w6ts" Dec 03 12:58:28 crc kubenswrapper[4986]: I1203 12:58:28.211406 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb0a57df-0313-4698-9e73-373c97e2fb72-catalog-content\") pod \"certified-operators-9w6ts\" (UID: \"bb0a57df-0313-4698-9e73-373c97e2fb72\") " pod="openshift-marketplace/certified-operators-9w6ts" Dec 03 12:58:28 crc kubenswrapper[4986]: I1203 12:58:28.211502 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb0a57df-0313-4698-9e73-373c97e2fb72-utilities\") pod \"certified-operators-9w6ts\" (UID: \"bb0a57df-0313-4698-9e73-373c97e2fb72\") " pod="openshift-marketplace/certified-operators-9w6ts" Dec 03 12:58:28 crc kubenswrapper[4986]: I1203 12:58:28.211975 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb0a57df-0313-4698-9e73-373c97e2fb72-utilities\") pod \"certified-operators-9w6ts\" (UID: \"bb0a57df-0313-4698-9e73-373c97e2fb72\") " pod="openshift-marketplace/certified-operators-9w6ts" Dec 03 12:58:28 crc kubenswrapper[4986]: E1203 12:58:28.212064 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:28.712042273 +0000 UTC m=+168.178473464 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:28 crc kubenswrapper[4986]: I1203 12:58:28.212646 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb0a57df-0313-4698-9e73-373c97e2fb72-catalog-content\") pod \"certified-operators-9w6ts\" (UID: \"bb0a57df-0313-4698-9e73-373c97e2fb72\") " pod="openshift-marketplace/certified-operators-9w6ts" Dec 03 12:58:28 crc kubenswrapper[4986]: I1203 12:58:28.237982 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-szpmg" Dec 03 12:58:28 crc kubenswrapper[4986]: I1203 12:58:28.260667 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-glvvh" Dec 03 12:58:28 crc kubenswrapper[4986]: I1203 12:58:28.278133 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vzt5m" Dec 03 12:58:28 crc kubenswrapper[4986]: I1203 12:58:28.296112 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzx5s\" (UniqueName: \"kubernetes.io/projected/bb0a57df-0313-4698-9e73-373c97e2fb72-kube-api-access-wzx5s\") pod \"certified-operators-9w6ts\" (UID: \"bb0a57df-0313-4698-9e73-373c97e2fb72\") " pod="openshift-marketplace/certified-operators-9w6ts" Dec 03 12:58:28 crc kubenswrapper[4986]: I1203 12:58:28.315676 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:28 crc kubenswrapper[4986]: E1203 12:58:28.316053 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:28.816025948 +0000 UTC m=+168.282457139 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:28 crc kubenswrapper[4986]: I1203 12:58:28.421905 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:58:28 crc kubenswrapper[4986]: E1203 12:58:28.422874 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:28.922855712 +0000 UTC m=+168.389286913 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:28 crc kubenswrapper[4986]: I1203 12:58:28.431599 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9w6ts" Dec 03 12:58:28 crc kubenswrapper[4986]: I1203 12:58:28.498400 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-glvvh" Dec 03 12:58:28 crc kubenswrapper[4986]: I1203 12:58:28.531170 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:28 crc kubenswrapper[4986]: E1203 12:58:28.531629 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:29.031612661 +0000 UTC m=+168.498043852 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:28 crc kubenswrapper[4986]: I1203 12:58:28.633251 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:58:28 crc kubenswrapper[4986]: E1203 12:58:28.633588 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:29.133573117 +0000 UTC m=+168.600004308 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:28 crc kubenswrapper[4986]: I1203 12:58:28.737118 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:28 crc kubenswrapper[4986]: E1203 12:58:28.737549 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:29.23753284 +0000 UTC m=+168.703964031 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:28 crc kubenswrapper[4986]: I1203 12:58:28.838866 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:58:28 crc kubenswrapper[4986]: E1203 12:58:28.839204 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:29.339188678 +0000 UTC m=+168.805619869 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:28 crc kubenswrapper[4986]: I1203 12:58:28.863543 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8hs4t" event={"ID":"d22bd794-105c-4c41-b1c3-46723ed0cb79","Type":"ContainerStarted","Data":"56962b6c9a84a88b603ead1e16bdfa35e56e9c92e8f5ea36dca2b585a73f54ca"} Dec 03 12:58:28 crc kubenswrapper[4986]: I1203 12:58:28.915090 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-72n7g" event={"ID":"5825d5b3-a095-48d6-9365-d03d1faa63ca","Type":"ContainerStarted","Data":"614a11b1c12427f570999813ebeff045d5400222c1c9fa75d850e91d898cbdb4"} Dec 03 12:58:28 crc kubenswrapper[4986]: I1203 12:58:28.915141 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-72n7g" event={"ID":"5825d5b3-a095-48d6-9365-d03d1faa63ca","Type":"ContainerStarted","Data":"42afe99bba4157db3f89889b294408342c3e2993d4589e01724d45be88689b85"} Dec 03 12:58:28 crc kubenswrapper[4986]: I1203 12:58:28.936869 4986 generic.go:334] "Generic (PLEG): container finished" podID="0c102da0-9cf4-4521-97fe-3153aa47a43e" containerID="cb33e8b1845714e4103b5ef1ded54eac8e97493881d1aa910f7df5593c874962" exitCode=0 Dec 03 12:58:28 crc kubenswrapper[4986]: I1203 12:58:28.937740 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412765-29wpf" 
event={"ID":"0c102da0-9cf4-4521-97fe-3153aa47a43e","Type":"ContainerDied","Data":"cb33e8b1845714e4103b5ef1ded54eac8e97493881d1aa910f7df5593c874962"} Dec 03 12:58:28 crc kubenswrapper[4986]: I1203 12:58:28.942584 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:28 crc kubenswrapper[4986]: E1203 12:58:28.942950 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:29.442935555 +0000 UTC m=+168.909366746 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:29 crc kubenswrapper[4986]: I1203 12:58:29.011975 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-72n7g" podStartSLOduration=147.011957189 podStartE2EDuration="2m27.011957189s" podCreationTimestamp="2025-12-03 12:56:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:58:28.99819773 +0000 UTC m=+168.464628931" watchObservedRunningTime="2025-12-03 12:58:29.011957189 +0000 UTC m=+168.478388380" 
Dec 03 12:58:29 crc kubenswrapper[4986]: I1203 12:58:29.017700 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hhmft" Dec 03 12:58:29 crc kubenswrapper[4986]: I1203 12:58:29.017737 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b8hxm"] Dec 03 12:58:29 crc kubenswrapper[4986]: I1203 12:58:29.047859 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:58:29 crc kubenswrapper[4986]: E1203 12:58:29.049561 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:29.549541533 +0000 UTC m=+169.015972724 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:29 crc kubenswrapper[4986]: I1203 12:58:29.152642 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:29 crc kubenswrapper[4986]: E1203 12:58:29.152902 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:29.652891328 +0000 UTC m=+169.119322519 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:29 crc kubenswrapper[4986]: I1203 12:58:29.186604 4986 patch_prober.go:28] interesting pod/router-default-5444994796-9wnfz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 12:58:29 crc kubenswrapper[4986]: [-]has-synced failed: reason withheld Dec 03 12:58:29 crc kubenswrapper[4986]: [+]process-running ok Dec 03 12:58:29 crc kubenswrapper[4986]: healthz check failed Dec 03 12:58:29 crc kubenswrapper[4986]: I1203 12:58:29.186680 4986 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9wnfz" podUID="872b07f2-d557-4c06-a432-18b9a46fe6cc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 12:58:29 crc kubenswrapper[4986]: W1203 12:58:29.205555 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7aceb259_65a5_45a6_acd1_8f5cac430ef7.slice/crio-ff6b9e42c6206e87a652f64d1ca6f411d95b6f7e31694092eb96ae93c410d4d0 WatchSource:0}: Error finding container ff6b9e42c6206e87a652f64d1ca6f411d95b6f7e31694092eb96ae93c410d4d0: Status 404 returned error can't find the container with id ff6b9e42c6206e87a652f64d1ca6f411d95b6f7e31694092eb96ae93c410d4d0 Dec 03 12:58:29 crc kubenswrapper[4986]: I1203 12:58:29.259093 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:58:29 crc kubenswrapper[4986]: E1203 12:58:29.259841 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:29.759824586 +0000 UTC m=+169.226255777 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:29 crc kubenswrapper[4986]: I1203 12:58:29.318736 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bzs74"] Dec 03 12:58:29 crc kubenswrapper[4986]: I1203 12:58:29.320467 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-l6bpp"] Dec 03 12:58:29 crc kubenswrapper[4986]: I1203 12:58:29.321413 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l6bpp" Dec 03 12:58:29 crc kubenswrapper[4986]: I1203 12:58:29.325454 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 03 12:58:29 crc kubenswrapper[4986]: I1203 12:58:29.361253 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:29 crc kubenswrapper[4986]: E1203 12:58:29.361669 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:29.861655368 +0000 UTC m=+169.328086559 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:29 crc kubenswrapper[4986]: I1203 12:58:29.380411 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l6bpp"] Dec 03 12:58:29 crc kubenswrapper[4986]: I1203 12:58:29.462581 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:58:29 crc kubenswrapper[4986]: I1203 12:58:29.462907 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2dbfdc4-7122-4b7f-bfdd-189396fb1c77-utilities\") pod \"redhat-marketplace-l6bpp\" (UID: \"b2dbfdc4-7122-4b7f-bfdd-189396fb1c77\") " pod="openshift-marketplace/redhat-marketplace-l6bpp" Dec 03 12:58:29 crc kubenswrapper[4986]: I1203 12:58:29.462941 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qg2bf\" (UniqueName: \"kubernetes.io/projected/b2dbfdc4-7122-4b7f-bfdd-189396fb1c77-kube-api-access-qg2bf\") pod \"redhat-marketplace-l6bpp\" (UID: \"b2dbfdc4-7122-4b7f-bfdd-189396fb1c77\") " pod="openshift-marketplace/redhat-marketplace-l6bpp" Dec 03 12:58:29 crc kubenswrapper[4986]: I1203 12:58:29.462993 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2dbfdc4-7122-4b7f-bfdd-189396fb1c77-catalog-content\") pod \"redhat-marketplace-l6bpp\" (UID: \"b2dbfdc4-7122-4b7f-bfdd-189396fb1c77\") " pod="openshift-marketplace/redhat-marketplace-l6bpp" Dec 03 12:58:29 crc kubenswrapper[4986]: E1203 12:58:29.463131 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:29.963111841 +0000 UTC m=+169.429543032 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:29 crc kubenswrapper[4986]: I1203 12:58:29.529812 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9w6ts"] Dec 03 12:58:29 crc kubenswrapper[4986]: W1203 12:58:29.548837 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb0a57df_0313_4698_9e73_373c97e2fb72.slice/crio-9aab0b119c0c4bb6641e872a4b2a9bf44406efad7a294c2ad1c9ae1913a3ff6a WatchSource:0}: Error finding container 9aab0b119c0c4bb6641e872a4b2a9bf44406efad7a294c2ad1c9ae1913a3ff6a: Status 404 returned error can't find the container with id 9aab0b119c0c4bb6641e872a4b2a9bf44406efad7a294c2ad1c9ae1913a3ff6a Dec 03 12:58:29 crc kubenswrapper[4986]: I1203 12:58:29.572990 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/b2dbfdc4-7122-4b7f-bfdd-189396fb1c77-utilities\") pod \"redhat-marketplace-l6bpp\" (UID: \"b2dbfdc4-7122-4b7f-bfdd-189396fb1c77\") " pod="openshift-marketplace/redhat-marketplace-l6bpp" Dec 03 12:58:29 crc kubenswrapper[4986]: I1203 12:58:29.573043 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qg2bf\" (UniqueName: \"kubernetes.io/projected/b2dbfdc4-7122-4b7f-bfdd-189396fb1c77-kube-api-access-qg2bf\") pod \"redhat-marketplace-l6bpp\" (UID: \"b2dbfdc4-7122-4b7f-bfdd-189396fb1c77\") " pod="openshift-marketplace/redhat-marketplace-l6bpp" Dec 03 12:58:29 crc kubenswrapper[4986]: I1203 12:58:29.573079 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2dbfdc4-7122-4b7f-bfdd-189396fb1c77-catalog-content\") pod \"redhat-marketplace-l6bpp\" (UID: \"b2dbfdc4-7122-4b7f-bfdd-189396fb1c77\") " pod="openshift-marketplace/redhat-marketplace-l6bpp" Dec 03 12:58:29 crc kubenswrapper[4986]: I1203 12:58:29.573116 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:29 crc kubenswrapper[4986]: I1203 12:58:29.573817 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2dbfdc4-7122-4b7f-bfdd-189396fb1c77-utilities\") pod \"redhat-marketplace-l6bpp\" (UID: \"b2dbfdc4-7122-4b7f-bfdd-189396fb1c77\") " pod="openshift-marketplace/redhat-marketplace-l6bpp" Dec 03 12:58:29 crc kubenswrapper[4986]: I1203 12:58:29.574054 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/b2dbfdc4-7122-4b7f-bfdd-189396fb1c77-catalog-content\") pod \"redhat-marketplace-l6bpp\" (UID: \"b2dbfdc4-7122-4b7f-bfdd-189396fb1c77\") " pod="openshift-marketplace/redhat-marketplace-l6bpp" Dec 03 12:58:29 crc kubenswrapper[4986]: E1203 12:58:29.574195 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:30.074177525 +0000 UTC m=+169.540608756 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:29 crc kubenswrapper[4986]: I1203 12:58:29.623185 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qg2bf\" (UniqueName: \"kubernetes.io/projected/b2dbfdc4-7122-4b7f-bfdd-189396fb1c77-kube-api-access-qg2bf\") pod \"redhat-marketplace-l6bpp\" (UID: \"b2dbfdc4-7122-4b7f-bfdd-189396fb1c77\") " pod="openshift-marketplace/redhat-marketplace-l6bpp" Dec 03 12:58:29 crc kubenswrapper[4986]: I1203 12:58:29.671416 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vzt5m"] Dec 03 12:58:29 crc kubenswrapper[4986]: I1203 12:58:29.676938 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:58:29 crc kubenswrapper[4986]: E1203 12:58:29.677059 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:30.177031867 +0000 UTC m=+169.643463058 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:29 crc kubenswrapper[4986]: I1203 12:58:29.677423 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:29 crc kubenswrapper[4986]: E1203 12:58:29.677974 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:30.177963593 +0000 UTC m=+169.644394784 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:29 crc kubenswrapper[4986]: I1203 12:58:29.703599 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l6bpp" Dec 03 12:58:29 crc kubenswrapper[4986]: I1203 12:58:29.721204 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6mz6x"] Dec 03 12:58:29 crc kubenswrapper[4986]: I1203 12:58:29.722379 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6mz6x" Dec 03 12:58:29 crc kubenswrapper[4986]: I1203 12:58:29.780621 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:58:29 crc kubenswrapper[4986]: E1203 12:58:29.781031 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:30.281002181 +0000 UTC m=+169.747433372 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:29 crc kubenswrapper[4986]: I1203 12:58:29.790104 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6mz6x"] Dec 03 12:58:29 crc kubenswrapper[4986]: I1203 12:58:29.881810 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df90b001-135e-4c75-a67e-3084e905378a-catalog-content\") pod \"redhat-marketplace-6mz6x\" (UID: \"df90b001-135e-4c75-a67e-3084e905378a\") " pod="openshift-marketplace/redhat-marketplace-6mz6x" Dec 03 12:58:29 crc kubenswrapper[4986]: I1203 12:58:29.881866 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:29 crc kubenswrapper[4986]: I1203 12:58:29.881895 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df90b001-135e-4c75-a67e-3084e905378a-utilities\") pod \"redhat-marketplace-6mz6x\" (UID: \"df90b001-135e-4c75-a67e-3084e905378a\") " pod="openshift-marketplace/redhat-marketplace-6mz6x" Dec 03 12:58:29 crc kubenswrapper[4986]: I1203 12:58:29.882044 4986 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrrvj\" (UniqueName: \"kubernetes.io/projected/df90b001-135e-4c75-a67e-3084e905378a-kube-api-access-hrrvj\") pod \"redhat-marketplace-6mz6x\" (UID: \"df90b001-135e-4c75-a67e-3084e905378a\") " pod="openshift-marketplace/redhat-marketplace-6mz6x" Dec 03 12:58:29 crc kubenswrapper[4986]: E1203 12:58:29.882250 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:30.382232086 +0000 UTC m=+169.848663357 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:29 crc kubenswrapper[4986]: I1203 12:58:29.949981 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9w6ts" event={"ID":"bb0a57df-0313-4698-9e73-373c97e2fb72","Type":"ContainerStarted","Data":"9aab0b119c0c4bb6641e872a4b2a9bf44406efad7a294c2ad1c9ae1913a3ff6a"} Dec 03 12:58:29 crc kubenswrapper[4986]: I1203 12:58:29.953752 4986 generic.go:334] "Generic (PLEG): container finished" podID="7aceb259-65a5-45a6-acd1-8f5cac430ef7" containerID="edd9e55fae83b81e03dbd699e8b1347bb1a6471bfe3045f1fdca49370e38d7f4" exitCode=0 Dec 03 12:58:29 crc kubenswrapper[4986]: I1203 12:58:29.953799 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b8hxm" 
event={"ID":"7aceb259-65a5-45a6-acd1-8f5cac430ef7","Type":"ContainerDied","Data":"edd9e55fae83b81e03dbd699e8b1347bb1a6471bfe3045f1fdca49370e38d7f4"} Dec 03 12:58:29 crc kubenswrapper[4986]: I1203 12:58:29.954017 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b8hxm" event={"ID":"7aceb259-65a5-45a6-acd1-8f5cac430ef7","Type":"ContainerStarted","Data":"ff6b9e42c6206e87a652f64d1ca6f411d95b6f7e31694092eb96ae93c410d4d0"} Dec 03 12:58:29 crc kubenswrapper[4986]: I1203 12:58:29.956169 4986 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 12:58:29 crc kubenswrapper[4986]: I1203 12:58:29.957247 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vzt5m" event={"ID":"d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f","Type":"ContainerStarted","Data":"d95b71dee3b196c08e1cbd678bcf27d6d8ff83c64bbbfed0b029bfe39b811381"} Dec 03 12:58:29 crc kubenswrapper[4986]: I1203 12:58:29.973121 4986 generic.go:334] "Generic (PLEG): container finished" podID="37f6dc70-6f91-4162-a225-239a999e4320" containerID="51d2b301b70919f9cb92ebd21387a58082157ef5f0788c77ae18aa4112cfa056" exitCode=0 Dec 03 12:58:29 crc kubenswrapper[4986]: I1203 12:58:29.973215 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bzs74" event={"ID":"37f6dc70-6f91-4162-a225-239a999e4320","Type":"ContainerDied","Data":"51d2b301b70919f9cb92ebd21387a58082157ef5f0788c77ae18aa4112cfa056"} Dec 03 12:58:29 crc kubenswrapper[4986]: I1203 12:58:29.973262 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bzs74" event={"ID":"37f6dc70-6f91-4162-a225-239a999e4320","Type":"ContainerStarted","Data":"96bdf089baa05ec4b934cd86bc1687102d13f34c9fbd12b644b947d3fac047ce"} Dec 03 12:58:29 crc kubenswrapper[4986]: I1203 12:58:29.983430 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:58:29 crc kubenswrapper[4986]: E1203 12:58:29.983665 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:30.483636957 +0000 UTC m=+169.950068158 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:29 crc kubenswrapper[4986]: I1203 12:58:29.983715 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:29 crc kubenswrapper[4986]: I1203 12:58:29.983808 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df90b001-135e-4c75-a67e-3084e905378a-utilities\") pod \"redhat-marketplace-6mz6x\" (UID: \"df90b001-135e-4c75-a67e-3084e905378a\") " pod="openshift-marketplace/redhat-marketplace-6mz6x" Dec 03 12:58:29 crc kubenswrapper[4986]: I1203 
12:58:29.983839 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrrvj\" (UniqueName: \"kubernetes.io/projected/df90b001-135e-4c75-a67e-3084e905378a-kube-api-access-hrrvj\") pod \"redhat-marketplace-6mz6x\" (UID: \"df90b001-135e-4c75-a67e-3084e905378a\") " pod="openshift-marketplace/redhat-marketplace-6mz6x" Dec 03 12:58:29 crc kubenswrapper[4986]: I1203 12:58:29.984000 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df90b001-135e-4c75-a67e-3084e905378a-catalog-content\") pod \"redhat-marketplace-6mz6x\" (UID: \"df90b001-135e-4c75-a67e-3084e905378a\") " pod="openshift-marketplace/redhat-marketplace-6mz6x" Dec 03 12:58:29 crc kubenswrapper[4986]: I1203 12:58:29.985243 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df90b001-135e-4c75-a67e-3084e905378a-catalog-content\") pod \"redhat-marketplace-6mz6x\" (UID: \"df90b001-135e-4c75-a67e-3084e905378a\") " pod="openshift-marketplace/redhat-marketplace-6mz6x" Dec 03 12:58:29 crc kubenswrapper[4986]: E1203 12:58:29.985541 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:30.485529921 +0000 UTC m=+169.951961112 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:29 crc kubenswrapper[4986]: I1203 12:58:29.985831 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df90b001-135e-4c75-a67e-3084e905378a-utilities\") pod \"redhat-marketplace-6mz6x\" (UID: \"df90b001-135e-4c75-a67e-3084e905378a\") " pod="openshift-marketplace/redhat-marketplace-6mz6x" Dec 03 12:58:29 crc kubenswrapper[4986]: I1203 12:58:29.995796 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8hs4t" event={"ID":"d22bd794-105c-4c41-b1c3-46723ed0cb79","Type":"ContainerStarted","Data":"3d791b80f24f29522b6fbe409b6022b7b1944cee94aee8f6dd4827284be63510"} Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.008339 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.014578 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.016976 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.017163 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.021104 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.049146 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrrvj\" (UniqueName: \"kubernetes.io/projected/df90b001-135e-4c75-a67e-3084e905378a-kube-api-access-hrrvj\") pod \"redhat-marketplace-6mz6x\" (UID: \"df90b001-135e-4c75-a67e-3084e905378a\") " pod="openshift-marketplace/redhat-marketplace-6mz6x" Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.093691 4986 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.095349 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.095545 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/241c4ee0-97a4-4efd-9071-0b8b278e6960-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: 
\"241c4ee0-97a4-4efd-9071-0b8b278e6960\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.095601 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/241c4ee0-97a4-4efd-9071-0b8b278e6960-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"241c4ee0-97a4-4efd-9071-0b8b278e6960\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 03 12:58:30 crc kubenswrapper[4986]: E1203 12:58:30.097134 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:30.5971137 +0000 UTC m=+170.063544891 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.112192 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l6bpp"] Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.131022 4986 patch_prober.go:28] interesting pod/router-default-5444994796-9wnfz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 12:58:30 crc kubenswrapper[4986]: [-]has-synced failed: reason withheld Dec 03 12:58:30 crc kubenswrapper[4986]: [+]process-running ok 
Dec 03 12:58:30 crc kubenswrapper[4986]: healthz check failed Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.131266 4986 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9wnfz" podUID="872b07f2-d557-4c06-a432-18b9a46fe6cc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.162963 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6mz6x" Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.197197 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/241c4ee0-97a4-4efd-9071-0b8b278e6960-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"241c4ee0-97a4-4efd-9071-0b8b278e6960\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.197235 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/241c4ee0-97a4-4efd-9071-0b8b278e6960-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"241c4ee0-97a4-4efd-9071-0b8b278e6960\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.197276 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.197439 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/241c4ee0-97a4-4efd-9071-0b8b278e6960-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"241c4ee0-97a4-4efd-9071-0b8b278e6960\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 03 12:58:30 crc kubenswrapper[4986]: E1203 12:58:30.197589 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:30.697576903 +0000 UTC m=+170.164008094 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.219875 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/241c4ee0-97a4-4efd-9071-0b8b278e6960-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"241c4ee0-97a4-4efd-9071-0b8b278e6960\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.299067 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412765-29wpf" Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.299500 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:58:30 crc kubenswrapper[4986]: E1203 12:58:30.299837 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:30.799822368 +0000 UTC m=+170.266253559 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.354565 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.400917 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0c102da0-9cf4-4521-97fe-3153aa47a43e-secret-volume\") pod \"0c102da0-9cf4-4521-97fe-3153aa47a43e\" (UID: \"0c102da0-9cf4-4521-97fe-3153aa47a43e\") " Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.400970 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c102da0-9cf4-4521-97fe-3153aa47a43e-config-volume\") pod \"0c102da0-9cf4-4521-97fe-3153aa47a43e\" (UID: \"0c102da0-9cf4-4521-97fe-3153aa47a43e\") " Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.400996 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngnr6\" (UniqueName: \"kubernetes.io/projected/0c102da0-9cf4-4521-97fe-3153aa47a43e-kube-api-access-ngnr6\") pod \"0c102da0-9cf4-4521-97fe-3153aa47a43e\" (UID: \"0c102da0-9cf4-4521-97fe-3153aa47a43e\") " Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.402377 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c102da0-9cf4-4521-97fe-3153aa47a43e-config-volume" (OuterVolumeSpecName: "config-volume") pod "0c102da0-9cf4-4521-97fe-3153aa47a43e" (UID: "0c102da0-9cf4-4521-97fe-3153aa47a43e"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.402491 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.402602 4986 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c102da0-9cf4-4521-97fe-3153aa47a43e-config-volume\") on node \"crc\" DevicePath \"\"" Dec 03 12:58:30 crc kubenswrapper[4986]: E1203 12:58:30.402886 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:30.902875256 +0000 UTC m=+170.369306447 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.407865 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c102da0-9cf4-4521-97fe-3153aa47a43e-kube-api-access-ngnr6" (OuterVolumeSpecName: "kube-api-access-ngnr6") pod "0c102da0-9cf4-4521-97fe-3153aa47a43e" (UID: "0c102da0-9cf4-4521-97fe-3153aa47a43e"). InnerVolumeSpecName "kube-api-access-ngnr6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.407882 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6mz6x"] Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.409668 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c102da0-9cf4-4521-97fe-3153aa47a43e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0c102da0-9cf4-4521-97fe-3153aa47a43e" (UID: "0c102da0-9cf4-4521-97fe-3153aa47a43e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.506649 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:58:30 crc kubenswrapper[4986]: E1203 12:58:30.506768 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:31.006749926 +0000 UTC m=+170.473181117 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.507019 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.507164 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngnr6\" (UniqueName: \"kubernetes.io/projected/0c102da0-9cf4-4521-97fe-3153aa47a43e-kube-api-access-ngnr6\") on node \"crc\" DevicePath \"\"" Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.507180 4986 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0c102da0-9cf4-4521-97fe-3153aa47a43e-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 03 12:58:30 crc kubenswrapper[4986]: E1203 12:58:30.507540 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:31.007521448 +0000 UTC m=+170.473952719 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.518115 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dgwd8"] Dec 03 12:58:30 crc kubenswrapper[4986]: E1203 12:58:30.520987 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c102da0-9cf4-4521-97fe-3153aa47a43e" containerName="collect-profiles" Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.521026 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c102da0-9cf4-4521-97fe-3153aa47a43e" containerName="collect-profiles" Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.521215 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c102da0-9cf4-4521-97fe-3153aa47a43e" containerName="collect-profiles" Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.522164 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dgwd8" Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.527954 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dgwd8"] Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.531648 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.556937 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 03 12:58:30 crc kubenswrapper[4986]: W1203 12:58:30.562075 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf90b001_135e_4c75_a67e_3084e905378a.slice/crio-afc2dbb4e2d0914a44cc0f0d4c659d4190bffb3c318aa29447b87d3fce9f32c3 WatchSource:0}: Error finding container afc2dbb4e2d0914a44cc0f0d4c659d4190bffb3c318aa29447b87d3fce9f32c3: Status 404 returned error can't find the container with id afc2dbb4e2d0914a44cc0f0d4c659d4190bffb3c318aa29447b87d3fce9f32c3 Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.610845 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:58:30 crc kubenswrapper[4986]: E1203 12:58:30.611026 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:31.110998758 +0000 UTC m=+170.577429949 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.611054 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.611081 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdc29\" (UniqueName: \"kubernetes.io/projected/49d6e247-25ce-45e1-b2fe-2e3ec70cf966-kube-api-access-zdc29\") pod \"redhat-operators-dgwd8\" (UID: \"49d6e247-25ce-45e1-b2fe-2e3ec70cf966\") " pod="openshift-marketplace/redhat-operators-dgwd8" Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.611109 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49d6e247-25ce-45e1-b2fe-2e3ec70cf966-catalog-content\") pod \"redhat-operators-dgwd8\" (UID: \"49d6e247-25ce-45e1-b2fe-2e3ec70cf966\") " pod="openshift-marketplace/redhat-operators-dgwd8" Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.611136 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49d6e247-25ce-45e1-b2fe-2e3ec70cf966-utilities\") 
pod \"redhat-operators-dgwd8\" (UID: \"49d6e247-25ce-45e1-b2fe-2e3ec70cf966\") " pod="openshift-marketplace/redhat-operators-dgwd8" Dec 03 12:58:30 crc kubenswrapper[4986]: E1203 12:58:30.611535 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 12:58:31.111515372 +0000 UTC m=+170.577946623 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zn5lj" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.639595 4986 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-03T12:58:30.093728424Z","Handler":null,"Name":""} Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.712594 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.712774 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdc29\" (UniqueName: \"kubernetes.io/projected/49d6e247-25ce-45e1-b2fe-2e3ec70cf966-kube-api-access-zdc29\") pod \"redhat-operators-dgwd8\" (UID: \"49d6e247-25ce-45e1-b2fe-2e3ec70cf966\") " 
pod="openshift-marketplace/redhat-operators-dgwd8" Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.712802 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49d6e247-25ce-45e1-b2fe-2e3ec70cf966-catalog-content\") pod \"redhat-operators-dgwd8\" (UID: \"49d6e247-25ce-45e1-b2fe-2e3ec70cf966\") " pod="openshift-marketplace/redhat-operators-dgwd8" Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.712826 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49d6e247-25ce-45e1-b2fe-2e3ec70cf966-utilities\") pod \"redhat-operators-dgwd8\" (UID: \"49d6e247-25ce-45e1-b2fe-2e3ec70cf966\") " pod="openshift-marketplace/redhat-operators-dgwd8" Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.713217 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49d6e247-25ce-45e1-b2fe-2e3ec70cf966-utilities\") pod \"redhat-operators-dgwd8\" (UID: \"49d6e247-25ce-45e1-b2fe-2e3ec70cf966\") " pod="openshift-marketplace/redhat-operators-dgwd8" Dec 03 12:58:30 crc kubenswrapper[4986]: E1203 12:58:30.713296 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 12:58:31.213267613 +0000 UTC m=+170.679698804 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.713835 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49d6e247-25ce-45e1-b2fe-2e3ec70cf966-catalog-content\") pod \"redhat-operators-dgwd8\" (UID: \"49d6e247-25ce-45e1-b2fe-2e3ec70cf966\") " pod="openshift-marketplace/redhat-operators-dgwd8" Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.713893 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-62rd9" Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.714705 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-62rd9" Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.734040 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-62rd9" Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.738052 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdc29\" (UniqueName: \"kubernetes.io/projected/49d6e247-25ce-45e1-b2fe-2e3ec70cf966-kube-api-access-zdc29\") pod \"redhat-operators-dgwd8\" (UID: \"49d6e247-25ce-45e1-b2fe-2e3ec70cf966\") " pod="openshift-marketplace/redhat-operators-dgwd8" Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.769852 4986 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner 
endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.769915 4986 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.815240 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.818038 4986 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.818079 4986 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.841864 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zn5lj\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.843650 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dgwd8" Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.916449 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.917849 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wnfnb"] Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.918816 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wnfnb" Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.943321 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.956698 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Dec 03 12:58:30 crc kubenswrapper[4986]: I1203 12:58:30.957468 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wnfnb"] Dec 03 12:58:31 crc kubenswrapper[4986]: I1203 12:58:31.011942 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412765-29wpf" event={"ID":"0c102da0-9cf4-4521-97fe-3153aa47a43e","Type":"ContainerDied","Data":"9f17bc4cdd557a950075fb69821633cce4e34d2b2178e80ba4f7c9e95d01259b"} Dec 03 12:58:31 crc kubenswrapper[4986]: I1203 12:58:31.011982 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f17bc4cdd557a950075fb69821633cce4e34d2b2178e80ba4f7c9e95d01259b" Dec 03 12:58:31 crc kubenswrapper[4986]: I1203 12:58:31.012006 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412765-29wpf" Dec 03 12:58:31 crc kubenswrapper[4986]: I1203 12:58:31.015031 4986 generic.go:334] "Generic (PLEG): container finished" podID="df90b001-135e-4c75-a67e-3084e905378a" containerID="9956c60d607c5bcb9a97bc8b02faeac53e525e67623f77c4e2114a9b17cc0707" exitCode=0 Dec 03 12:58:31 crc kubenswrapper[4986]: I1203 12:58:31.018459 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6mz6x" event={"ID":"df90b001-135e-4c75-a67e-3084e905378a","Type":"ContainerDied","Data":"9956c60d607c5bcb9a97bc8b02faeac53e525e67623f77c4e2114a9b17cc0707"} Dec 03 12:58:31 crc kubenswrapper[4986]: I1203 12:58:31.018518 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6mz6x" event={"ID":"df90b001-135e-4c75-a67e-3084e905378a","Type":"ContainerStarted","Data":"afc2dbb4e2d0914a44cc0f0d4c659d4190bffb3c318aa29447b87d3fce9f32c3"} Dec 03 12:58:31 crc kubenswrapper[4986]: I1203 12:58:31.036126 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e009819-9f9d-48db-a20b-dc29cef30887-catalog-content\") pod \"redhat-operators-wnfnb\" (UID: \"9e009819-9f9d-48db-a20b-dc29cef30887\") " pod="openshift-marketplace/redhat-operators-wnfnb" Dec 03 12:58:31 crc kubenswrapper[4986]: I1203 12:58:31.036172 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j9q5\" (UniqueName: \"kubernetes.io/projected/9e009819-9f9d-48db-a20b-dc29cef30887-kube-api-access-2j9q5\") pod \"redhat-operators-wnfnb\" (UID: \"9e009819-9f9d-48db-a20b-dc29cef30887\") " pod="openshift-marketplace/redhat-operators-wnfnb" Dec 03 12:58:31 crc kubenswrapper[4986]: I1203 12:58:31.036207 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e009819-9f9d-48db-a20b-dc29cef30887-utilities\") pod \"redhat-operators-wnfnb\" (UID: \"9e009819-9f9d-48db-a20b-dc29cef30887\") " pod="openshift-marketplace/redhat-operators-wnfnb" Dec 03 12:58:31 crc kubenswrapper[4986]: I1203 12:58:31.044712 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8hs4t" event={"ID":"d22bd794-105c-4c41-b1c3-46723ed0cb79","Type":"ContainerStarted","Data":"12cf11b6fcd687f0703e150a4384ac11b6c13b581ed789986f9b42d3aa8687ea"} Dec 03 12:58:31 crc kubenswrapper[4986]: I1203 12:58:31.056143 4986 generic.go:334] "Generic (PLEG): container finished" podID="bb0a57df-0313-4698-9e73-373c97e2fb72" containerID="66dc9dd79127f8956aa40c0b7eea666ffb04b1d9b0485c51919d89c460b975c6" exitCode=0 Dec 03 12:58:31 crc kubenswrapper[4986]: I1203 12:58:31.056430 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9w6ts" event={"ID":"bb0a57df-0313-4698-9e73-373c97e2fb72","Type":"ContainerDied","Data":"66dc9dd79127f8956aa40c0b7eea666ffb04b1d9b0485c51919d89c460b975c6"} Dec 03 12:58:31 crc kubenswrapper[4986]: I1203 12:58:31.064232 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"241c4ee0-97a4-4efd-9071-0b8b278e6960","Type":"ContainerStarted","Data":"61a441d3a64b0328049ea1b46d121bbbfeb9f5c1528b916238f25d7ff46ba539"} Dec 03 12:58:31 crc kubenswrapper[4986]: I1203 12:58:31.064693 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"241c4ee0-97a4-4efd-9071-0b8b278e6960","Type":"ContainerStarted","Data":"976d0cda03e0d15afba90d128b6debed93b924448e3d5020088f013ed16a4ff1"} Dec 03 12:58:31 crc kubenswrapper[4986]: I1203 12:58:31.067688 4986 generic.go:334] "Generic (PLEG): container finished" podID="d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f" 
containerID="616e99569a0850b08628cc199c893d0e93a50f69c87cbe3eac492c0774a4d3d9" exitCode=0 Dec 03 12:58:31 crc kubenswrapper[4986]: I1203 12:58:31.067820 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vzt5m" event={"ID":"d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f","Type":"ContainerDied","Data":"616e99569a0850b08628cc199c893d0e93a50f69c87cbe3eac492c0774a4d3d9"} Dec 03 12:58:31 crc kubenswrapper[4986]: I1203 12:58:31.078794 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l6bpp" event={"ID":"b2dbfdc4-7122-4b7f-bfdd-189396fb1c77","Type":"ContainerDied","Data":"a01285239b62fb2136a6a1f0e401decfca9c90870f63baff0728fd5b4d4e0ae0"} Dec 03 12:58:31 crc kubenswrapper[4986]: I1203 12:58:31.078702 4986 generic.go:334] "Generic (PLEG): container finished" podID="b2dbfdc4-7122-4b7f-bfdd-189396fb1c77" containerID="a01285239b62fb2136a6a1f0e401decfca9c90870f63baff0728fd5b4d4e0ae0" exitCode=0 Dec 03 12:58:31 crc kubenswrapper[4986]: I1203 12:58:31.079987 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l6bpp" event={"ID":"b2dbfdc4-7122-4b7f-bfdd-189396fb1c77","Type":"ContainerStarted","Data":"b762f79cfd0d25c29b9ecd73711b93125579fe42872cb3ef32070ece702a7568"} Dec 03 12:58:31 crc kubenswrapper[4986]: I1203 12:58:31.087118 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:31 crc kubenswrapper[4986]: I1203 12:58:31.105639 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-62rd9" Dec 03 12:58:31 crc kubenswrapper[4986]: I1203 12:58:31.131887 4986 patch_prober.go:28] interesting pod/router-default-5444994796-9wnfz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 12:58:31 crc kubenswrapper[4986]: [-]has-synced failed: reason withheld Dec 03 12:58:31 crc kubenswrapper[4986]: [+]process-running ok Dec 03 12:58:31 crc kubenswrapper[4986]: healthz check failed Dec 03 12:58:31 crc kubenswrapper[4986]: I1203 12:58:31.131948 4986 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9wnfz" podUID="872b07f2-d557-4c06-a432-18b9a46fe6cc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 12:58:31 crc kubenswrapper[4986]: I1203 12:58:31.140440 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2j9q5\" (UniqueName: \"kubernetes.io/projected/9e009819-9f9d-48db-a20b-dc29cef30887-kube-api-access-2j9q5\") pod \"redhat-operators-wnfnb\" (UID: \"9e009819-9f9d-48db-a20b-dc29cef30887\") " pod="openshift-marketplace/redhat-operators-wnfnb" Dec 03 12:58:31 crc kubenswrapper[4986]: I1203 12:58:31.140534 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e009819-9f9d-48db-a20b-dc29cef30887-utilities\") pod \"redhat-operators-wnfnb\" (UID: \"9e009819-9f9d-48db-a20b-dc29cef30887\") " pod="openshift-marketplace/redhat-operators-wnfnb" Dec 03 12:58:31 crc kubenswrapper[4986]: I1203 12:58:31.140653 4986 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e009819-9f9d-48db-a20b-dc29cef30887-catalog-content\") pod \"redhat-operators-wnfnb\" (UID: \"9e009819-9f9d-48db-a20b-dc29cef30887\") " pod="openshift-marketplace/redhat-operators-wnfnb" Dec 03 12:58:31 crc kubenswrapper[4986]: I1203 12:58:31.141255 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e009819-9f9d-48db-a20b-dc29cef30887-catalog-content\") pod \"redhat-operators-wnfnb\" (UID: \"9e009819-9f9d-48db-a20b-dc29cef30887\") " pod="openshift-marketplace/redhat-operators-wnfnb" Dec 03 12:58:31 crc kubenswrapper[4986]: I1203 12:58:31.142041 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e009819-9f9d-48db-a20b-dc29cef30887-utilities\") pod \"redhat-operators-wnfnb\" (UID: \"9e009819-9f9d-48db-a20b-dc29cef30887\") " pod="openshift-marketplace/redhat-operators-wnfnb" Dec 03 12:58:31 crc kubenswrapper[4986]: I1203 12:58:31.150305 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.150266994 podStartE2EDuration="2.150266994s" podCreationTimestamp="2025-12-03 12:58:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:58:31.124180986 +0000 UTC m=+170.590612207" watchObservedRunningTime="2025-12-03 12:58:31.150266994 +0000 UTC m=+170.616698185" Dec 03 12:58:31 crc kubenswrapper[4986]: I1203 12:58:31.165540 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2j9q5\" (UniqueName: \"kubernetes.io/projected/9e009819-9f9d-48db-a20b-dc29cef30887-kube-api-access-2j9q5\") pod \"redhat-operators-wnfnb\" (UID: \"9e009819-9f9d-48db-a20b-dc29cef30887\") " 
pod="openshift-marketplace/redhat-operators-wnfnb" Dec 03 12:58:31 crc kubenswrapper[4986]: I1203 12:58:31.244905 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wnfnb" Dec 03 12:58:31 crc kubenswrapper[4986]: I1203 12:58:31.260297 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-8hs4t" podStartSLOduration=17.260262478 podStartE2EDuration="17.260262478s" podCreationTimestamp="2025-12-03 12:58:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:58:31.254553876 +0000 UTC m=+170.720985067" watchObservedRunningTime="2025-12-03 12:58:31.260262478 +0000 UTC m=+170.726693669" Dec 03 12:58:31 crc kubenswrapper[4986]: I1203 12:58:31.324774 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dgwd8"] Dec 03 12:58:31 crc kubenswrapper[4986]: I1203 12:58:31.595348 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-zn5lj"] Dec 03 12:58:31 crc kubenswrapper[4986]: I1203 12:58:31.819346 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 03 12:58:31 crc kubenswrapper[4986]: I1203 12:58:31.820575 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 03 12:58:31 crc kubenswrapper[4986]: I1203 12:58:31.822326 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 03 12:58:31 crc kubenswrapper[4986]: I1203 12:58:31.822913 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 03 12:58:31 crc kubenswrapper[4986]: I1203 12:58:31.830126 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 03 12:58:31 crc kubenswrapper[4986]: I1203 12:58:31.913909 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wnfnb"] Dec 03 12:58:31 crc kubenswrapper[4986]: W1203 12:58:31.945506 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e009819_9f9d_48db_a20b_dc29cef30887.slice/crio-267c6f74775a8b31d9caf0a8630fa48e20a5bcf3e5aebc5f58dbac27e9b12f38 WatchSource:0}: Error finding container 267c6f74775a8b31d9caf0a8630fa48e20a5bcf3e5aebc5f58dbac27e9b12f38: Status 404 returned error can't find the container with id 267c6f74775a8b31d9caf0a8630fa48e20a5bcf3e5aebc5f58dbac27e9b12f38 Dec 03 12:58:31 crc kubenswrapper[4986]: I1203 12:58:31.952586 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf5a3737-3cfe-4755-882f-e1b83aded253-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"cf5a3737-3cfe-4755-882f-e1b83aded253\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 03 12:58:31 crc kubenswrapper[4986]: I1203 12:58:31.952647 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/cf5a3737-3cfe-4755-882f-e1b83aded253-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"cf5a3737-3cfe-4755-882f-e1b83aded253\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 03 12:58:32 crc kubenswrapper[4986]: I1203 12:58:32.053575 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf5a3737-3cfe-4755-882f-e1b83aded253-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"cf5a3737-3cfe-4755-882f-e1b83aded253\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 03 12:58:32 crc kubenswrapper[4986]: I1203 12:58:32.053652 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cf5a3737-3cfe-4755-882f-e1b83aded253-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"cf5a3737-3cfe-4755-882f-e1b83aded253\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 03 12:58:32 crc kubenswrapper[4986]: I1203 12:58:32.053731 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cf5a3737-3cfe-4755-882f-e1b83aded253-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"cf5a3737-3cfe-4755-882f-e1b83aded253\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 03 12:58:32 crc kubenswrapper[4986]: I1203 12:58:32.083530 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf5a3737-3cfe-4755-882f-e1b83aded253-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"cf5a3737-3cfe-4755-882f-e1b83aded253\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 03 12:58:32 crc kubenswrapper[4986]: I1203 12:58:32.112160 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wnfnb" 
event={"ID":"9e009819-9f9d-48db-a20b-dc29cef30887","Type":"ContainerStarted","Data":"267c6f74775a8b31d9caf0a8630fa48e20a5bcf3e5aebc5f58dbac27e9b12f38"} Dec 03 12:58:32 crc kubenswrapper[4986]: I1203 12:58:32.128913 4986 generic.go:334] "Generic (PLEG): container finished" podID="49d6e247-25ce-45e1-b2fe-2e3ec70cf966" containerID="16316f9bdaba0473467c5d11d4b3f1a15b4fe15563aaa01dabbfa0fb2ebc2fb3" exitCode=0 Dec 03 12:58:32 crc kubenswrapper[4986]: I1203 12:58:32.129039 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dgwd8" event={"ID":"49d6e247-25ce-45e1-b2fe-2e3ec70cf966","Type":"ContainerDied","Data":"16316f9bdaba0473467c5d11d4b3f1a15b4fe15563aaa01dabbfa0fb2ebc2fb3"} Dec 03 12:58:32 crc kubenswrapper[4986]: I1203 12:58:32.129088 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dgwd8" event={"ID":"49d6e247-25ce-45e1-b2fe-2e3ec70cf966","Type":"ContainerStarted","Data":"fbdbe1453f51797ceb196b9a30e5190dcf3b02b3804433a8fb1be00b9d83f943"} Dec 03 12:58:32 crc kubenswrapper[4986]: I1203 12:58:32.131242 4986 patch_prober.go:28] interesting pod/router-default-5444994796-9wnfz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 12:58:32 crc kubenswrapper[4986]: [-]has-synced failed: reason withheld Dec 03 12:58:32 crc kubenswrapper[4986]: [+]process-running ok Dec 03 12:58:32 crc kubenswrapper[4986]: healthz check failed Dec 03 12:58:32 crc kubenswrapper[4986]: I1203 12:58:32.131330 4986 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9wnfz" podUID="872b07f2-d557-4c06-a432-18b9a46fe6cc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 12:58:32 crc kubenswrapper[4986]: I1203 12:58:32.144390 4986 generic.go:334] "Generic (PLEG): container finished" 
podID="241c4ee0-97a4-4efd-9071-0b8b278e6960" containerID="61a441d3a64b0328049ea1b46d121bbbfeb9f5c1528b916238f25d7ff46ba539" exitCode=0 Dec 03 12:58:32 crc kubenswrapper[4986]: I1203 12:58:32.144466 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"241c4ee0-97a4-4efd-9071-0b8b278e6960","Type":"ContainerDied","Data":"61a441d3a64b0328049ea1b46d121bbbfeb9f5c1528b916238f25d7ff46ba539"} Dec 03 12:58:32 crc kubenswrapper[4986]: I1203 12:58:32.146164 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 03 12:58:32 crc kubenswrapper[4986]: I1203 12:58:32.146776 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" event={"ID":"d4e187c5-28e2-4881-8f59-214d93c767b1","Type":"ContainerStarted","Data":"2d1144c30060327bb2ef635a2a1dc4fb446fb85f0bd067cbdfc49d7d4b9a330b"} Dec 03 12:58:32 crc kubenswrapper[4986]: I1203 12:58:32.146821 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" event={"ID":"d4e187c5-28e2-4881-8f59-214d93c767b1","Type":"ContainerStarted","Data":"3082f142ce8019029d193ba82fa33015f6598b2d78158c40f499646613e8ebcc"} Dec 03 12:58:32 crc kubenswrapper[4986]: I1203 12:58:32.147490 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:32 crc kubenswrapper[4986]: I1203 12:58:32.226648 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" podStartSLOduration=150.226626696 podStartE2EDuration="2m30.226626696s" podCreationTimestamp="2025-12-03 12:56:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:58:32.199543559 
+0000 UTC m=+171.665974770" watchObservedRunningTime="2025-12-03 12:58:32.226626696 +0000 UTC m=+171.693057877" Dec 03 12:58:32 crc kubenswrapper[4986]: E1203 12:58:32.327697 4986 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e009819_9f9d_48db_a20b_dc29cef30887.slice/crio-conmon-ca2a128d106abd2bc59f6265ea95f5cc7b48018428284ee87f64b3458b68ffde.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e009819_9f9d_48db_a20b_dc29cef30887.slice/crio-ca2a128d106abd2bc59f6265ea95f5cc7b48018428284ee87f64b3458b68ffde.scope\": RecentStats: unable to find data in memory cache]" Dec 03 12:58:32 crc kubenswrapper[4986]: I1203 12:58:32.563863 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 03 12:58:32 crc kubenswrapper[4986]: I1203 12:58:32.682315 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-72n7g" Dec 03 12:58:32 crc kubenswrapper[4986]: I1203 12:58:32.682986 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-72n7g" Dec 03 12:58:32 crc kubenswrapper[4986]: I1203 12:58:32.694515 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-72n7g" Dec 03 12:58:33 crc kubenswrapper[4986]: I1203 12:58:33.128771 4986 patch_prober.go:28] interesting pod/router-default-5444994796-9wnfz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 12:58:33 crc kubenswrapper[4986]: [-]has-synced failed: reason withheld Dec 03 12:58:33 crc kubenswrapper[4986]: [+]process-running ok Dec 03 12:58:33 crc 
kubenswrapper[4986]: healthz check failed Dec 03 12:58:33 crc kubenswrapper[4986]: I1203 12:58:33.129154 4986 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9wnfz" podUID="872b07f2-d557-4c06-a432-18b9a46fe6cc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 12:58:33 crc kubenswrapper[4986]: I1203 12:58:33.157872 4986 generic.go:334] "Generic (PLEG): container finished" podID="9e009819-9f9d-48db-a20b-dc29cef30887" containerID="ca2a128d106abd2bc59f6265ea95f5cc7b48018428284ee87f64b3458b68ffde" exitCode=0 Dec 03 12:58:33 crc kubenswrapper[4986]: I1203 12:58:33.157953 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wnfnb" event={"ID":"9e009819-9f9d-48db-a20b-dc29cef30887","Type":"ContainerDied","Data":"ca2a128d106abd2bc59f6265ea95f5cc7b48018428284ee87f64b3458b68ffde"} Dec 03 12:58:33 crc kubenswrapper[4986]: I1203 12:58:33.160893 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"cf5a3737-3cfe-4755-882f-e1b83aded253","Type":"ContainerStarted","Data":"12d547445d32bea5b6d831bc88ea234c5f4934fe764221ad7b25e31cfa44795e"} Dec 03 12:58:33 crc kubenswrapper[4986]: I1203 12:58:33.169493 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-72n7g" Dec 03 12:58:33 crc kubenswrapper[4986]: I1203 12:58:33.491598 4986 patch_prober.go:28] interesting pod/machine-config-daemon-xggpj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 12:58:33 crc kubenswrapper[4986]: I1203 12:58:33.491654 4986 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" 
podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 12:58:33 crc kubenswrapper[4986]: I1203 12:58:33.612805 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 03 12:58:33 crc kubenswrapper[4986]: I1203 12:58:33.804474 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/241c4ee0-97a4-4efd-9071-0b8b278e6960-kube-api-access\") pod \"241c4ee0-97a4-4efd-9071-0b8b278e6960\" (UID: \"241c4ee0-97a4-4efd-9071-0b8b278e6960\") " Dec 03 12:58:33 crc kubenswrapper[4986]: I1203 12:58:33.804551 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/241c4ee0-97a4-4efd-9071-0b8b278e6960-kubelet-dir\") pod \"241c4ee0-97a4-4efd-9071-0b8b278e6960\" (UID: \"241c4ee0-97a4-4efd-9071-0b8b278e6960\") " Dec 03 12:58:33 crc kubenswrapper[4986]: I1203 12:58:33.804805 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/241c4ee0-97a4-4efd-9071-0b8b278e6960-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "241c4ee0-97a4-4efd-9071-0b8b278e6960" (UID: "241c4ee0-97a4-4efd-9071-0b8b278e6960"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 12:58:33 crc kubenswrapper[4986]: I1203 12:58:33.815496 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/241c4ee0-97a4-4efd-9071-0b8b278e6960-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "241c4ee0-97a4-4efd-9071-0b8b278e6960" (UID: "241c4ee0-97a4-4efd-9071-0b8b278e6960"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:58:33 crc kubenswrapper[4986]: I1203 12:58:33.906330 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/241c4ee0-97a4-4efd-9071-0b8b278e6960-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 03 12:58:33 crc kubenswrapper[4986]: I1203 12:58:33.906366 4986 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/241c4ee0-97a4-4efd-9071-0b8b278e6960-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 03 12:58:34 crc kubenswrapper[4986]: I1203 12:58:34.127735 4986 patch_prober.go:28] interesting pod/router-default-5444994796-9wnfz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 12:58:34 crc kubenswrapper[4986]: [-]has-synced failed: reason withheld Dec 03 12:58:34 crc kubenswrapper[4986]: [+]process-running ok Dec 03 12:58:34 crc kubenswrapper[4986]: healthz check failed Dec 03 12:58:34 crc kubenswrapper[4986]: I1203 12:58:34.127832 4986 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9wnfz" podUID="872b07f2-d557-4c06-a432-18b9a46fe6cc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 12:58:34 crc kubenswrapper[4986]: I1203 12:58:34.166901 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"241c4ee0-97a4-4efd-9071-0b8b278e6960","Type":"ContainerDied","Data":"976d0cda03e0d15afba90d128b6debed93b924448e3d5020088f013ed16a4ff1"} Dec 03 12:58:34 crc kubenswrapper[4986]: I1203 12:58:34.166951 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="976d0cda03e0d15afba90d128b6debed93b924448e3d5020088f013ed16a4ff1" Dec 03 12:58:34 crc 
kubenswrapper[4986]: I1203 12:58:34.167722 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 03 12:58:35 crc kubenswrapper[4986]: I1203 12:58:35.127438 4986 patch_prober.go:28] interesting pod/router-default-5444994796-9wnfz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 12:58:35 crc kubenswrapper[4986]: [-]has-synced failed: reason withheld Dec 03 12:58:35 crc kubenswrapper[4986]: [+]process-running ok Dec 03 12:58:35 crc kubenswrapper[4986]: healthz check failed Dec 03 12:58:35 crc kubenswrapper[4986]: I1203 12:58:35.127776 4986 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9wnfz" podUID="872b07f2-d557-4c06-a432-18b9a46fe6cc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 12:58:35 crc kubenswrapper[4986]: I1203 12:58:35.204535 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"cf5a3737-3cfe-4755-882f-e1b83aded253","Type":"ContainerStarted","Data":"1d6ccc000c5c04d8c8a4aaabc8e0644db254931c1fbb0b93c18c2bd9abefe082"} Dec 03 12:58:35 crc kubenswrapper[4986]: I1203 12:58:35.242262 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=4.242242847 podStartE2EDuration="4.242242847s" podCreationTimestamp="2025-12-03 12:58:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:58:35.220498932 +0000 UTC m=+174.686930133" watchObservedRunningTime="2025-12-03 12:58:35.242242847 +0000 UTC m=+174.708674038" Dec 03 12:58:35 crc kubenswrapper[4986]: I1203 12:58:35.633406 4986 
patch_prober.go:28] interesting pod/console-f9d7485db-sk8ll container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Dec 03 12:58:35 crc kubenswrapper[4986]: I1203 12:58:35.633456 4986 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-sk8ll" podUID="a069c6b7-9e3b-40fd-b830-0d2a82fadd9a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" Dec 03 12:58:35 crc kubenswrapper[4986]: I1203 12:58:35.721766 4986 patch_prober.go:28] interesting pod/downloads-7954f5f757-wvzt8 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Dec 03 12:58:35 crc kubenswrapper[4986]: I1203 12:58:35.721829 4986 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-wvzt8" podUID="d366e2f7-22ef-46e5-855f-0f26e6a9186c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Dec 03 12:58:35 crc kubenswrapper[4986]: I1203 12:58:35.721768 4986 patch_prober.go:28] interesting pod/downloads-7954f5f757-wvzt8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Dec 03 12:58:35 crc kubenswrapper[4986]: I1203 12:58:35.721947 4986 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wvzt8" podUID="d366e2f7-22ef-46e5-855f-0f26e6a9186c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: 
connection refused" Dec 03 12:58:36 crc kubenswrapper[4986]: I1203 12:58:36.128175 4986 patch_prober.go:28] interesting pod/router-default-5444994796-9wnfz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 12:58:36 crc kubenswrapper[4986]: [-]has-synced failed: reason withheld Dec 03 12:58:36 crc kubenswrapper[4986]: [+]process-running ok Dec 03 12:58:36 crc kubenswrapper[4986]: healthz check failed Dec 03 12:58:36 crc kubenswrapper[4986]: I1203 12:58:36.128374 4986 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9wnfz" podUID="872b07f2-d557-4c06-a432-18b9a46fe6cc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 12:58:36 crc kubenswrapper[4986]: I1203 12:58:36.303568 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-7wtlt" Dec 03 12:58:37 crc kubenswrapper[4986]: I1203 12:58:37.223037 4986 patch_prober.go:28] interesting pod/router-default-5444994796-9wnfz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 12:58:37 crc kubenswrapper[4986]: [-]has-synced failed: reason withheld Dec 03 12:58:37 crc kubenswrapper[4986]: [+]process-running ok Dec 03 12:58:37 crc kubenswrapper[4986]: healthz check failed Dec 03 12:58:37 crc kubenswrapper[4986]: I1203 12:58:37.223090 4986 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9wnfz" podUID="872b07f2-d557-4c06-a432-18b9a46fe6cc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 12:58:37 crc kubenswrapper[4986]: I1203 12:58:37.666417 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/marketplace-operator-79b997595-zzj7p" Dec 03 12:58:38 crc kubenswrapper[4986]: I1203 12:58:38.127021 4986 patch_prober.go:28] interesting pod/router-default-5444994796-9wnfz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 12:58:38 crc kubenswrapper[4986]: [-]has-synced failed: reason withheld Dec 03 12:58:38 crc kubenswrapper[4986]: [+]process-running ok Dec 03 12:58:38 crc kubenswrapper[4986]: healthz check failed Dec 03 12:58:38 crc kubenswrapper[4986]: I1203 12:58:38.127378 4986 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9wnfz" podUID="872b07f2-d557-4c06-a432-18b9a46fe6cc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 12:58:39 crc kubenswrapper[4986]: I1203 12:58:39.128123 4986 patch_prober.go:28] interesting pod/router-default-5444994796-9wnfz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 12:58:39 crc kubenswrapper[4986]: [-]has-synced failed: reason withheld Dec 03 12:58:39 crc kubenswrapper[4986]: [+]process-running ok Dec 03 12:58:39 crc kubenswrapper[4986]: healthz check failed Dec 03 12:58:39 crc kubenswrapper[4986]: I1203 12:58:39.128184 4986 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9wnfz" podUID="872b07f2-d557-4c06-a432-18b9a46fe6cc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 12:58:39 crc kubenswrapper[4986]: I1203 12:58:39.242309 4986 generic.go:334] "Generic (PLEG): container finished" podID="cf5a3737-3cfe-4755-882f-e1b83aded253" containerID="1d6ccc000c5c04d8c8a4aaabc8e0644db254931c1fbb0b93c18c2bd9abefe082" exitCode=0 Dec 03 12:58:39 
crc kubenswrapper[4986]: I1203 12:58:39.242357 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"cf5a3737-3cfe-4755-882f-e1b83aded253","Type":"ContainerDied","Data":"1d6ccc000c5c04d8c8a4aaabc8e0644db254931c1fbb0b93c18c2bd9abefe082"} Dec 03 12:58:40 crc kubenswrapper[4986]: I1203 12:58:40.128407 4986 patch_prober.go:28] interesting pod/router-default-5444994796-9wnfz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 12:58:40 crc kubenswrapper[4986]: [-]has-synced failed: reason withheld Dec 03 12:58:40 crc kubenswrapper[4986]: [+]process-running ok Dec 03 12:58:40 crc kubenswrapper[4986]: healthz check failed Dec 03 12:58:40 crc kubenswrapper[4986]: I1203 12:58:40.128474 4986 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9wnfz" podUID="872b07f2-d557-4c06-a432-18b9a46fe6cc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 12:58:40 crc kubenswrapper[4986]: I1203 12:58:40.497166 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 03 12:58:40 crc kubenswrapper[4986]: I1203 12:58:40.601007 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cf5a3737-3cfe-4755-882f-e1b83aded253-kubelet-dir\") pod \"cf5a3737-3cfe-4755-882f-e1b83aded253\" (UID: \"cf5a3737-3cfe-4755-882f-e1b83aded253\") " Dec 03 12:58:40 crc kubenswrapper[4986]: I1203 12:58:40.601063 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf5a3737-3cfe-4755-882f-e1b83aded253-kube-api-access\") pod \"cf5a3737-3cfe-4755-882f-e1b83aded253\" (UID: \"cf5a3737-3cfe-4755-882f-e1b83aded253\") " Dec 03 12:58:40 crc kubenswrapper[4986]: I1203 12:58:40.601791 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf5a3737-3cfe-4755-882f-e1b83aded253-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "cf5a3737-3cfe-4755-882f-e1b83aded253" (UID: "cf5a3737-3cfe-4755-882f-e1b83aded253"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 12:58:40 crc kubenswrapper[4986]: I1203 12:58:40.610778 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf5a3737-3cfe-4755-882f-e1b83aded253-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "cf5a3737-3cfe-4755-882f-e1b83aded253" (UID: "cf5a3737-3cfe-4755-882f-e1b83aded253"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:58:40 crc kubenswrapper[4986]: I1203 12:58:40.702444 4986 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cf5a3737-3cfe-4755-882f-e1b83aded253-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 03 12:58:40 crc kubenswrapper[4986]: I1203 12:58:40.702477 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf5a3737-3cfe-4755-882f-e1b83aded253-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 03 12:58:41 crc kubenswrapper[4986]: I1203 12:58:41.128428 4986 patch_prober.go:28] interesting pod/router-default-5444994796-9wnfz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 12:58:41 crc kubenswrapper[4986]: [-]has-synced failed: reason withheld Dec 03 12:58:41 crc kubenswrapper[4986]: [+]process-running ok Dec 03 12:58:41 crc kubenswrapper[4986]: healthz check failed Dec 03 12:58:41 crc kubenswrapper[4986]: I1203 12:58:41.128501 4986 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9wnfz" podUID="872b07f2-d557-4c06-a432-18b9a46fe6cc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 12:58:41 crc kubenswrapper[4986]: I1203 12:58:41.256471 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"cf5a3737-3cfe-4755-882f-e1b83aded253","Type":"ContainerDied","Data":"12d547445d32bea5b6d831bc88ea234c5f4934fe764221ad7b25e31cfa44795e"} Dec 03 12:58:41 crc kubenswrapper[4986]: I1203 12:58:41.257075 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12d547445d32bea5b6d831bc88ea234c5f4934fe764221ad7b25e31cfa44795e" Dec 03 12:58:41 crc kubenswrapper[4986]: 
I1203 12:58:41.256699 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 03 12:58:42 crc kubenswrapper[4986]: I1203 12:58:42.127641 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-9wnfz" Dec 03 12:58:42 crc kubenswrapper[4986]: I1203 12:58:42.129649 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-9wnfz" Dec 03 12:58:45 crc kubenswrapper[4986]: I1203 12:58:45.652536 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-sk8ll" Dec 03 12:58:45 crc kubenswrapper[4986]: I1203 12:58:45.656719 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-sk8ll" Dec 03 12:58:45 crc kubenswrapper[4986]: I1203 12:58:45.734151 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-wvzt8" Dec 03 12:58:51 crc kubenswrapper[4986]: I1203 12:58:51.095112 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 12:58:52 crc kubenswrapper[4986]: I1203 12:58:52.554552 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 12:58:58 crc kubenswrapper[4986]: I1203 12:58:58.185399 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kjctk" Dec 03 12:59:03 crc kubenswrapper[4986]: I1203 12:59:03.491097 4986 patch_prober.go:28] interesting pod/machine-config-daemon-xggpj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 12:59:03 crc kubenswrapper[4986]: I1203 12:59:03.491187 4986 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 12:59:07 crc kubenswrapper[4986]: I1203 12:59:07.012962 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 03 12:59:07 crc kubenswrapper[4986]: E1203 12:59:07.014817 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf5a3737-3cfe-4755-882f-e1b83aded253" containerName="pruner" Dec 03 12:59:07 crc kubenswrapper[4986]: I1203 12:59:07.014991 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf5a3737-3cfe-4755-882f-e1b83aded253" containerName="pruner" Dec 03 12:59:07 crc kubenswrapper[4986]: E1203 12:59:07.015141 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="241c4ee0-97a4-4efd-9071-0b8b278e6960" containerName="pruner" Dec 03 12:59:07 crc kubenswrapper[4986]: I1203 12:59:07.015324 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="241c4ee0-97a4-4efd-9071-0b8b278e6960" containerName="pruner" Dec 03 12:59:07 crc kubenswrapper[4986]: I1203 12:59:07.015655 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf5a3737-3cfe-4755-882f-e1b83aded253" containerName="pruner" Dec 03 12:59:07 crc kubenswrapper[4986]: I1203 12:59:07.015793 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="241c4ee0-97a4-4efd-9071-0b8b278e6960" containerName="pruner" Dec 03 12:59:07 crc kubenswrapper[4986]: I1203 12:59:07.016539 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 12:59:07 crc kubenswrapper[4986]: I1203 12:59:07.020653 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 03 12:59:07 crc kubenswrapper[4986]: I1203 12:59:07.022022 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 03 12:59:07 crc kubenswrapper[4986]: I1203 12:59:07.030615 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 03 12:59:07 crc kubenswrapper[4986]: I1203 12:59:07.058323 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7d5fefa4-feb5-4156-a68c-43e94e332068-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7d5fefa4-feb5-4156-a68c-43e94e332068\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 12:59:07 crc kubenswrapper[4986]: I1203 12:59:07.058599 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7d5fefa4-feb5-4156-a68c-43e94e332068-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7d5fefa4-feb5-4156-a68c-43e94e332068\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 12:59:07 crc kubenswrapper[4986]: I1203 12:59:07.159545 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7d5fefa4-feb5-4156-a68c-43e94e332068-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7d5fefa4-feb5-4156-a68c-43e94e332068\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 12:59:07 crc kubenswrapper[4986]: I1203 12:59:07.159610 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/7d5fefa4-feb5-4156-a68c-43e94e332068-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7d5fefa4-feb5-4156-a68c-43e94e332068\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 12:59:07 crc kubenswrapper[4986]: I1203 12:59:07.159673 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7d5fefa4-feb5-4156-a68c-43e94e332068-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7d5fefa4-feb5-4156-a68c-43e94e332068\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 12:59:07 crc kubenswrapper[4986]: I1203 12:59:07.189806 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7d5fefa4-feb5-4156-a68c-43e94e332068-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7d5fefa4-feb5-4156-a68c-43e94e332068\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 12:59:07 crc kubenswrapper[4986]: I1203 12:59:07.375040 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 12:59:12 crc kubenswrapper[4986]: I1203 12:59:12.406062 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 03 12:59:12 crc kubenswrapper[4986]: I1203 12:59:12.406958 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 03 12:59:12 crc kubenswrapper[4986]: I1203 12:59:12.420135 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 03 12:59:12 crc kubenswrapper[4986]: I1203 12:59:12.450351 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fb60d5f6-dca0-4f66-91c4-00dd32d26bcf-kube-api-access\") pod \"installer-9-crc\" (UID: \"fb60d5f6-dca0-4f66-91c4-00dd32d26bcf\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 12:59:12 crc kubenswrapper[4986]: I1203 12:59:12.450405 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fb60d5f6-dca0-4f66-91c4-00dd32d26bcf-kubelet-dir\") pod \"installer-9-crc\" (UID: \"fb60d5f6-dca0-4f66-91c4-00dd32d26bcf\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 12:59:12 crc kubenswrapper[4986]: I1203 12:59:12.450437 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fb60d5f6-dca0-4f66-91c4-00dd32d26bcf-var-lock\") pod \"installer-9-crc\" (UID: \"fb60d5f6-dca0-4f66-91c4-00dd32d26bcf\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 12:59:12 crc kubenswrapper[4986]: I1203 12:59:12.552168 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fb60d5f6-dca0-4f66-91c4-00dd32d26bcf-kube-api-access\") pod \"installer-9-crc\" (UID: \"fb60d5f6-dca0-4f66-91c4-00dd32d26bcf\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 12:59:12 crc kubenswrapper[4986]: I1203 12:59:12.552240 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/fb60d5f6-dca0-4f66-91c4-00dd32d26bcf-kubelet-dir\") pod \"installer-9-crc\" (UID: \"fb60d5f6-dca0-4f66-91c4-00dd32d26bcf\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 12:59:12 crc kubenswrapper[4986]: I1203 12:59:12.552291 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fb60d5f6-dca0-4f66-91c4-00dd32d26bcf-var-lock\") pod \"installer-9-crc\" (UID: \"fb60d5f6-dca0-4f66-91c4-00dd32d26bcf\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 12:59:12 crc kubenswrapper[4986]: I1203 12:59:12.552377 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fb60d5f6-dca0-4f66-91c4-00dd32d26bcf-var-lock\") pod \"installer-9-crc\" (UID: \"fb60d5f6-dca0-4f66-91c4-00dd32d26bcf\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 12:59:12 crc kubenswrapper[4986]: I1203 12:59:12.552761 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fb60d5f6-dca0-4f66-91c4-00dd32d26bcf-kubelet-dir\") pod \"installer-9-crc\" (UID: \"fb60d5f6-dca0-4f66-91c4-00dd32d26bcf\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 12:59:12 crc kubenswrapper[4986]: I1203 12:59:12.572641 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fb60d5f6-dca0-4f66-91c4-00dd32d26bcf-kube-api-access\") pod \"installer-9-crc\" (UID: \"fb60d5f6-dca0-4f66-91c4-00dd32d26bcf\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 12:59:12 crc kubenswrapper[4986]: I1203 12:59:12.725850 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 03 12:59:27 crc kubenswrapper[4986]: E1203 12:59:27.437234 4986 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 03 12:59:27 crc kubenswrapper[4986]: E1203 12:59:27.437880 4986 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-98pc4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Containe
rResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-bzs74_openshift-marketplace(37f6dc70-6f91-4162-a225-239a999e4320): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 12:59:27 crc kubenswrapper[4986]: E1203 12:59:27.439141 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-bzs74" podUID="37f6dc70-6f91-4162-a225-239a999e4320" Dec 03 12:59:32 crc kubenswrapper[4986]: E1203 12:59:32.055057 4986 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 03 12:59:32 crc kubenswrapper[4986]: E1203 12:59:32.055704 4986 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6lfms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-b8hxm_openshift-marketplace(7aceb259-65a5-45a6-acd1-8f5cac430ef7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 12:59:32 crc kubenswrapper[4986]: E1203 12:59:32.057239 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-b8hxm" podUID="7aceb259-65a5-45a6-acd1-8f5cac430ef7" Dec 03 12:59:32 crc 
kubenswrapper[4986]: E1203 12:59:32.891333 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-b8hxm" podUID="7aceb259-65a5-45a6-acd1-8f5cac430ef7" Dec 03 12:59:32 crc kubenswrapper[4986]: E1203 12:59:32.891751 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-bzs74" podUID="37f6dc70-6f91-4162-a225-239a999e4320" Dec 03 12:59:33 crc kubenswrapper[4986]: I1203 12:59:33.491056 4986 patch_prober.go:28] interesting pod/machine-config-daemon-xggpj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 12:59:33 crc kubenswrapper[4986]: I1203 12:59:33.491129 4986 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 12:59:33 crc kubenswrapper[4986]: I1203 12:59:33.491186 4986 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" Dec 03 12:59:33 crc kubenswrapper[4986]: I1203 12:59:33.491863 4986 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d0ab87b22153ba31fa79d2d210e695b274c2ddc878ad679c605aa8fb716534d1"} 
pod="openshift-machine-config-operator/machine-config-daemon-xggpj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 12:59:33 crc kubenswrapper[4986]: I1203 12:59:33.492011 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" containerID="cri-o://d0ab87b22153ba31fa79d2d210e695b274c2ddc878ad679c605aa8fb716534d1" gracePeriod=600 Dec 03 12:59:34 crc kubenswrapper[4986]: I1203 12:59:34.594408 4986 generic.go:334] "Generic (PLEG): container finished" podID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerID="d0ab87b22153ba31fa79d2d210e695b274c2ddc878ad679c605aa8fb716534d1" exitCode=0 Dec 03 12:59:34 crc kubenswrapper[4986]: I1203 12:59:34.594493 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" event={"ID":"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c","Type":"ContainerDied","Data":"d0ab87b22153ba31fa79d2d210e695b274c2ddc878ad679c605aa8fb716534d1"} Dec 03 12:59:34 crc kubenswrapper[4986]: E1203 12:59:34.644325 4986 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 03 12:59:34 crc kubenswrapper[4986]: E1203 12:59:34.644501 4986 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hrrvj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-6mz6x_openshift-marketplace(df90b001-135e-4c75-a67e-3084e905378a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 12:59:34 crc kubenswrapper[4986]: E1203 12:59:34.645698 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-6mz6x" podUID="df90b001-135e-4c75-a67e-3084e905378a" Dec 03 12:59:35 crc 
kubenswrapper[4986]: E1203 12:59:35.641900 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-6mz6x" podUID="df90b001-135e-4c75-a67e-3084e905378a" Dec 03 12:59:35 crc kubenswrapper[4986]: E1203 12:59:35.751081 4986 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 03 12:59:35 crc kubenswrapper[4986]: E1203 12:59:35.751204 4986 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7vzb6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-vzt5m_openshift-marketplace(d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 12:59:35 crc kubenswrapper[4986]: E1203 12:59:35.752366 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-vzt5m" podUID="d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f" Dec 03 12:59:35 crc 
kubenswrapper[4986]: E1203 12:59:35.990862 4986 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 03 12:59:35 crc kubenswrapper[4986]: E1203 12:59:35.991041 4986 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wzx5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-9w6ts_openshift-marketplace(bb0a57df-0313-4698-9e73-373c97e2fb72): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 12:59:35 crc kubenswrapper[4986]: E1203 12:59:35.992397 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-9w6ts" podUID="bb0a57df-0313-4698-9e73-373c97e2fb72" Dec 03 12:59:36 crc kubenswrapper[4986]: E1203 12:59:36.432776 4986 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 03 12:59:36 crc kubenswrapper[4986]: E1203 12:59:36.432966 4986 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qg2bf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-l6bpp_openshift-marketplace(b2dbfdc4-7122-4b7f-bfdd-189396fb1c77): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 12:59:36 crc kubenswrapper[4986]: E1203 12:59:36.434224 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-l6bpp" podUID="b2dbfdc4-7122-4b7f-bfdd-189396fb1c77" Dec 03 12:59:38 crc 
kubenswrapper[4986]: E1203 12:59:38.786800 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-9w6ts" podUID="bb0a57df-0313-4698-9e73-373c97e2fb72" Dec 03 12:59:38 crc kubenswrapper[4986]: E1203 12:59:38.786821 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-l6bpp" podUID="b2dbfdc4-7122-4b7f-bfdd-189396fb1c77" Dec 03 12:59:38 crc kubenswrapper[4986]: E1203 12:59:38.786800 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-vzt5m" podUID="d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f" Dec 03 12:59:38 crc kubenswrapper[4986]: E1203 12:59:38.862934 4986 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 03 12:59:38 crc kubenswrapper[4986]: E1203 12:59:38.863109 4986 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2j9q5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-wnfnb_openshift-marketplace(9e009819-9f9d-48db-a20b-dc29cef30887): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 12:59:38 crc kubenswrapper[4986]: E1203 12:59:38.864500 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-wnfnb" podUID="9e009819-9f9d-48db-a20b-dc29cef30887" Dec 03 12:59:38 crc 
kubenswrapper[4986]: E1203 12:59:38.884575 4986 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 03 12:59:38 crc kubenswrapper[4986]: E1203 12:59:38.884720 4986 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zdc29,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-dgwd8_openshift-marketplace(49d6e247-25ce-45e1-b2fe-2e3ec70cf966): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 12:59:38 crc kubenswrapper[4986]: E1203 12:59:38.886637 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-dgwd8" podUID="49d6e247-25ce-45e1-b2fe-2e3ec70cf966" Dec 03 12:59:39 crc kubenswrapper[4986]: I1203 12:59:39.205152 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 03 12:59:39 crc kubenswrapper[4986]: W1203 12:59:39.210763 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podfb60d5f6_dca0_4f66_91c4_00dd32d26bcf.slice/crio-dcddaf35f8441460dc577f13e9e88e0af5eb0387e55331ac2a0733c3a322c88e WatchSource:0}: Error finding container dcddaf35f8441460dc577f13e9e88e0af5eb0387e55331ac2a0733c3a322c88e: Status 404 returned error can't find the container with id dcddaf35f8441460dc577f13e9e88e0af5eb0387e55331ac2a0733c3a322c88e Dec 03 12:59:39 crc kubenswrapper[4986]: I1203 12:59:39.263726 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 03 12:59:39 crc kubenswrapper[4986]: W1203 12:59:39.273801 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod7d5fefa4_feb5_4156_a68c_43e94e332068.slice/crio-93b23f0958a994ba55049238f092492bd934bad61dee22f02c249d340ee2b44d WatchSource:0}: Error finding container 93b23f0958a994ba55049238f092492bd934bad61dee22f02c249d340ee2b44d: Status 404 returned error can't find the container with id 93b23f0958a994ba55049238f092492bd934bad61dee22f02c249d340ee2b44d Dec 03 12:59:39 
crc kubenswrapper[4986]: I1203 12:59:39.622691 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"7d5fefa4-feb5-4156-a68c-43e94e332068","Type":"ContainerStarted","Data":"93b23f0958a994ba55049238f092492bd934bad61dee22f02c249d340ee2b44d"} Dec 03 12:59:39 crc kubenswrapper[4986]: I1203 12:59:39.623748 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"fb60d5f6-dca0-4f66-91c4-00dd32d26bcf","Type":"ContainerStarted","Data":"dcddaf35f8441460dc577f13e9e88e0af5eb0387e55331ac2a0733c3a322c88e"} Dec 03 12:59:39 crc kubenswrapper[4986]: I1203 12:59:39.625824 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" event={"ID":"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c","Type":"ContainerStarted","Data":"b5b4a46f5d4e257eeb833a2af2a9b0432001e7c973833c320044c5109d1acbda"} Dec 03 12:59:39 crc kubenswrapper[4986]: E1203 12:59:39.627816 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-wnfnb" podUID="9e009819-9f9d-48db-a20b-dc29cef30887" Dec 03 12:59:39 crc kubenswrapper[4986]: E1203 12:59:39.627823 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-dgwd8" podUID="49d6e247-25ce-45e1-b2fe-2e3ec70cf966" Dec 03 12:59:40 crc kubenswrapper[4986]: I1203 12:59:40.633065 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" 
event={"ID":"7d5fefa4-feb5-4156-a68c-43e94e332068","Type":"ContainerStarted","Data":"9e17c57a703e5aa2b7e73c00323162f20bfb57c9e8e696106624f99dd8d795a2"} Dec 03 12:59:40 crc kubenswrapper[4986]: I1203 12:59:40.634664 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"fb60d5f6-dca0-4f66-91c4-00dd32d26bcf","Type":"ContainerStarted","Data":"bdc6b8c80d0c7ef7a53f00d17927a687fdbc7b4793eec0a6236998a1703c7f42"} Dec 03 12:59:40 crc kubenswrapper[4986]: I1203 12:59:40.660820 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=33.660798234 podStartE2EDuration="33.660798234s" podCreationTimestamp="2025-12-03 12:59:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:59:40.656792203 +0000 UTC m=+240.123223394" watchObservedRunningTime="2025-12-03 12:59:40.660798234 +0000 UTC m=+240.127229425" Dec 03 12:59:40 crc kubenswrapper[4986]: I1203 12:59:40.676902 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=28.676885193 podStartE2EDuration="28.676885193s" podCreationTimestamp="2025-12-03 12:59:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:59:40.675637088 +0000 UTC m=+240.142068289" watchObservedRunningTime="2025-12-03 12:59:40.676885193 +0000 UTC m=+240.143316384" Dec 03 12:59:41 crc kubenswrapper[4986]: I1203 12:59:41.640720 4986 generic.go:334] "Generic (PLEG): container finished" podID="7d5fefa4-feb5-4156-a68c-43e94e332068" containerID="9e17c57a703e5aa2b7e73c00323162f20bfb57c9e8e696106624f99dd8d795a2" exitCode=0 Dec 03 12:59:41 crc kubenswrapper[4986]: I1203 12:59:41.640791 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"7d5fefa4-feb5-4156-a68c-43e94e332068","Type":"ContainerDied","Data":"9e17c57a703e5aa2b7e73c00323162f20bfb57c9e8e696106624f99dd8d795a2"} Dec 03 12:59:42 crc kubenswrapper[4986]: I1203 12:59:42.876097 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 12:59:42 crc kubenswrapper[4986]: I1203 12:59:42.982519 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7d5fefa4-feb5-4156-a68c-43e94e332068-kube-api-access\") pod \"7d5fefa4-feb5-4156-a68c-43e94e332068\" (UID: \"7d5fefa4-feb5-4156-a68c-43e94e332068\") " Dec 03 12:59:42 crc kubenswrapper[4986]: I1203 12:59:42.982636 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7d5fefa4-feb5-4156-a68c-43e94e332068-kubelet-dir\") pod \"7d5fefa4-feb5-4156-a68c-43e94e332068\" (UID: \"7d5fefa4-feb5-4156-a68c-43e94e332068\") " Dec 03 12:59:42 crc kubenswrapper[4986]: I1203 12:59:42.982908 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d5fefa4-feb5-4156-a68c-43e94e332068-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7d5fefa4-feb5-4156-a68c-43e94e332068" (UID: "7d5fefa4-feb5-4156-a68c-43e94e332068"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 12:59:43 crc kubenswrapper[4986]: I1203 12:59:43.001205 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d5fefa4-feb5-4156-a68c-43e94e332068-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7d5fefa4-feb5-4156-a68c-43e94e332068" (UID: "7d5fefa4-feb5-4156-a68c-43e94e332068"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:59:43 crc kubenswrapper[4986]: I1203 12:59:43.084342 4986 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7d5fefa4-feb5-4156-a68c-43e94e332068-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 03 12:59:43 crc kubenswrapper[4986]: I1203 12:59:43.084389 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7d5fefa4-feb5-4156-a68c-43e94e332068-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 03 12:59:43 crc kubenswrapper[4986]: I1203 12:59:43.654144 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"7d5fefa4-feb5-4156-a68c-43e94e332068","Type":"ContainerDied","Data":"93b23f0958a994ba55049238f092492bd934bad61dee22f02c249d340ee2b44d"} Dec 03 12:59:43 crc kubenswrapper[4986]: I1203 12:59:43.654580 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93b23f0958a994ba55049238f092492bd934bad61dee22f02c249d340ee2b44d" Dec 03 12:59:43 crc kubenswrapper[4986]: I1203 12:59:43.654187 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 12:59:47 crc kubenswrapper[4986]: I1203 12:59:47.677045 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b8hxm" event={"ID":"7aceb259-65a5-45a6-acd1-8f5cac430ef7","Type":"ContainerStarted","Data":"1d721901368a693b8a429157c87785417e12ed63daa1ecb7ce1c7f2a2be8247e"} Dec 03 12:59:48 crc kubenswrapper[4986]: I1203 12:59:48.683017 4986 generic.go:334] "Generic (PLEG): container finished" podID="7aceb259-65a5-45a6-acd1-8f5cac430ef7" containerID="1d721901368a693b8a429157c87785417e12ed63daa1ecb7ce1c7f2a2be8247e" exitCode=0 Dec 03 12:59:48 crc kubenswrapper[4986]: I1203 12:59:48.683167 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b8hxm" event={"ID":"7aceb259-65a5-45a6-acd1-8f5cac430ef7","Type":"ContainerDied","Data":"1d721901368a693b8a429157c87785417e12ed63daa1ecb7ce1c7f2a2be8247e"} Dec 03 12:59:48 crc kubenswrapper[4986]: I1203 12:59:48.686214 4986 generic.go:334] "Generic (PLEG): container finished" podID="37f6dc70-6f91-4162-a225-239a999e4320" containerID="f67254c5fdf27e72c006f14b66f9df3ad8407bbe2e33e3af31bbaf860f8834cd" exitCode=0 Dec 03 12:59:48 crc kubenswrapper[4986]: I1203 12:59:48.686255 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bzs74" event={"ID":"37f6dc70-6f91-4162-a225-239a999e4320","Type":"ContainerDied","Data":"f67254c5fdf27e72c006f14b66f9df3ad8407bbe2e33e3af31bbaf860f8834cd"} Dec 03 12:59:50 crc kubenswrapper[4986]: I1203 12:59:50.721191 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bzs74" event={"ID":"37f6dc70-6f91-4162-a225-239a999e4320","Type":"ContainerStarted","Data":"0cad1c7c53e6c4dfc96a320d019c45fe9bb0847f033b9676e62422dc214c3241"} Dec 03 12:59:50 crc kubenswrapper[4986]: I1203 12:59:50.734084 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-b8hxm" event={"ID":"7aceb259-65a5-45a6-acd1-8f5cac430ef7","Type":"ContainerStarted","Data":"9ac0de0f8cd3e208de5a97070e4f0a781356be67cc3626ae845a1a813caf696d"} Dec 03 12:59:50 crc kubenswrapper[4986]: I1203 12:59:50.744751 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bzs74" podStartSLOduration=3.697436483 podStartE2EDuration="1m23.744728122s" podCreationTimestamp="2025-12-03 12:58:27 +0000 UTC" firstStartedPulling="2025-12-03 12:58:29.978010318 +0000 UTC m=+169.444441509" lastFinishedPulling="2025-12-03 12:59:50.025301947 +0000 UTC m=+249.491733148" observedRunningTime="2025-12-03 12:59:50.741800921 +0000 UTC m=+250.208232132" watchObservedRunningTime="2025-12-03 12:59:50.744728122 +0000 UTC m=+250.211159313" Dec 03 12:59:50 crc kubenswrapper[4986]: I1203 12:59:50.763380 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-b8hxm" podStartSLOduration=3.7827889199999998 podStartE2EDuration="1m23.763359861s" podCreationTimestamp="2025-12-03 12:58:27 +0000 UTC" firstStartedPulling="2025-12-03 12:58:29.955889842 +0000 UTC m=+169.422321023" lastFinishedPulling="2025-12-03 12:59:49.936460773 +0000 UTC m=+249.402891964" observedRunningTime="2025-12-03 12:59:50.762778935 +0000 UTC m=+250.229210126" watchObservedRunningTime="2025-12-03 12:59:50.763359861 +0000 UTC m=+250.229791062" Dec 03 12:59:51 crc kubenswrapper[4986]: I1203 12:59:51.747119 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wnfnb" event={"ID":"9e009819-9f9d-48db-a20b-dc29cef30887","Type":"ContainerStarted","Data":"506a6f1fb8cbe50cc27dc03ed23d24f30ffd375aff821ed095160022031fcf5a"} Dec 03 12:59:51 crc kubenswrapper[4986]: I1203 12:59:51.753462 4986 generic.go:334] "Generic (PLEG): container finished" podID="df90b001-135e-4c75-a67e-3084e905378a" 
containerID="cb585dc90f2e3e407b3ab982a0a37ca9c6eec9474eb9d123604fe82bb8408d32" exitCode=0 Dec 03 12:59:51 crc kubenswrapper[4986]: I1203 12:59:51.753516 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6mz6x" event={"ID":"df90b001-135e-4c75-a67e-3084e905378a","Type":"ContainerDied","Data":"cb585dc90f2e3e407b3ab982a0a37ca9c6eec9474eb9d123604fe82bb8408d32"} Dec 03 12:59:51 crc kubenswrapper[4986]: I1203 12:59:51.757539 4986 generic.go:334] "Generic (PLEG): container finished" podID="bb0a57df-0313-4698-9e73-373c97e2fb72" containerID="99e3e9120a5aea1c711199d17da5aeb9e9702a87de8aa263cc5a0f4b9562411f" exitCode=0 Dec 03 12:59:51 crc kubenswrapper[4986]: I1203 12:59:51.757572 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9w6ts" event={"ID":"bb0a57df-0313-4698-9e73-373c97e2fb72","Type":"ContainerDied","Data":"99e3e9120a5aea1c711199d17da5aeb9e9702a87de8aa263cc5a0f4b9562411f"} Dec 03 12:59:52 crc kubenswrapper[4986]: I1203 12:59:52.765855 4986 generic.go:334] "Generic (PLEG): container finished" podID="9e009819-9f9d-48db-a20b-dc29cef30887" containerID="506a6f1fb8cbe50cc27dc03ed23d24f30ffd375aff821ed095160022031fcf5a" exitCode=0 Dec 03 12:59:52 crc kubenswrapper[4986]: I1203 12:59:52.766145 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wnfnb" event={"ID":"9e009819-9f9d-48db-a20b-dc29cef30887","Type":"ContainerDied","Data":"506a6f1fb8cbe50cc27dc03ed23d24f30ffd375aff821ed095160022031fcf5a"} Dec 03 12:59:52 crc kubenswrapper[4986]: I1203 12:59:52.771067 4986 generic.go:334] "Generic (PLEG): container finished" podID="b2dbfdc4-7122-4b7f-bfdd-189396fb1c77" containerID="a2935678628be63eca90e6f2a6e2ec7cdb2351cb3f8ae96840d4f99a5a927124" exitCode=0 Dec 03 12:59:52 crc kubenswrapper[4986]: I1203 12:59:52.771120 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l6bpp" 
event={"ID":"b2dbfdc4-7122-4b7f-bfdd-189396fb1c77","Type":"ContainerDied","Data":"a2935678628be63eca90e6f2a6e2ec7cdb2351cb3f8ae96840d4f99a5a927124"} Dec 03 12:59:52 crc kubenswrapper[4986]: I1203 12:59:52.774084 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6mz6x" event={"ID":"df90b001-135e-4c75-a67e-3084e905378a","Type":"ContainerStarted","Data":"9104b235c2d4e4f23cfd73dd20edb07f83152dcc5d48a9d66f19fa4469694117"} Dec 03 12:59:52 crc kubenswrapper[4986]: I1203 12:59:52.820204 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6mz6x" podStartSLOduration=2.595168951 podStartE2EDuration="1m23.820184931s" podCreationTimestamp="2025-12-03 12:58:29 +0000 UTC" firstStartedPulling="2025-12-03 12:58:31.036744661 +0000 UTC m=+170.503175852" lastFinishedPulling="2025-12-03 12:59:52.261760641 +0000 UTC m=+251.728191832" observedRunningTime="2025-12-03 12:59:52.814189835 +0000 UTC m=+252.280621036" watchObservedRunningTime="2025-12-03 12:59:52.820184931 +0000 UTC m=+252.286616122" Dec 03 12:59:54 crc kubenswrapper[4986]: I1203 12:59:54.785638 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9w6ts" event={"ID":"bb0a57df-0313-4698-9e73-373c97e2fb72","Type":"ContainerStarted","Data":"afdcd8aa4a8e311b96feeb979a90355627776e61e9ced20a0c82f879b7546fcc"} Dec 03 12:59:57 crc kubenswrapper[4986]: I1203 12:59:57.208916 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2k592"] Dec 03 12:59:57 crc kubenswrapper[4986]: I1203 12:59:57.684144 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-b8hxm" Dec 03 12:59:57 crc kubenswrapper[4986]: I1203 12:59:57.684647 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-b8hxm" Dec 03 
12:59:57 crc kubenswrapper[4986]: I1203 12:59:57.823796 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9w6ts" podStartSLOduration=9.169573986 podStartE2EDuration="1m30.823781797s" podCreationTimestamp="2025-12-03 12:58:27 +0000 UTC" firstStartedPulling="2025-12-03 12:58:31.060926125 +0000 UTC m=+170.527357316" lastFinishedPulling="2025-12-03 12:59:52.715133936 +0000 UTC m=+252.181565127" observedRunningTime="2025-12-03 12:59:57.820429704 +0000 UTC m=+257.286860905" watchObservedRunningTime="2025-12-03 12:59:57.823781797 +0000 UTC m=+257.290212988" Dec 03 12:59:58 crc kubenswrapper[4986]: I1203 12:59:58.129925 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bzs74" Dec 03 12:59:58 crc kubenswrapper[4986]: I1203 12:59:58.129975 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bzs74" Dec 03 12:59:58 crc kubenswrapper[4986]: I1203 12:59:58.135914 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-b8hxm" Dec 03 12:59:58 crc kubenswrapper[4986]: I1203 12:59:58.169631 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bzs74" Dec 03 12:59:58 crc kubenswrapper[4986]: I1203 12:59:58.184092 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-b8hxm" Dec 03 12:59:58 crc kubenswrapper[4986]: I1203 12:59:58.432934 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9w6ts" Dec 03 12:59:58 crc kubenswrapper[4986]: I1203 12:59:58.432993 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9w6ts" Dec 03 12:59:58 crc kubenswrapper[4986]: 
I1203 12:59:58.473975 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9w6ts" Dec 03 12:59:58 crc kubenswrapper[4986]: I1203 12:59:58.840785 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bzs74" Dec 03 12:59:59 crc kubenswrapper[4986]: I1203 12:59:59.844314 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9w6ts" Dec 03 12:59:59 crc kubenswrapper[4986]: I1203 12:59:59.970012 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bzs74"] Dec 03 13:00:00 crc kubenswrapper[4986]: I1203 13:00:00.139975 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412780-c4mlz"] Dec 03 13:00:00 crc kubenswrapper[4986]: E1203 13:00:00.140249 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d5fefa4-feb5-4156-a68c-43e94e332068" containerName="pruner" Dec 03 13:00:00 crc kubenswrapper[4986]: I1203 13:00:00.140262 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5fefa4-feb5-4156-a68c-43e94e332068" containerName="pruner" Dec 03 13:00:00 crc kubenswrapper[4986]: I1203 13:00:00.140442 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d5fefa4-feb5-4156-a68c-43e94e332068" containerName="pruner" Dec 03 13:00:00 crc kubenswrapper[4986]: I1203 13:00:00.140903 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412780-c4mlz" Dec 03 13:00:00 crc kubenswrapper[4986]: I1203 13:00:00.142808 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 03 13:00:00 crc kubenswrapper[4986]: I1203 13:00:00.142870 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 03 13:00:00 crc kubenswrapper[4986]: I1203 13:00:00.149717 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412780-c4mlz"] Dec 03 13:00:00 crc kubenswrapper[4986]: I1203 13:00:00.163862 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6mz6x" Dec 03 13:00:00 crc kubenswrapper[4986]: I1203 13:00:00.163925 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6mz6x" Dec 03 13:00:00 crc kubenswrapper[4986]: I1203 13:00:00.200751 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6mz6x" Dec 03 13:00:00 crc kubenswrapper[4986]: I1203 13:00:00.209379 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f68bd6da-f9ec-44ee-9a80-1b5820c75d8e-secret-volume\") pod \"collect-profiles-29412780-c4mlz\" (UID: \"f68bd6da-f9ec-44ee-9a80-1b5820c75d8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412780-c4mlz" Dec 03 13:00:00 crc kubenswrapper[4986]: I1203 13:00:00.209466 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f68bd6da-f9ec-44ee-9a80-1b5820c75d8e-config-volume\") pod 
\"collect-profiles-29412780-c4mlz\" (UID: \"f68bd6da-f9ec-44ee-9a80-1b5820c75d8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412780-c4mlz" Dec 03 13:00:00 crc kubenswrapper[4986]: I1203 13:00:00.209493 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ts4l\" (UniqueName: \"kubernetes.io/projected/f68bd6da-f9ec-44ee-9a80-1b5820c75d8e-kube-api-access-2ts4l\") pod \"collect-profiles-29412780-c4mlz\" (UID: \"f68bd6da-f9ec-44ee-9a80-1b5820c75d8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412780-c4mlz" Dec 03 13:00:00 crc kubenswrapper[4986]: I1203 13:00:00.311068 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f68bd6da-f9ec-44ee-9a80-1b5820c75d8e-secret-volume\") pod \"collect-profiles-29412780-c4mlz\" (UID: \"f68bd6da-f9ec-44ee-9a80-1b5820c75d8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412780-c4mlz" Dec 03 13:00:00 crc kubenswrapper[4986]: I1203 13:00:00.311142 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f68bd6da-f9ec-44ee-9a80-1b5820c75d8e-config-volume\") pod \"collect-profiles-29412780-c4mlz\" (UID: \"f68bd6da-f9ec-44ee-9a80-1b5820c75d8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412780-c4mlz" Dec 03 13:00:00 crc kubenswrapper[4986]: I1203 13:00:00.311176 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ts4l\" (UniqueName: \"kubernetes.io/projected/f68bd6da-f9ec-44ee-9a80-1b5820c75d8e-kube-api-access-2ts4l\") pod \"collect-profiles-29412780-c4mlz\" (UID: \"f68bd6da-f9ec-44ee-9a80-1b5820c75d8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412780-c4mlz" Dec 03 13:00:00 crc kubenswrapper[4986]: I1203 13:00:00.312388 4986 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f68bd6da-f9ec-44ee-9a80-1b5820c75d8e-config-volume\") pod \"collect-profiles-29412780-c4mlz\" (UID: \"f68bd6da-f9ec-44ee-9a80-1b5820c75d8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412780-c4mlz" Dec 03 13:00:00 crc kubenswrapper[4986]: I1203 13:00:00.317555 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f68bd6da-f9ec-44ee-9a80-1b5820c75d8e-secret-volume\") pod \"collect-profiles-29412780-c4mlz\" (UID: \"f68bd6da-f9ec-44ee-9a80-1b5820c75d8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412780-c4mlz" Dec 03 13:00:00 crc kubenswrapper[4986]: I1203 13:00:00.328781 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ts4l\" (UniqueName: \"kubernetes.io/projected/f68bd6da-f9ec-44ee-9a80-1b5820c75d8e-kube-api-access-2ts4l\") pod \"collect-profiles-29412780-c4mlz\" (UID: \"f68bd6da-f9ec-44ee-9a80-1b5820c75d8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412780-c4mlz" Dec 03 13:00:00 crc kubenswrapper[4986]: I1203 13:00:00.463921 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412780-c4mlz"
Dec 03 13:00:00 crc kubenswrapper[4986]: I1203 13:00:00.568758 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9w6ts"]
Dec 03 13:00:00 crc kubenswrapper[4986]: I1203 13:00:00.811508 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bzs74" podUID="37f6dc70-6f91-4162-a225-239a999e4320" containerName="registry-server" containerID="cri-o://0cad1c7c53e6c4dfc96a320d019c45fe9bb0847f033b9676e62422dc214c3241" gracePeriod=2
Dec 03 13:00:00 crc kubenswrapper[4986]: I1203 13:00:00.851974 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6mz6x"
Dec 03 13:00:01 crc kubenswrapper[4986]: I1203 13:00:01.815558 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9w6ts" podUID="bb0a57df-0313-4698-9e73-373c97e2fb72" containerName="registry-server" containerID="cri-o://afdcd8aa4a8e311b96feeb979a90355627776e61e9ced20a0c82f879b7546fcc" gracePeriod=2
Dec 03 13:00:02 crc kubenswrapper[4986]: I1203 13:00:02.369031 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6mz6x"]
Dec 03 13:00:02 crc kubenswrapper[4986]: I1203 13:00:02.822514 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6mz6x" podUID="df90b001-135e-4c75-a67e-3084e905378a" containerName="registry-server" containerID="cri-o://9104b235c2d4e4f23cfd73dd20edb07f83152dcc5d48a9d66f19fa4469694117" gracePeriod=2
Dec 03 13:00:08 crc kubenswrapper[4986]: E1203 13:00:08.130593 4986 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0cad1c7c53e6c4dfc96a320d019c45fe9bb0847f033b9676e62422dc214c3241 is running failed: container process not found" containerID="0cad1c7c53e6c4dfc96a320d019c45fe9bb0847f033b9676e62422dc214c3241" cmd=["grpc_health_probe","-addr=:50051"]
Dec 03 13:00:08 crc kubenswrapper[4986]: E1203 13:00:08.131353 4986 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0cad1c7c53e6c4dfc96a320d019c45fe9bb0847f033b9676e62422dc214c3241 is running failed: container process not found" containerID="0cad1c7c53e6c4dfc96a320d019c45fe9bb0847f033b9676e62422dc214c3241" cmd=["grpc_health_probe","-addr=:50051"]
Dec 03 13:00:08 crc kubenswrapper[4986]: E1203 13:00:08.131814 4986 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0cad1c7c53e6c4dfc96a320d019c45fe9bb0847f033b9676e62422dc214c3241 is running failed: container process not found" containerID="0cad1c7c53e6c4dfc96a320d019c45fe9bb0847f033b9676e62422dc214c3241" cmd=["grpc_health_probe","-addr=:50051"]
Dec 03 13:00:08 crc kubenswrapper[4986]: E1203 13:00:08.131856 4986 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0cad1c7c53e6c4dfc96a320d019c45fe9bb0847f033b9676e62422dc214c3241 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-bzs74" podUID="37f6dc70-6f91-4162-a225-239a999e4320" containerName="registry-server"
Dec 03 13:00:08 crc kubenswrapper[4986]: E1203 13:00:08.434309 4986 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of afdcd8aa4a8e311b96feeb979a90355627776e61e9ced20a0c82f879b7546fcc is running failed: container process not found" containerID="afdcd8aa4a8e311b96feeb979a90355627776e61e9ced20a0c82f879b7546fcc" cmd=["grpc_health_probe","-addr=:50051"]
Dec 03 13:00:08 crc kubenswrapper[4986]: E1203 13:00:08.434874 4986 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of afdcd8aa4a8e311b96feeb979a90355627776e61e9ced20a0c82f879b7546fcc is running failed: container process not found" containerID="afdcd8aa4a8e311b96feeb979a90355627776e61e9ced20a0c82f879b7546fcc" cmd=["grpc_health_probe","-addr=:50051"]
Dec 03 13:00:08 crc kubenswrapper[4986]: E1203 13:00:08.435418 4986 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of afdcd8aa4a8e311b96feeb979a90355627776e61e9ced20a0c82f879b7546fcc is running failed: container process not found" containerID="afdcd8aa4a8e311b96feeb979a90355627776e61e9ced20a0c82f879b7546fcc" cmd=["grpc_health_probe","-addr=:50051"]
Dec 03 13:00:08 crc kubenswrapper[4986]: E1203 13:00:08.435526 4986 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of afdcd8aa4a8e311b96feeb979a90355627776e61e9ced20a0c82f879b7546fcc is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-9w6ts" podUID="bb0a57df-0313-4698-9e73-373c97e2fb72" containerName="registry-server"
Dec 03 13:00:09 crc kubenswrapper[4986]: I1203 13:00:09.869434 4986 generic.go:334] "Generic (PLEG): container finished" podID="37f6dc70-6f91-4162-a225-239a999e4320" containerID="0cad1c7c53e6c4dfc96a320d019c45fe9bb0847f033b9676e62422dc214c3241" exitCode=0
Dec 03 13:00:09 crc kubenswrapper[4986]: I1203 13:00:09.869526 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bzs74" event={"ID":"37f6dc70-6f91-4162-a225-239a999e4320","Type":"ContainerDied","Data":"0cad1c7c53e6c4dfc96a320d019c45fe9bb0847f033b9676e62422dc214c3241"}
Dec 03 13:00:10 crc kubenswrapper[4986]: E1203 13:00:10.164475 4986 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9104b235c2d4e4f23cfd73dd20edb07f83152dcc5d48a9d66f19fa4469694117 is running failed: container process not found" containerID="9104b235c2d4e4f23cfd73dd20edb07f83152dcc5d48a9d66f19fa4469694117" cmd=["grpc_health_probe","-addr=:50051"]
Dec 03 13:00:10 crc kubenswrapper[4986]: E1203 13:00:10.165400 4986 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9104b235c2d4e4f23cfd73dd20edb07f83152dcc5d48a9d66f19fa4469694117 is running failed: container process not found" containerID="9104b235c2d4e4f23cfd73dd20edb07f83152dcc5d48a9d66f19fa4469694117" cmd=["grpc_health_probe","-addr=:50051"]
Dec 03 13:00:10 crc kubenswrapper[4986]: E1203 13:00:10.166244 4986 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9104b235c2d4e4f23cfd73dd20edb07f83152dcc5d48a9d66f19fa4469694117 is running failed: container process not found" containerID="9104b235c2d4e4f23cfd73dd20edb07f83152dcc5d48a9d66f19fa4469694117" cmd=["grpc_health_probe","-addr=:50051"]
Dec 03 13:00:10 crc kubenswrapper[4986]: E1203 13:00:10.166380 4986 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9104b235c2d4e4f23cfd73dd20edb07f83152dcc5d48a9d66f19fa4469694117 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-6mz6x" podUID="df90b001-135e-4c75-a67e-3084e905378a" containerName="registry-server"
Dec 03 13:00:17 crc kubenswrapper[4986]: I1203 13:00:14.903724 4986 generic.go:334] "Generic (PLEG): container finished" podID="df90b001-135e-4c75-a67e-3084e905378a" containerID="9104b235c2d4e4f23cfd73dd20edb07f83152dcc5d48a9d66f19fa4469694117" exitCode=0
Dec 03 13:00:17 crc kubenswrapper[4986]: I1203 13:00:14.903805 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6mz6x" event={"ID":"df90b001-135e-4c75-a67e-3084e905378a","Type":"ContainerDied","Data":"9104b235c2d4e4f23cfd73dd20edb07f83152dcc5d48a9d66f19fa4469694117"}
Dec 03 13:00:17 crc kubenswrapper[4986]: I1203 13:00:14.907826 4986 generic.go:334] "Generic (PLEG): container finished" podID="bb0a57df-0313-4698-9e73-373c97e2fb72" containerID="afdcd8aa4a8e311b96feeb979a90355627776e61e9ced20a0c82f879b7546fcc" exitCode=0
Dec 03 13:00:17 crc kubenswrapper[4986]: I1203 13:00:14.907910 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9w6ts" event={"ID":"bb0a57df-0313-4698-9e73-373c97e2fb72","Type":"ContainerDied","Data":"afdcd8aa4a8e311b96feeb979a90355627776e61e9ced20a0c82f879b7546fcc"}
Dec 03 13:00:17 crc kubenswrapper[4986]: I1203 13:00:17.443782 4986 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Dec 03 13:00:17 crc kubenswrapper[4986]: I1203 13:00:17.445242 4986 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Dec 03 13:00:17 crc kubenswrapper[4986]: I1203 13:00:17.445449 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 03 13:00:17 crc kubenswrapper[4986]: I1203 13:00:17.445860 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95" gracePeriod=15
Dec 03 13:00:17 crc kubenswrapper[4986]: I1203 13:00:17.446063 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e" gracePeriod=15
Dec 03 13:00:17 crc kubenswrapper[4986]: I1203 13:00:17.446127 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1" gracePeriod=15
Dec 03 13:00:17 crc kubenswrapper[4986]: I1203 13:00:17.446123 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674" gracePeriod=15
Dec 03 13:00:17 crc kubenswrapper[4986]: I1203 13:00:17.446566 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a" gracePeriod=15
Dec 03 13:00:17 crc kubenswrapper[4986]: I1203 13:00:17.489891 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Dec 03 13:00:17 crc kubenswrapper[4986]: I1203 13:00:17.536379 4986 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Dec 03 13:00:17 crc kubenswrapper[4986]: E1203 13:00:17.536666 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Dec 03 13:00:17 crc kubenswrapper[4986]: I1203 13:00:17.536679 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Dec 03 13:00:17 crc kubenswrapper[4986]: E1203 13:00:17.536690 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Dec 03 13:00:17 crc kubenswrapper[4986]: I1203 13:00:17.536695 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Dec 03 13:00:17 crc kubenswrapper[4986]: E1203 13:00:17.537345 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Dec 03 13:00:17 crc kubenswrapper[4986]: I1203 13:00:17.537373 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Dec 03 13:00:17 crc kubenswrapper[4986]: E1203 13:00:17.537383 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Dec 03 13:00:17 crc kubenswrapper[4986]: I1203 13:00:17.537390 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Dec 03 13:00:17 crc kubenswrapper[4986]: E1203 13:00:17.537400 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Dec 03 13:00:17 crc kubenswrapper[4986]: I1203 13:00:17.537411 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Dec 03 13:00:17 crc kubenswrapper[4986]: E1203 13:00:17.537447 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Dec 03 13:00:17 crc kubenswrapper[4986]: I1203 13:00:17.537457 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Dec 03 13:00:17 crc kubenswrapper[4986]: E1203 13:00:17.537477 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Dec 03 13:00:17 crc kubenswrapper[4986]: I1203 13:00:17.537486 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Dec 03 13:00:17 crc kubenswrapper[4986]: I1203 13:00:17.537740 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Dec 03 13:00:17 crc kubenswrapper[4986]: I1203 13:00:17.537754 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Dec 03 13:00:17 crc kubenswrapper[4986]: I1203 13:00:17.537765 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Dec 03 13:00:17 crc kubenswrapper[4986]: I1203 13:00:17.537793 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Dec 03 13:00:17 crc kubenswrapper[4986]: I1203 13:00:17.537804 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Dec 03 13:00:17 crc kubenswrapper[4986]: I1203 13:00:17.538367 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Dec 03 13:00:17 crc kubenswrapper[4986]: I1203 13:00:17.542686 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 03 13:00:17 crc kubenswrapper[4986]: I1203 13:00:17.542762 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 03 13:00:17 crc kubenswrapper[4986]: I1203 13:00:17.542810 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 03 13:00:17 crc kubenswrapper[4986]: I1203 13:00:17.542858 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 03 13:00:17 crc kubenswrapper[4986]: I1203 13:00:17.542882 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 03 13:00:17 crc kubenswrapper[4986]: I1203 13:00:17.644336 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 03 13:00:17 crc kubenswrapper[4986]: I1203 13:00:17.644404 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 13:00:17 crc kubenswrapper[4986]: I1203 13:00:17.644409 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 03 13:00:17 crc kubenswrapper[4986]: I1203 13:00:17.644424 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 13:00:17 crc kubenswrapper[4986]: I1203 13:00:17.644494 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 03 13:00:17 crc kubenswrapper[4986]: I1203 13:00:17.644530 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 13:00:17 crc kubenswrapper[4986]: I1203 13:00:17.644566 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 03 13:00:17 crc kubenswrapper[4986]: I1203 13:00:17.644606 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 03 13:00:17 crc kubenswrapper[4986]: I1203 13:00:17.644606 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 03 13:00:17 crc kubenswrapper[4986]: I1203 13:00:17.644641 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 03 13:00:17 crc kubenswrapper[4986]: I1203 13:00:17.644670 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 03 13:00:17 crc kubenswrapper[4986]: I1203 13:00:17.644757 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 03 13:00:17 crc kubenswrapper[4986]: I1203 13:00:17.644823 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:17.745685 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:17.745723 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:17.745773 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:17.745838 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:17.745863 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:17.745838 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:17.782259 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:17.928850 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:17.930001 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:17.930623 4986 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674" exitCode=2
Dec 03 13:00:20 crc kubenswrapper[4986]: E1203 13:00:18.131159 4986 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0cad1c7c53e6c4dfc96a320d019c45fe9bb0847f033b9676e62422dc214c3241 is running failed: container process not found" containerID="0cad1c7c53e6c4dfc96a320d019c45fe9bb0847f033b9676e62422dc214c3241" cmd=["grpc_health_probe","-addr=:50051"]
Dec 03 13:00:20 crc kubenswrapper[4986]: E1203 13:00:18.131844 4986 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0cad1c7c53e6c4dfc96a320d019c45fe9bb0847f033b9676e62422dc214c3241 is running failed: container process not found" containerID="0cad1c7c53e6c4dfc96a320d019c45fe9bb0847f033b9676e62422dc214c3241" cmd=["grpc_health_probe","-addr=:50051"]
Dec 03 13:00:20 crc kubenswrapper[4986]: E1203 13:00:18.132444 4986 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0cad1c7c53e6c4dfc96a320d019c45fe9bb0847f033b9676e62422dc214c3241 is running failed: container process not found" containerID="0cad1c7c53e6c4dfc96a320d019c45fe9bb0847f033b9676e62422dc214c3241" cmd=["grpc_health_probe","-addr=:50051"]
Dec 03 13:00:20 crc kubenswrapper[4986]: E1203 13:00:18.132518 4986 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0cad1c7c53e6c4dfc96a320d019c45fe9bb0847f033b9676e62422dc214c3241 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-bzs74" podUID="37f6dc70-6f91-4162-a225-239a999e4320" containerName="registry-server"
Dec 03 13:00:20 crc kubenswrapper[4986]: E1203 13:00:18.133596 4986 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events/community-operators-bzs74.187db60fd619f4de\": dial tcp 38.129.56.112:6443: connect: connection refused" event="&Event{ObjectMeta:{community-operators-bzs74.187db60fd619f4de openshift-marketplace 29285 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-bzs74,UID:37f6dc70-6f91-4162-a225-239a999e4320,APIVersion:v1,ResourceVersion:28181,FieldPath:spec.containers{registry-server},},Reason:Unhealthy,Message:Readiness probe errored: rpc error: code = NotFound desc = container is not created or running: checking if PID of 0cad1c7c53e6c4dfc96a320d019c45fe9bb0847f033b9676e62422dc214c3241 is running failed: container process not found,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-03 13:00:08 +0000 UTC,LastTimestamp:2025-12-03 13:00:18.132559323 +0000 UTC m=+277.598990554,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 03 13:00:20 crc kubenswrapper[4986]: E1203 13:00:18.433737 4986 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of afdcd8aa4a8e311b96feeb979a90355627776e61e9ced20a0c82f879b7546fcc is running failed: container process not found" containerID="afdcd8aa4a8e311b96feeb979a90355627776e61e9ced20a0c82f879b7546fcc" cmd=["grpc_health_probe","-addr=:50051"]
Dec 03 13:00:20 crc kubenswrapper[4986]: E1203 13:00:18.434401 4986 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of afdcd8aa4a8e311b96feeb979a90355627776e61e9ced20a0c82f879b7546fcc is running failed: container process not found" containerID="afdcd8aa4a8e311b96feeb979a90355627776e61e9ced20a0c82f879b7546fcc" cmd=["grpc_health_probe","-addr=:50051"]
Dec 03 13:00:20 crc kubenswrapper[4986]: E1203 13:00:18.434635 4986 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of afdcd8aa4a8e311b96feeb979a90355627776e61e9ced20a0c82f879b7546fcc is running failed: container process not found" containerID="afdcd8aa4a8e311b96feeb979a90355627776e61e9ced20a0c82f879b7546fcc" cmd=["grpc_health_probe","-addr=:50051"]
Dec 03 13:00:20 crc kubenswrapper[4986]: E1203 13:00:18.434707 4986 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of afdcd8aa4a8e311b96feeb979a90355627776e61e9ced20a0c82f879b7546fcc is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-9w6ts" podUID="bb0a57df-0313-4698-9e73-373c97e2fb72" containerName="registry-server"
Dec 03 13:00:20 crc kubenswrapper[4986]: E1203 13:00:20.021330 4986 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.129.56.112:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" volumeName="registry-storage"
Dec 03 13:00:20 crc kubenswrapper[4986]: E1203 13:00:20.164244 4986 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9104b235c2d4e4f23cfd73dd20edb07f83152dcc5d48a9d66f19fa4469694117 is running failed: container process not found" containerID="9104b235c2d4e4f23cfd73dd20edb07f83152dcc5d48a9d66f19fa4469694117" cmd=["grpc_health_probe","-addr=:50051"]
Dec 03 13:00:20 crc kubenswrapper[4986]: E1203 13:00:20.164717 4986 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9104b235c2d4e4f23cfd73dd20edb07f83152dcc5d48a9d66f19fa4469694117 is running failed: container process not found" containerID="9104b235c2d4e4f23cfd73dd20edb07f83152dcc5d48a9d66f19fa4469694117" cmd=["grpc_health_probe","-addr=:50051"]
Dec 03 13:00:20 crc kubenswrapper[4986]: E1203 13:00:20.165268 4986 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9104b235c2d4e4f23cfd73dd20edb07f83152dcc5d48a9d66f19fa4469694117 is running failed: container process not found" containerID="9104b235c2d4e4f23cfd73dd20edb07f83152dcc5d48a9d66f19fa4469694117" cmd=["grpc_health_probe","-addr=:50051"]
Dec 03 13:00:20 crc kubenswrapper[4986]: E1203 13:00:20.165322 4986 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9104b235c2d4e4f23cfd73dd20edb07f83152dcc5d48a9d66f19fa4469694117 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-6mz6x" podUID="df90b001-135e-4c75-a67e-3084e905378a" containerName="registry-server"
Dec 03 13:00:20 crc kubenswrapper[4986]: W1203 13:00:20.516923 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-8d14be873d5b741ac0512e51405ba93b9a647775cbdb408e2f0505f779122176 WatchSource:0}: Error finding container 8d14be873d5b741ac0512e51405ba93b9a647775cbdb408e2f0505f779122176: Status 404 returned error can't find the container with id 8d14be873d5b741ac0512e51405ba93b9a647775cbdb408e2f0505f779122176
Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.558857 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.563836 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.564911 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.566548 4986 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.112:6443: connect: connection refused"
Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.566829 4986 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.112:6443: connect: connection refused"
Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.660108 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bzs74"
Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.666858 4986 status_manager.go:851] "Failed to get status for pod" podUID="37f6dc70-6f91-4162-a225-239a999e4320" pod="openshift-marketplace/community-operators-bzs74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bzs74\": dial tcp 38.129.56.112:6443: connect: connection refused"
Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.667400 4986 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.112:6443: connect: connection refused"
Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.671153 4986 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.112:6443: connect: connection refused"
Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.684644 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.684726 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.684830 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.685141 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.685174 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.685198 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.705360 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9w6ts" Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.706095 4986 status_manager.go:851] "Failed to get status for pod" podUID="37f6dc70-6f91-4162-a225-239a999e4320" pod="openshift-marketplace/community-operators-bzs74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bzs74\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.706556 4986 status_manager.go:851] "Failed to get status for pod" podUID="bb0a57df-0313-4698-9e73-373c97e2fb72" pod="openshift-marketplace/certified-operators-9w6ts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9w6ts\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.707721 4986 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.707981 4986 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.730848 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6mz6x" Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.733513 4986 status_manager.go:851] "Failed to get status for pod" podUID="37f6dc70-6f91-4162-a225-239a999e4320" pod="openshift-marketplace/community-operators-bzs74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bzs74\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.735099 4986 status_manager.go:851] "Failed to get status for pod" podUID="bb0a57df-0313-4698-9e73-373c97e2fb72" pod="openshift-marketplace/certified-operators-9w6ts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9w6ts\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.736023 4986 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.736429 4986 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.736909 4986 status_manager.go:851] "Failed to get status for pod" podUID="df90b001-135e-4c75-a67e-3084e905378a" pod="openshift-marketplace/redhat-marketplace-6mz6x" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6mz6x\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.785751 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb0a57df-0313-4698-9e73-373c97e2fb72-utilities\") pod \"bb0a57df-0313-4698-9e73-373c97e2fb72\" (UID: \"bb0a57df-0313-4698-9e73-373c97e2fb72\") " Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.785802 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37f6dc70-6f91-4162-a225-239a999e4320-catalog-content\") pod \"37f6dc70-6f91-4162-a225-239a999e4320\" (UID: \"37f6dc70-6f91-4162-a225-239a999e4320\") " Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.785820 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98pc4\" (UniqueName: \"kubernetes.io/projected/37f6dc70-6f91-4162-a225-239a999e4320-kube-api-access-98pc4\") pod \"37f6dc70-6f91-4162-a225-239a999e4320\" (UID: \"37f6dc70-6f91-4162-a225-239a999e4320\") " Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.785903 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb0a57df-0313-4698-9e73-373c97e2fb72-catalog-content\") pod \"bb0a57df-0313-4698-9e73-373c97e2fb72\" (UID: \"bb0a57df-0313-4698-9e73-373c97e2fb72\") " Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.785957 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37f6dc70-6f91-4162-a225-239a999e4320-utilities\") pod \"37f6dc70-6f91-4162-a225-239a999e4320\" (UID: \"37f6dc70-6f91-4162-a225-239a999e4320\") " Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.786008 
4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzx5s\" (UniqueName: \"kubernetes.io/projected/bb0a57df-0313-4698-9e73-373c97e2fb72-kube-api-access-wzx5s\") pod \"bb0a57df-0313-4698-9e73-373c97e2fb72\" (UID: \"bb0a57df-0313-4698-9e73-373c97e2fb72\") " Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.786267 4986 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.786292 4986 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.786301 4986 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.788152 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37f6dc70-6f91-4162-a225-239a999e4320-utilities" (OuterVolumeSpecName: "utilities") pod "37f6dc70-6f91-4162-a225-239a999e4320" (UID: "37f6dc70-6f91-4162-a225-239a999e4320"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.788253 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb0a57df-0313-4698-9e73-373c97e2fb72-utilities" (OuterVolumeSpecName: "utilities") pod "bb0a57df-0313-4698-9e73-373c97e2fb72" (UID: "bb0a57df-0313-4698-9e73-373c97e2fb72"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.791467 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb0a57df-0313-4698-9e73-373c97e2fb72-kube-api-access-wzx5s" (OuterVolumeSpecName: "kube-api-access-wzx5s") pod "bb0a57df-0313-4698-9e73-373c97e2fb72" (UID: "bb0a57df-0313-4698-9e73-373c97e2fb72"). InnerVolumeSpecName "kube-api-access-wzx5s". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.792252 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37f6dc70-6f91-4162-a225-239a999e4320-kube-api-access-98pc4" (OuterVolumeSpecName: "kube-api-access-98pc4") pod "37f6dc70-6f91-4162-a225-239a999e4320" (UID: "37f6dc70-6f91-4162-a225-239a999e4320"). InnerVolumeSpecName "kube-api-access-98pc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.865964 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37f6dc70-6f91-4162-a225-239a999e4320-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "37f6dc70-6f91-4162-a225-239a999e4320" (UID: "37f6dc70-6f91-4162-a225-239a999e4320"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.873011 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb0a57df-0313-4698-9e73-373c97e2fb72-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bb0a57df-0313-4698-9e73-373c97e2fb72" (UID: "bb0a57df-0313-4698-9e73-373c97e2fb72"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.888716 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrrvj\" (UniqueName: \"kubernetes.io/projected/df90b001-135e-4c75-a67e-3084e905378a-kube-api-access-hrrvj\") pod \"df90b001-135e-4c75-a67e-3084e905378a\" (UID: \"df90b001-135e-4c75-a67e-3084e905378a\") " Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.889084 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df90b001-135e-4c75-a67e-3084e905378a-utilities\") pod \"df90b001-135e-4c75-a67e-3084e905378a\" (UID: \"df90b001-135e-4c75-a67e-3084e905378a\") " Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.889239 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df90b001-135e-4c75-a67e-3084e905378a-catalog-content\") pod \"df90b001-135e-4c75-a67e-3084e905378a\" (UID: \"df90b001-135e-4c75-a67e-3084e905378a\") " Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.889593 4986 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37f6dc70-6f91-4162-a225-239a999e4320-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.889677 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzx5s\" (UniqueName: \"kubernetes.io/projected/bb0a57df-0313-4698-9e73-373c97e2fb72-kube-api-access-wzx5s\") on node \"crc\" DevicePath \"\"" Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.889762 4986 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb0a57df-0313-4698-9e73-373c97e2fb72-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.889852 4986 
reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37f6dc70-6f91-4162-a225-239a999e4320-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.889919 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98pc4\" (UniqueName: \"kubernetes.io/projected/37f6dc70-6f91-4162-a225-239a999e4320-kube-api-access-98pc4\") on node \"crc\" DevicePath \"\"" Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.889981 4986 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb0a57df-0313-4698-9e73-373c97e2fb72-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.892180 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df90b001-135e-4c75-a67e-3084e905378a-utilities" (OuterVolumeSpecName: "utilities") pod "df90b001-135e-4c75-a67e-3084e905378a" (UID: "df90b001-135e-4c75-a67e-3084e905378a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.899048 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df90b001-135e-4c75-a67e-3084e905378a-kube-api-access-hrrvj" (OuterVolumeSpecName: "kube-api-access-hrrvj") pod "df90b001-135e-4c75-a67e-3084e905378a" (UID: "df90b001-135e-4c75-a67e-3084e905378a"). InnerVolumeSpecName "kube-api-access-hrrvj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.910962 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df90b001-135e-4c75-a67e-3084e905378a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "df90b001-135e-4c75-a67e-3084e905378a" (UID: "df90b001-135e-4c75-a67e-3084e905378a"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.947427 4986 status_manager.go:851] "Failed to get status for pod" podUID="37f6dc70-6f91-4162-a225-239a999e4320" pod="openshift-marketplace/community-operators-bzs74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bzs74\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.947640 4986 status_manager.go:851] "Failed to get status for pod" podUID="bb0a57df-0313-4698-9e73-373c97e2fb72" pod="openshift-marketplace/certified-operators-9w6ts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9w6ts\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.947805 4986 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.952722 4986 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.953792 4986 status_manager.go:851] "Failed to get status for pod" podUID="df90b001-135e-4c75-a67e-3084e905378a" pod="openshift-marketplace/redhat-marketplace-6mz6x" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6mz6x\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.965353 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.989857 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6mz6x" event={"ID":"df90b001-135e-4c75-a67e-3084e905378a","Type":"ContainerDied","Data":"afc2dbb4e2d0914a44cc0f0d4c659d4190bffb3c318aa29447b87d3fce9f32c3"} Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.989929 4986 scope.go:117] "RemoveContainer" containerID="9104b235c2d4e4f23cfd73dd20edb07f83152dcc5d48a9d66f19fa4469694117" Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.990083 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6mz6x" Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.990738 4986 status_manager.go:851] "Failed to get status for pod" podUID="37f6dc70-6f91-4162-a225-239a999e4320" pod="openshift-marketplace/community-operators-bzs74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bzs74\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.991000 4986 status_manager.go:851] "Failed to get status for pod" podUID="bb0a57df-0313-4698-9e73-373c97e2fb72" pod="openshift-marketplace/certified-operators-9w6ts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9w6ts\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.991210 4986 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.991654 4986 status_manager.go:851] "Failed to get status for pod" podUID="df90b001-135e-4c75-a67e-3084e905378a" pod="openshift-marketplace/redhat-marketplace-6mz6x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6mz6x\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.991956 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrrvj\" (UniqueName: \"kubernetes.io/projected/df90b001-135e-4c75-a67e-3084e905378a-kube-api-access-hrrvj\") on node \"crc\" DevicePath \"\"" Dec 03 13:00:20 crc 
kubenswrapper[4986]: I1203 13:00:20.992078 4986 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df90b001-135e-4c75-a67e-3084e905378a-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.992151 4986 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df90b001-135e-4c75-a67e-3084e905378a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.997215 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wnfnb" event={"ID":"9e009819-9f9d-48db-a20b-dc29cef30887","Type":"ContainerStarted","Data":"d2af21c038905513a752bed20b95b4d5e9d30cab21f74aa4cd9212130f15534f"} Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.997903 4986 status_manager.go:851] "Failed to get status for pod" podUID="37f6dc70-6f91-4162-a225-239a999e4320" pod="openshift-marketplace/community-operators-bzs74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bzs74\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.998255 4986 status_manager.go:851] "Failed to get status for pod" podUID="bb0a57df-0313-4698-9e73-373c97e2fb72" pod="openshift-marketplace/certified-operators-9w6ts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9w6ts\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.998475 4986 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 
38.129.56.112:6443: connect: connection refused" Dec 03 13:00:20 crc kubenswrapper[4986]: I1203 13:00:20.998816 4986 status_manager.go:851] "Failed to get status for pod" podUID="df90b001-135e-4c75-a67e-3084e905378a" pod="openshift-marketplace/redhat-marketplace-6mz6x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6mz6x\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.000352 4986 status_manager.go:851] "Failed to get status for pod" podUID="9e009819-9f9d-48db-a20b-dc29cef30887" pod="openshift-marketplace/redhat-operators-wnfnb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wnfnb\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.011634 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vzt5m" event={"ID":"d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f","Type":"ContainerStarted","Data":"894818a368572ea3c57b48b8abcaef755f3b87b224801c8733c1833cf52cff4d"} Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.012322 4986 status_manager.go:851] "Failed to get status for pod" podUID="d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f" pod="openshift-marketplace/certified-operators-vzt5m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vzt5m\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.013221 4986 status_manager.go:851] "Failed to get status for pod" podUID="37f6dc70-6f91-4162-a225-239a999e4320" pod="openshift-marketplace/community-operators-bzs74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bzs74\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: 
I1203 13:00:21.013709 4986 status_manager.go:851] "Failed to get status for pod" podUID="bb0a57df-0313-4698-9e73-373c97e2fb72" pod="openshift-marketplace/certified-operators-9w6ts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9w6ts\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.015964 4986 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.017244 4986 status_manager.go:851] "Failed to get status for pod" podUID="df90b001-135e-4c75-a67e-3084e905378a" pod="openshift-marketplace/redhat-marketplace-6mz6x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6mz6x\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.017512 4986 status_manager.go:851] "Failed to get status for pod" podUID="9e009819-9f9d-48db-a20b-dc29cef30887" pod="openshift-marketplace/redhat-operators-wnfnb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wnfnb\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.020645 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l6bpp" event={"ID":"b2dbfdc4-7122-4b7f-bfdd-189396fb1c77","Type":"ContainerStarted","Data":"a984f95e59e23e7eb127a6998aed6915ec511ae26702fb7a4c4e6b7f312754d2"} Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.023049 4986 status_manager.go:851] "Failed to get status 
for pod" podUID="b2dbfdc4-7122-4b7f-bfdd-189396fb1c77" pod="openshift-marketplace/redhat-marketplace-l6bpp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-l6bpp\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.023346 4986 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.023705 4986 status_manager.go:851] "Failed to get status for pod" podUID="df90b001-135e-4c75-a67e-3084e905378a" pod="openshift-marketplace/redhat-marketplace-6mz6x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6mz6x\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.023950 4986 status_manager.go:851] "Failed to get status for pod" podUID="9e009819-9f9d-48db-a20b-dc29cef30887" pod="openshift-marketplace/redhat-operators-wnfnb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wnfnb\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.024413 4986 status_manager.go:851] "Failed to get status for pod" podUID="d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f" pod="openshift-marketplace/certified-operators-vzt5m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vzt5m\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.024627 4986 status_manager.go:851] "Failed to get status 
for pod" podUID="37f6dc70-6f91-4162-a225-239a999e4320" pod="openshift-marketplace/community-operators-bzs74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bzs74\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.025034 4986 status_manager.go:851] "Failed to get status for pod" podUID="bb0a57df-0313-4698-9e73-373c97e2fb72" pod="openshift-marketplace/certified-operators-9w6ts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9w6ts\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.030667 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9w6ts" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.030991 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9w6ts" event={"ID":"bb0a57df-0313-4698-9e73-373c97e2fb72","Type":"ContainerDied","Data":"9aab0b119c0c4bb6641e872a4b2a9bf44406efad7a294c2ad1c9ae1913a3ff6a"} Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.032014 4986 status_manager.go:851] "Failed to get status for pod" podUID="9e009819-9f9d-48db-a20b-dc29cef30887" pod="openshift-marketplace/redhat-operators-wnfnb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wnfnb\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.032588 4986 status_manager.go:851] "Failed to get status for pod" podUID="d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f" pod="openshift-marketplace/certified-operators-vzt5m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vzt5m\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 
13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.032893 4986 status_manager.go:851] "Failed to get status for pod" podUID="37f6dc70-6f91-4162-a225-239a999e4320" pod="openshift-marketplace/community-operators-bzs74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bzs74\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.033413 4986 status_manager.go:851] "Failed to get status for pod" podUID="bb0a57df-0313-4698-9e73-373c97e2fb72" pod="openshift-marketplace/certified-operators-9w6ts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9w6ts\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.033942 4986 status_manager.go:851] "Failed to get status for pod" podUID="b2dbfdc4-7122-4b7f-bfdd-189396fb1c77" pod="openshift-marketplace/redhat-marketplace-l6bpp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-l6bpp\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.034409 4986 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.034523 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"fb60d5f6-dca0-4f66-91c4-00dd32d26bcf","Type":"ContainerDied","Data":"bdc6b8c80d0c7ef7a53f00d17927a687fdbc7b4793eec0a6236998a1703c7f42"} Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.034502 4986 
generic.go:334] "Generic (PLEG): container finished" podID="fb60d5f6-dca0-4f66-91c4-00dd32d26bcf" containerID="bdc6b8c80d0c7ef7a53f00d17927a687fdbc7b4793eec0a6236998a1703c7f42" exitCode=0 Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.035047 4986 status_manager.go:851] "Failed to get status for pod" podUID="df90b001-135e-4c75-a67e-3084e905378a" pod="openshift-marketplace/redhat-marketplace-6mz6x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6mz6x\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.035594 4986 status_manager.go:851] "Failed to get status for pod" podUID="d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f" pod="openshift-marketplace/certified-operators-vzt5m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vzt5m\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.036198 4986 status_manager.go:851] "Failed to get status for pod" podUID="37f6dc70-6f91-4162-a225-239a999e4320" pod="openshift-marketplace/community-operators-bzs74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bzs74\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.036485 4986 status_manager.go:851] "Failed to get status for pod" podUID="bb0a57df-0313-4698-9e73-373c97e2fb72" pod="openshift-marketplace/certified-operators-9w6ts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9w6ts\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.036822 4986 status_manager.go:851] "Failed to get status for pod" podUID="b2dbfdc4-7122-4b7f-bfdd-189396fb1c77" pod="openshift-marketplace/redhat-marketplace-l6bpp" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-l6bpp\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.037021 4986 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.037314 4986 status_manager.go:851] "Failed to get status for pod" podUID="df90b001-135e-4c75-a67e-3084e905378a" pod="openshift-marketplace/redhat-marketplace-6mz6x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6mz6x\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.037544 4986 status_manager.go:851] "Failed to get status for pod" podUID="fb60d5f6-dca0-4f66-91c4-00dd32d26bcf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.037691 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bzs74" event={"ID":"37f6dc70-6f91-4162-a225-239a999e4320","Type":"ContainerDied","Data":"96bdf089baa05ec4b934cd86bc1687102d13f34c9fbd12b644b947d3fac047ce"} Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.037755 4986 status_manager.go:851] "Failed to get status for pod" podUID="9e009819-9f9d-48db-a20b-dc29cef30887" pod="openshift-marketplace/redhat-operators-wnfnb" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wnfnb\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.037766 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bzs74" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.038638 4986 status_manager.go:851] "Failed to get status for pod" podUID="d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f" pod="openshift-marketplace/certified-operators-vzt5m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vzt5m\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.038830 4986 status_manager.go:851] "Failed to get status for pod" podUID="37f6dc70-6f91-4162-a225-239a999e4320" pod="openshift-marketplace/community-operators-bzs74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bzs74\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.039009 4986 status_manager.go:851] "Failed to get status for pod" podUID="bb0a57df-0313-4698-9e73-373c97e2fb72" pod="openshift-marketplace/certified-operators-9w6ts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9w6ts\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.039271 4986 status_manager.go:851] "Failed to get status for pod" podUID="b2dbfdc4-7122-4b7f-bfdd-189396fb1c77" pod="openshift-marketplace/redhat-marketplace-l6bpp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-l6bpp\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 
13:00:21.039829 4986 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.040715 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"8aafe9b6b4a35d06b8c404e5fbc5b2c15dd3fddca0abd812941dcdbf3f1fe62a"} Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.040753 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"8d14be873d5b741ac0512e51405ba93b9a647775cbdb408e2f0505f779122176"} Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.041563 4986 status_manager.go:851] "Failed to get status for pod" podUID="df90b001-135e-4c75-a67e-3084e905378a" pod="openshift-marketplace/redhat-marketplace-6mz6x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6mz6x\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.042223 4986 status_manager.go:851] "Failed to get status for pod" podUID="fb60d5f6-dca0-4f66-91c4-00dd32d26bcf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.042733 4986 status_manager.go:851] "Failed to get status for pod" podUID="9e009819-9f9d-48db-a20b-dc29cef30887" 
pod="openshift-marketplace/redhat-operators-wnfnb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wnfnb\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.043412 4986 status_manager.go:851] "Failed to get status for pod" podUID="d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f" pod="openshift-marketplace/certified-operators-vzt5m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vzt5m\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.043645 4986 status_manager.go:851] "Failed to get status for pod" podUID="37f6dc70-6f91-4162-a225-239a999e4320" pod="openshift-marketplace/community-operators-bzs74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bzs74\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.043838 4986 status_manager.go:851] "Failed to get status for pod" podUID="bb0a57df-0313-4698-9e73-373c97e2fb72" pod="openshift-marketplace/certified-operators-9w6ts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9w6ts\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.044036 4986 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.044251 4986 status_manager.go:851] "Failed to get status for pod" 
podUID="b2dbfdc4-7122-4b7f-bfdd-189396fb1c77" pod="openshift-marketplace/redhat-marketplace-l6bpp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-l6bpp\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.044427 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.044492 4986 status_manager.go:851] "Failed to get status for pod" podUID="df90b001-135e-4c75-a67e-3084e905378a" pod="openshift-marketplace/redhat-marketplace-6mz6x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6mz6x\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.044793 4986 status_manager.go:851] "Failed to get status for pod" podUID="fb60d5f6-dca0-4f66-91c4-00dd32d26bcf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.044988 4986 status_manager.go:851] "Failed to get status for pod" podUID="9e009819-9f9d-48db-a20b-dc29cef30887" pod="openshift-marketplace/redhat-operators-wnfnb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wnfnb\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.047701 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.049341 4986 
generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e" exitCode=0 Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.049519 4986 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1" exitCode=0 Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.049603 4986 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a" exitCode=0 Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.049678 4986 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95" exitCode=0 Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.049425 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.050584 4986 status_manager.go:851] "Failed to get status for pod" podUID="d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f" pod="openshift-marketplace/certified-operators-vzt5m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vzt5m\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.050950 4986 status_manager.go:851] "Failed to get status for pod" podUID="37f6dc70-6f91-4162-a225-239a999e4320" pod="openshift-marketplace/community-operators-bzs74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bzs74\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.051140 4986 status_manager.go:851] "Failed to get status for pod" podUID="bb0a57df-0313-4698-9e73-373c97e2fb72" pod="openshift-marketplace/certified-operators-9w6ts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9w6ts\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.051373 4986 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.051565 4986 status_manager.go:851] "Failed to get status for pod" podUID="b2dbfdc4-7122-4b7f-bfdd-189396fb1c77" pod="openshift-marketplace/redhat-marketplace-l6bpp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-l6bpp\": dial 
tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.051813 4986 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.052076 4986 status_manager.go:851] "Failed to get status for pod" podUID="df90b001-135e-4c75-a67e-3084e905378a" pod="openshift-marketplace/redhat-marketplace-6mz6x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6mz6x\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.052351 4986 status_manager.go:851] "Failed to get status for pod" podUID="fb60d5f6-dca0-4f66-91c4-00dd32d26bcf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.052516 4986 status_manager.go:851] "Failed to get status for pod" podUID="9e009819-9f9d-48db-a20b-dc29cef30887" pod="openshift-marketplace/redhat-operators-wnfnb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wnfnb\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.054062 4986 scope.go:117] "RemoveContainer" containerID="cb585dc90f2e3e407b3ab982a0a37ca9c6eec9474eb9d123604fe82bb8408d32" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.055022 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dgwd8" 
event={"ID":"49d6e247-25ce-45e1-b2fe-2e3ec70cf966","Type":"ContainerStarted","Data":"e4f8b2a4b2924c3654d9f4f01b9a9164e507313f9ade81c18fc0992782e3fae0"} Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.055819 4986 status_manager.go:851] "Failed to get status for pod" podUID="df90b001-135e-4c75-a67e-3084e905378a" pod="openshift-marketplace/redhat-marketplace-6mz6x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6mz6x\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.056098 4986 status_manager.go:851] "Failed to get status for pod" podUID="fb60d5f6-dca0-4f66-91c4-00dd32d26bcf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.056591 4986 status_manager.go:851] "Failed to get status for pod" podUID="9e009819-9f9d-48db-a20b-dc29cef30887" pod="openshift-marketplace/redhat-operators-wnfnb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wnfnb\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.057225 4986 status_manager.go:851] "Failed to get status for pod" podUID="d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f" pod="openshift-marketplace/certified-operators-vzt5m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vzt5m\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.057596 4986 status_manager.go:851] "Failed to get status for pod" podUID="37f6dc70-6f91-4162-a225-239a999e4320" pod="openshift-marketplace/community-operators-bzs74" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bzs74\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.057963 4986 status_manager.go:851] "Failed to get status for pod" podUID="bb0a57df-0313-4698-9e73-373c97e2fb72" pod="openshift-marketplace/certified-operators-9w6ts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9w6ts\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.058521 4986 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.058828 4986 status_manager.go:851] "Failed to get status for pod" podUID="49d6e247-25ce-45e1-b2fe-2e3ec70cf966" pod="openshift-marketplace/redhat-operators-dgwd8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-dgwd8\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.059017 4986 status_manager.go:851] "Failed to get status for pod" podUID="b2dbfdc4-7122-4b7f-bfdd-189396fb1c77" pod="openshift-marketplace/redhat-marketplace-l6bpp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-l6bpp\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.059201 4986 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.067778 4986 status_manager.go:851] "Failed to get status for pod" podUID="d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f" pod="openshift-marketplace/certified-operators-vzt5m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vzt5m\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.068569 4986 status_manager.go:851] "Failed to get status for pod" podUID="37f6dc70-6f91-4162-a225-239a999e4320" pod="openshift-marketplace/community-operators-bzs74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bzs74\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.069003 4986 status_manager.go:851] "Failed to get status for pod" podUID="bb0a57df-0313-4698-9e73-373c97e2fb72" pod="openshift-marketplace/certified-operators-9w6ts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9w6ts\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.069678 4986 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.069932 4986 status_manager.go:851] "Failed to get status for pod" podUID="49d6e247-25ce-45e1-b2fe-2e3ec70cf966" pod="openshift-marketplace/redhat-operators-dgwd8" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-dgwd8\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.070403 4986 status_manager.go:851] "Failed to get status for pod" podUID="b2dbfdc4-7122-4b7f-bfdd-189396fb1c77" pod="openshift-marketplace/redhat-marketplace-l6bpp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-l6bpp\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.070620 4986 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.070817 4986 status_manager.go:851] "Failed to get status for pod" podUID="df90b001-135e-4c75-a67e-3084e905378a" pod="openshift-marketplace/redhat-marketplace-6mz6x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6mz6x\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.071436 4986 status_manager.go:851] "Failed to get status for pod" podUID="fb60d5f6-dca0-4f66-91c4-00dd32d26bcf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.071792 4986 status_manager.go:851] "Failed to get status for pod" podUID="9e009819-9f9d-48db-a20b-dc29cef30887" pod="openshift-marketplace/redhat-operators-wnfnb" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wnfnb\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.072294 4986 status_manager.go:851] "Failed to get status for pod" podUID="d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f" pod="openshift-marketplace/certified-operators-vzt5m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vzt5m\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.072631 4986 status_manager.go:851] "Failed to get status for pod" podUID="37f6dc70-6f91-4162-a225-239a999e4320" pod="openshift-marketplace/community-operators-bzs74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bzs74\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.072842 4986 status_manager.go:851] "Failed to get status for pod" podUID="bb0a57df-0313-4698-9e73-373c97e2fb72" pod="openshift-marketplace/certified-operators-9w6ts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9w6ts\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.073051 4986 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.073344 4986 status_manager.go:851] "Failed to get status for pod" podUID="49d6e247-25ce-45e1-b2fe-2e3ec70cf966" pod="openshift-marketplace/redhat-operators-dgwd8" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-dgwd8\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.073591 4986 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.073851 4986 status_manager.go:851] "Failed to get status for pod" podUID="b2dbfdc4-7122-4b7f-bfdd-189396fb1c77" pod="openshift-marketplace/redhat-marketplace-l6bpp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-l6bpp\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.074037 4986 status_manager.go:851] "Failed to get status for pod" podUID="df90b001-135e-4c75-a67e-3084e905378a" pod="openshift-marketplace/redhat-marketplace-6mz6x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6mz6x\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.074206 4986 status_manager.go:851] "Failed to get status for pod" podUID="fb60d5f6-dca0-4f66-91c4-00dd32d26bcf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.074434 4986 status_manager.go:851] "Failed to get status for pod" podUID="9e009819-9f9d-48db-a20b-dc29cef30887" pod="openshift-marketplace/redhat-operators-wnfnb" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wnfnb\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.074672 4986 status_manager.go:851] "Failed to get status for pod" podUID="d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f" pod="openshift-marketplace/certified-operators-vzt5m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vzt5m\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.074852 4986 status_manager.go:851] "Failed to get status for pod" podUID="37f6dc70-6f91-4162-a225-239a999e4320" pod="openshift-marketplace/community-operators-bzs74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bzs74\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.075028 4986 status_manager.go:851] "Failed to get status for pod" podUID="bb0a57df-0313-4698-9e73-373c97e2fb72" pod="openshift-marketplace/certified-operators-9w6ts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9w6ts\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.075206 4986 status_manager.go:851] "Failed to get status for pod" podUID="49d6e247-25ce-45e1-b2fe-2e3ec70cf966" pod="openshift-marketplace/redhat-operators-dgwd8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-dgwd8\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.075419 4986 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.075715 4986 status_manager.go:851] "Failed to get status for pod" podUID="b2dbfdc4-7122-4b7f-bfdd-189396fb1c77" pod="openshift-marketplace/redhat-marketplace-l6bpp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-l6bpp\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.076125 4986 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.076472 4986 status_manager.go:851] "Failed to get status for pod" podUID="df90b001-135e-4c75-a67e-3084e905378a" pod="openshift-marketplace/redhat-marketplace-6mz6x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6mz6x\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.076823 4986 status_manager.go:851] "Failed to get status for pod" podUID="fb60d5f6-dca0-4f66-91c4-00dd32d26bcf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.087961 4986 status_manager.go:851] "Failed to get status for pod" podUID="9e009819-9f9d-48db-a20b-dc29cef30887" pod="openshift-marketplace/redhat-operators-wnfnb" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wnfnb\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.091113 4986 scope.go:117] "RemoveContainer" containerID="9956c60d607c5bcb9a97bc8b02faeac53e525e67623f77c4e2114a9b17cc0707" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.126708 4986 scope.go:117] "RemoveContainer" containerID="afdcd8aa4a8e311b96feeb979a90355627776e61e9ced20a0c82f879b7546fcc" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.147529 4986 scope.go:117] "RemoveContainer" containerID="99e3e9120a5aea1c711199d17da5aeb9e9702a87de8aa263cc5a0f4b9562411f" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.176548 4986 scope.go:117] "RemoveContainer" containerID="66dc9dd79127f8956aa40c0b7eea666ffb04b1d9b0485c51919d89c460b975c6" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.191225 4986 scope.go:117] "RemoveContainer" containerID="0cad1c7c53e6c4dfc96a320d019c45fe9bb0847f033b9676e62422dc214c3241" Dec 03 13:00:21 crc kubenswrapper[4986]: E1203 13:00:21.196576 4986 log.go:32] "RunPodSandbox from runtime service failed" err=< Dec 03 13:00:21 crc kubenswrapper[4986]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29412780-c4mlz_openshift-operator-lifecycle-manager_f68bd6da-f9ec-44ee-9a80-1b5820c75d8e_0(9ed031e177e05821497f54b94ef52e2d8476d01e052f217bee1d435e7988d97f): error adding pod openshift-operator-lifecycle-manager_collect-profiles-29412780-c4mlz to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"9ed031e177e05821497f54b94ef52e2d8476d01e052f217bee1d435e7988d97f" Netns:"/var/run/netns/5e425263-1108-41d7-9646-7a7acf29511b" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29412780-c4mlz;K8S_POD_INFRA_CONTAINER_ID=9ed031e177e05821497f54b94ef52e2d8476d01e052f217bee1d435e7988d97f;K8S_POD_UID=f68bd6da-f9ec-44ee-9a80-1b5820c75d8e" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/collect-profiles-29412780-c4mlz] networking: Multus: [openshift-operator-lifecycle-manager/collect-profiles-29412780-c4mlz/f68bd6da-f9ec-44ee-9a80-1b5820c75d8e]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod collect-profiles-29412780-c4mlz in out of cluster comm: SetNetworkStatus: failed to update the pod collect-profiles-29412780-c4mlz in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/collect-profiles-29412780-c4mlz?timeout=1m0s": dial tcp 38.129.56.112:6443: connect: connection refused Dec 03 13:00:21 crc kubenswrapper[4986]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 03 13:00:21 crc kubenswrapper[4986]: > Dec 03 13:00:21 crc kubenswrapper[4986]: E1203 13:00:21.196641 4986 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Dec 03 13:00:21 crc kubenswrapper[4986]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29412780-c4mlz_openshift-operator-lifecycle-manager_f68bd6da-f9ec-44ee-9a80-1b5820c75d8e_0(9ed031e177e05821497f54b94ef52e2d8476d01e052f217bee1d435e7988d97f): error adding pod openshift-operator-lifecycle-manager_collect-profiles-29412780-c4mlz to CNI network "multus-cni-network": 
plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"9ed031e177e05821497f54b94ef52e2d8476d01e052f217bee1d435e7988d97f" Netns:"/var/run/netns/5e425263-1108-41d7-9646-7a7acf29511b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29412780-c4mlz;K8S_POD_INFRA_CONTAINER_ID=9ed031e177e05821497f54b94ef52e2d8476d01e052f217bee1d435e7988d97f;K8S_POD_UID=f68bd6da-f9ec-44ee-9a80-1b5820c75d8e" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/collect-profiles-29412780-c4mlz] networking: Multus: [openshift-operator-lifecycle-manager/collect-profiles-29412780-c4mlz/f68bd6da-f9ec-44ee-9a80-1b5820c75d8e]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod collect-profiles-29412780-c4mlz in out of cluster comm: SetNetworkStatus: failed to update the pod collect-profiles-29412780-c4mlz in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/collect-profiles-29412780-c4mlz?timeout=1m0s": dial tcp 38.129.56.112:6443: connect: connection refused Dec 03 13:00:21 crc kubenswrapper[4986]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 03 13:00:21 crc kubenswrapper[4986]: > pod="openshift-operator-lifecycle-manager/collect-profiles-29412780-c4mlz" Dec 03 13:00:21 crc kubenswrapper[4986]: E1203 13:00:21.196666 4986 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Dec 03 13:00:21 crc kubenswrapper[4986]: rpc error: code = Unknown 
desc = failed to create pod network sandbox k8s_collect-profiles-29412780-c4mlz_openshift-operator-lifecycle-manager_f68bd6da-f9ec-44ee-9a80-1b5820c75d8e_0(9ed031e177e05821497f54b94ef52e2d8476d01e052f217bee1d435e7988d97f): error adding pod openshift-operator-lifecycle-manager_collect-profiles-29412780-c4mlz to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"9ed031e177e05821497f54b94ef52e2d8476d01e052f217bee1d435e7988d97f" Netns:"/var/run/netns/5e425263-1108-41d7-9646-7a7acf29511b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29412780-c4mlz;K8S_POD_INFRA_CONTAINER_ID=9ed031e177e05821497f54b94ef52e2d8476d01e052f217bee1d435e7988d97f;K8S_POD_UID=f68bd6da-f9ec-44ee-9a80-1b5820c75d8e" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/collect-profiles-29412780-c4mlz] networking: Multus: [openshift-operator-lifecycle-manager/collect-profiles-29412780-c4mlz/f68bd6da-f9ec-44ee-9a80-1b5820c75d8e]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod collect-profiles-29412780-c4mlz in out of cluster comm: SetNetworkStatus: failed to update the pod collect-profiles-29412780-c4mlz in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/collect-profiles-29412780-c4mlz?timeout=1m0s": dial tcp 38.129.56.112:6443: connect: connection refused Dec 03 13:00:21 crc kubenswrapper[4986]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 03 13:00:21 crc kubenswrapper[4986]: > pod="openshift-operator-lifecycle-manager/collect-profiles-29412780-c4mlz" Dec 03 13:00:21 crc kubenswrapper[4986]: E1203 13:00:21.196737 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"collect-profiles-29412780-c4mlz_openshift-operator-lifecycle-manager(f68bd6da-f9ec-44ee-9a80-1b5820c75d8e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"collect-profiles-29412780-c4mlz_openshift-operator-lifecycle-manager(f68bd6da-f9ec-44ee-9a80-1b5820c75d8e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29412780-c4mlz_openshift-operator-lifecycle-manager_f68bd6da-f9ec-44ee-9a80-1b5820c75d8e_0(9ed031e177e05821497f54b94ef52e2d8476d01e052f217bee1d435e7988d97f): error adding pod openshift-operator-lifecycle-manager_collect-profiles-29412780-c4mlz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"9ed031e177e05821497f54b94ef52e2d8476d01e052f217bee1d435e7988d97f\\\" Netns:\\\"/var/run/netns/5e425263-1108-41d7-9646-7a7acf29511b\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29412780-c4mlz;K8S_POD_INFRA_CONTAINER_ID=9ed031e177e05821497f54b94ef52e2d8476d01e052f217bee1d435e7988d97f;K8S_POD_UID=f68bd6da-f9ec-44ee-9a80-1b5820c75d8e\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/collect-profiles-29412780-c4mlz] 
networking: Multus: [openshift-operator-lifecycle-manager/collect-profiles-29412780-c4mlz/f68bd6da-f9ec-44ee-9a80-1b5820c75d8e]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod collect-profiles-29412780-c4mlz in out of cluster comm: SetNetworkStatus: failed to update the pod collect-profiles-29412780-c4mlz in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/collect-profiles-29412780-c4mlz?timeout=1m0s\\\": dial tcp 38.129.56.112:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-operator-lifecycle-manager/collect-profiles-29412780-c4mlz" podUID="f68bd6da-f9ec-44ee-9a80-1b5820c75d8e" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.205237 4986 scope.go:117] "RemoveContainer" containerID="f67254c5fdf27e72c006f14b66f9df3ad8407bbe2e33e3af31bbaf860f8834cd" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.226409 4986 scope.go:117] "RemoveContainer" containerID="51d2b301b70919f9cb92ebd21387a58082157ef5f0788c77ae18aa4112cfa056" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.242620 4986 scope.go:117] "RemoveContainer" containerID="70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.249384 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wnfnb" Dec 03 13:00:21 crc kubenswrapper[4986]: 
I1203 13:00:21.249431 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wnfnb" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.266784 4986 scope.go:117] "RemoveContainer" containerID="8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.289227 4986 scope.go:117] "RemoveContainer" containerID="0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.301105 4986 scope.go:117] "RemoveContainer" containerID="5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.317099 4986 scope.go:117] "RemoveContainer" containerID="04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.337216 4986 scope.go:117] "RemoveContainer" containerID="7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.355486 4986 scope.go:117] "RemoveContainer" containerID="29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.380504 4986 scope.go:117] "RemoveContainer" containerID="70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e" Dec 03 13:00:21 crc kubenswrapper[4986]: E1203 13:00:21.381112 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e\": container with ID starting with 70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e not found: ID does not exist" containerID="70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.381145 4986 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e"} err="failed to get container status \"70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e\": rpc error: code = NotFound desc = could not find container \"70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e\": container with ID starting with 70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e not found: ID does not exist" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.381170 4986 scope.go:117] "RemoveContainer" containerID="8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7" Dec 03 13:00:21 crc kubenswrapper[4986]: E1203 13:00:21.382112 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7\": container with ID starting with 8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7 not found: ID does not exist" containerID="8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.382140 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7"} err="failed to get container status \"8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7\": rpc error: code = NotFound desc = could not find container \"8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7\": container with ID starting with 8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7 not found: ID does not exist" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.382155 4986 scope.go:117] "RemoveContainer" containerID="0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1" Dec 03 13:00:21 crc kubenswrapper[4986]: E1203 13:00:21.382563 4986 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1\": container with ID starting with 0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1 not found: ID does not exist" containerID="0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.382590 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1"} err="failed to get container status \"0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1\": rpc error: code = NotFound desc = could not find container \"0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1\": container with ID starting with 0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1 not found: ID does not exist" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.382602 4986 scope.go:117] "RemoveContainer" containerID="5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a" Dec 03 13:00:21 crc kubenswrapper[4986]: E1203 13:00:21.382907 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a\": container with ID starting with 5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a not found: ID does not exist" containerID="5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.382929 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a"} err="failed to get container status \"5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a\": rpc error: code = NotFound desc = could not find container 
\"5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a\": container with ID starting with 5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a not found: ID does not exist" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.382940 4986 scope.go:117] "RemoveContainer" containerID="04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674" Dec 03 13:00:21 crc kubenswrapper[4986]: E1203 13:00:21.383269 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674\": container with ID starting with 04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674 not found: ID does not exist" containerID="04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.383369 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674"} err="failed to get container status \"04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674\": rpc error: code = NotFound desc = could not find container \"04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674\": container with ID starting with 04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674 not found: ID does not exist" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.383387 4986 scope.go:117] "RemoveContainer" containerID="7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95" Dec 03 13:00:21 crc kubenswrapper[4986]: E1203 13:00:21.383607 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95\": container with ID starting with 7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95 not found: ID does not exist" 
containerID="7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.383674 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95"} err="failed to get container status \"7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95\": rpc error: code = NotFound desc = could not find container \"7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95\": container with ID starting with 7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95 not found: ID does not exist" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.383688 4986 scope.go:117] "RemoveContainer" containerID="29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41" Dec 03 13:00:21 crc kubenswrapper[4986]: E1203 13:00:21.383927 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\": container with ID starting with 29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41 not found: ID does not exist" containerID="29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.383949 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41"} err="failed to get container status \"29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\": rpc error: code = NotFound desc = could not find container \"29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\": container with ID starting with 29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41 not found: ID does not exist" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.383962 4986 scope.go:117] 
"RemoveContainer" containerID="70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.384208 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e"} err="failed to get container status \"70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e\": rpc error: code = NotFound desc = could not find container \"70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e\": container with ID starting with 70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e not found: ID does not exist" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.384229 4986 scope.go:117] "RemoveContainer" containerID="8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.384468 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7"} err="failed to get container status \"8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7\": rpc error: code = NotFound desc = could not find container \"8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7\": container with ID starting with 8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7 not found: ID does not exist" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.384492 4986 scope.go:117] "RemoveContainer" containerID="0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.384742 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1"} err="failed to get container status \"0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1\": rpc error: code = 
NotFound desc = could not find container \"0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1\": container with ID starting with 0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1 not found: ID does not exist" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.384767 4986 scope.go:117] "RemoveContainer" containerID="5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.384984 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a"} err="failed to get container status \"5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a\": rpc error: code = NotFound desc = could not find container \"5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a\": container with ID starting with 5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a not found: ID does not exist" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.385003 4986 scope.go:117] "RemoveContainer" containerID="04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.385251 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674"} err="failed to get container status \"04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674\": rpc error: code = NotFound desc = could not find container \"04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674\": container with ID starting with 04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674 not found: ID does not exist" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.385268 4986 scope.go:117] "RemoveContainer" containerID="7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95" Dec 03 13:00:21 crc 
kubenswrapper[4986]: I1203 13:00:21.385713 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95"} err="failed to get container status \"7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95\": rpc error: code = NotFound desc = could not find container \"7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95\": container with ID starting with 7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95 not found: ID does not exist" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.385740 4986 scope.go:117] "RemoveContainer" containerID="29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.386079 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41"} err="failed to get container status \"29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\": rpc error: code = NotFound desc = could not find container \"29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\": container with ID starting with 29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41 not found: ID does not exist" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.386102 4986 scope.go:117] "RemoveContainer" containerID="70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.388094 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e"} err="failed to get container status \"70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e\": rpc error: code = NotFound desc = could not find container \"70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e\": container 
with ID starting with 70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e not found: ID does not exist" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.388132 4986 scope.go:117] "RemoveContainer" containerID="8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.388476 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7"} err="failed to get container status \"8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7\": rpc error: code = NotFound desc = could not find container \"8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7\": container with ID starting with 8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7 not found: ID does not exist" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.388503 4986 scope.go:117] "RemoveContainer" containerID="0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.388710 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1"} err="failed to get container status \"0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1\": rpc error: code = NotFound desc = could not find container \"0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1\": container with ID starting with 0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1 not found: ID does not exist" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.388734 4986 scope.go:117] "RemoveContainer" containerID="5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.389004 4986 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a"} err="failed to get container status \"5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a\": rpc error: code = NotFound desc = could not find container \"5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a\": container with ID starting with 5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a not found: ID does not exist" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.389030 4986 scope.go:117] "RemoveContainer" containerID="04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.389938 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674"} err="failed to get container status \"04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674\": rpc error: code = NotFound desc = could not find container \"04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674\": container with ID starting with 04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674 not found: ID does not exist" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.389999 4986 scope.go:117] "RemoveContainer" containerID="7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.390351 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95"} err="failed to get container status \"7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95\": rpc error: code = NotFound desc = could not find container \"7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95\": container with ID starting with 7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95 not found: ID does not 
exist" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.390389 4986 scope.go:117] "RemoveContainer" containerID="29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.392785 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41"} err="failed to get container status \"29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\": rpc error: code = NotFound desc = could not find container \"29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\": container with ID starting with 29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41 not found: ID does not exist" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.392824 4986 scope.go:117] "RemoveContainer" containerID="70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.393255 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e"} err="failed to get container status \"70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e\": rpc error: code = NotFound desc = could not find container \"70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e\": container with ID starting with 70f0aa85e0be54890d4367deaf1a95970a7b0ac9d95afdd07c431680bdf7f15e not found: ID does not exist" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.393313 4986 scope.go:117] "RemoveContainer" containerID="8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.393643 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7"} err="failed to get container status 
\"8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7\": rpc error: code = NotFound desc = could not find container \"8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7\": container with ID starting with 8c0ed6891f87bfc3c62f0f13739720e012dc8fa3b122a18e799e164e4656abb7 not found: ID does not exist" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.393701 4986 scope.go:117] "RemoveContainer" containerID="0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.394647 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1"} err="failed to get container status \"0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1\": rpc error: code = NotFound desc = could not find container \"0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1\": container with ID starting with 0c8ecb87e56d3fa7d72b57a6f333db4244ee7f60381778e7099a6fc88d91e1e1 not found: ID does not exist" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.394690 4986 scope.go:117] "RemoveContainer" containerID="5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.395080 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a"} err="failed to get container status \"5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a\": rpc error: code = NotFound desc = could not find container \"5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a\": container with ID starting with 5c0ca216d3d1121b7035d5d7ead25d459f95d56808997c02b3801cac0354c76a not found: ID does not exist" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.395118 4986 scope.go:117] "RemoveContainer" 
containerID="04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.396166 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674"} err="failed to get container status \"04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674\": rpc error: code = NotFound desc = could not find container \"04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674\": container with ID starting with 04aa8cf2e01507ace747505514511263d9c560608e33f592a2c10be1edad8674 not found: ID does not exist" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.396211 4986 scope.go:117] "RemoveContainer" containerID="7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.396652 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95"} err="failed to get container status \"7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95\": rpc error: code = NotFound desc = could not find container \"7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95\": container with ID starting with 7eb1602f2d542b1998655456e5c49fca8f55a45a07cb075f989d4b10f91f6c95 not found: ID does not exist" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.396690 4986 scope.go:117] "RemoveContainer" containerID="29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41" Dec 03 13:00:21 crc kubenswrapper[4986]: I1203 13:00:21.397065 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41"} err="failed to get container status \"29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\": rpc error: code = NotFound desc = could 
not find container \"29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41\": container with ID starting with 29d3053218b56785dddfdaa641902458bf21dd3be82c7cc7670f4cee8cea3d41 not found: ID does not exist" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.075022 4986 generic.go:334] "Generic (PLEG): container finished" podID="d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f" containerID="894818a368572ea3c57b48b8abcaef755f3b87b224801c8733c1833cf52cff4d" exitCode=0 Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.075105 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vzt5m" event={"ID":"d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f","Type":"ContainerDied","Data":"894818a368572ea3c57b48b8abcaef755f3b87b224801c8733c1833cf52cff4d"} Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.075500 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vzt5m" event={"ID":"d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f","Type":"ContainerStarted","Data":"eea71fc2fcb5242dbe6cf6f1a3dc20caaaa2f85d3ded122858372e2375318af1"} Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.076923 4986 status_manager.go:851] "Failed to get status for pod" podUID="d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f" pod="openshift-marketplace/certified-operators-vzt5m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vzt5m\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.077389 4986 status_manager.go:851] "Failed to get status for pod" podUID="37f6dc70-6f91-4162-a225-239a999e4320" pod="openshift-marketplace/community-operators-bzs74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bzs74\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.077644 4986 
status_manager.go:851] "Failed to get status for pod" podUID="bb0a57df-0313-4698-9e73-373c97e2fb72" pod="openshift-marketplace/certified-operators-9w6ts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9w6ts\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.078552 4986 status_manager.go:851] "Failed to get status for pod" podUID="49d6e247-25ce-45e1-b2fe-2e3ec70cf966" pod="openshift-marketplace/redhat-operators-dgwd8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-dgwd8\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.078876 4986 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.079255 4986 status_manager.go:851] "Failed to get status for pod" podUID="b2dbfdc4-7122-4b7f-bfdd-189396fb1c77" pod="openshift-marketplace/redhat-marketplace-l6bpp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-l6bpp\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.079492 4986 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.079681 4986 
status_manager.go:851] "Failed to get status for pod" podUID="df90b001-135e-4c75-a67e-3084e905378a" pod="openshift-marketplace/redhat-marketplace-6mz6x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6mz6x\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.079888 4986 status_manager.go:851] "Failed to get status for pod" podUID="fb60d5f6-dca0-4f66-91c4-00dd32d26bcf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.080068 4986 status_manager.go:851] "Failed to get status for pod" podUID="9e009819-9f9d-48db-a20b-dc29cef30887" pod="openshift-marketplace/redhat-operators-wnfnb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wnfnb\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.085559 4986 generic.go:334] "Generic (PLEG): container finished" podID="49d6e247-25ce-45e1-b2fe-2e3ec70cf966" containerID="e4f8b2a4b2924c3654d9f4f01b9a9164e507313f9ade81c18fc0992782e3fae0" exitCode=0 Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.085715 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dgwd8" event={"ID":"49d6e247-25ce-45e1-b2fe-2e3ec70cf966","Type":"ContainerDied","Data":"e4f8b2a4b2924c3654d9f4f01b9a9164e507313f9ade81c18fc0992782e3fae0"} Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.085891 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412780-c4mlz" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.087043 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412780-c4mlz" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.087295 4986 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.087573 4986 status_manager.go:851] "Failed to get status for pod" podUID="b2dbfdc4-7122-4b7f-bfdd-189396fb1c77" pod="openshift-marketplace/redhat-marketplace-l6bpp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-l6bpp\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.088010 4986 status_manager.go:851] "Failed to get status for pod" podUID="df90b001-135e-4c75-a67e-3084e905378a" pod="openshift-marketplace/redhat-marketplace-6mz6x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6mz6x\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.088364 4986 status_manager.go:851] "Failed to get status for pod" podUID="fb60d5f6-dca0-4f66-91c4-00dd32d26bcf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.088799 4986 status_manager.go:851] 
"Failed to get status for pod" podUID="9e009819-9f9d-48db-a20b-dc29cef30887" pod="openshift-marketplace/redhat-operators-wnfnb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wnfnb\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.089658 4986 status_manager.go:851] "Failed to get status for pod" podUID="d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f" pod="openshift-marketplace/certified-operators-vzt5m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vzt5m\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.090382 4986 status_manager.go:851] "Failed to get status for pod" podUID="37f6dc70-6f91-4162-a225-239a999e4320" pod="openshift-marketplace/community-operators-bzs74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bzs74\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.090934 4986 status_manager.go:851] "Failed to get status for pod" podUID="bb0a57df-0313-4698-9e73-373c97e2fb72" pod="openshift-marketplace/certified-operators-9w6ts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9w6ts\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.091294 4986 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.091642 4986 status_manager.go:851] "Failed to get status for 
pod" podUID="49d6e247-25ce-45e1-b2fe-2e3ec70cf966" pod="openshift-marketplace/redhat-operators-dgwd8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-dgwd8\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.250987 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-2k592" podUID="0636c9e2-4b00-4d01-8426-5fbfd9f9fa88" containerName="oauth-openshift" containerID="cri-o://60fb09eaea08025ec0c51955536e27a9b7699d4e13d804c5329d1b3e5991b8ae" gracePeriod=15 Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.292891 4986 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wnfnb" podUID="9e009819-9f9d-48db-a20b-dc29cef30887" containerName="registry-server" probeResult="failure" output=< Dec 03 13:00:22 crc kubenswrapper[4986]: timeout: failed to connect service ":50051" within 1s Dec 03 13:00:22 crc kubenswrapper[4986]: > Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.355680 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.356998 4986 status_manager.go:851] "Failed to get status for pod" podUID="fb60d5f6-dca0-4f66-91c4-00dd32d26bcf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.358345 4986 status_manager.go:851] "Failed to get status for pod" podUID="9e009819-9f9d-48db-a20b-dc29cef30887" pod="openshift-marketplace/redhat-operators-wnfnb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wnfnb\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.358526 4986 status_manager.go:851] "Failed to get status for pod" podUID="d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f" pod="openshift-marketplace/certified-operators-vzt5m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vzt5m\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.358680 4986 status_manager.go:851] "Failed to get status for pod" podUID="37f6dc70-6f91-4162-a225-239a999e4320" pod="openshift-marketplace/community-operators-bzs74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bzs74\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.358825 4986 status_manager.go:851] "Failed to get status for pod" podUID="bb0a57df-0313-4698-9e73-373c97e2fb72" pod="openshift-marketplace/certified-operators-9w6ts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9w6ts\": dial tcp 
38.129.56.112:6443: connect: connection refused" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.359225 4986 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.359418 4986 status_manager.go:851] "Failed to get status for pod" podUID="49d6e247-25ce-45e1-b2fe-2e3ec70cf966" pod="openshift-marketplace/redhat-operators-dgwd8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-dgwd8\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.359564 4986 status_manager.go:851] "Failed to get status for pod" podUID="b2dbfdc4-7122-4b7f-bfdd-189396fb1c77" pod="openshift-marketplace/redhat-marketplace-l6bpp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-l6bpp\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.359698 4986 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.359839 4986 status_manager.go:851] "Failed to get status for pod" podUID="df90b001-135e-4c75-a67e-3084e905378a" pod="openshift-marketplace/redhat-marketplace-6mz6x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6mz6x\": dial tcp 
38.129.56.112:6443: connect: connection refused" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.510030 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fb60d5f6-dca0-4f66-91c4-00dd32d26bcf-kubelet-dir\") pod \"fb60d5f6-dca0-4f66-91c4-00dd32d26bcf\" (UID: \"fb60d5f6-dca0-4f66-91c4-00dd32d26bcf\") " Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.510104 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fb60d5f6-dca0-4f66-91c4-00dd32d26bcf-var-lock\") pod \"fb60d5f6-dca0-4f66-91c4-00dd32d26bcf\" (UID: \"fb60d5f6-dca0-4f66-91c4-00dd32d26bcf\") " Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.510117 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb60d5f6-dca0-4f66-91c4-00dd32d26bcf-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "fb60d5f6-dca0-4f66-91c4-00dd32d26bcf" (UID: "fb60d5f6-dca0-4f66-91c4-00dd32d26bcf"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.510137 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fb60d5f6-dca0-4f66-91c4-00dd32d26bcf-kube-api-access\") pod \"fb60d5f6-dca0-4f66-91c4-00dd32d26bcf\" (UID: \"fb60d5f6-dca0-4f66-91c4-00dd32d26bcf\") " Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.510127 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb60d5f6-dca0-4f66-91c4-00dd32d26bcf-var-lock" (OuterVolumeSpecName: "var-lock") pod "fb60d5f6-dca0-4f66-91c4-00dd32d26bcf" (UID: "fb60d5f6-dca0-4f66-91c4-00dd32d26bcf"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.510477 4986 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fb60d5f6-dca0-4f66-91c4-00dd32d26bcf-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.510492 4986 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fb60d5f6-dca0-4f66-91c4-00dd32d26bcf-var-lock\") on node \"crc\" DevicePath \"\"" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.515851 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb60d5f6-dca0-4f66-91c4-00dd32d26bcf-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "fb60d5f6-dca0-4f66-91c4-00dd32d26bcf" (UID: "fb60d5f6-dca0-4f66-91c4-00dd32d26bcf"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:00:22 crc kubenswrapper[4986]: E1203 13:00:22.610241 4986 log.go:32] "RunPodSandbox from runtime service failed" err=< Dec 03 13:00:22 crc kubenswrapper[4986]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29412780-c4mlz_openshift-operator-lifecycle-manager_f68bd6da-f9ec-44ee-9a80-1b5820c75d8e_0(3de52b316fafef2de3f6c315b67d92fcbb138f42fd44fac1a7ff899a05bb632c): error adding pod openshift-operator-lifecycle-manager_collect-profiles-29412780-c4mlz to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"3de52b316fafef2de3f6c315b67d92fcbb138f42fd44fac1a7ff899a05bb632c" Netns:"/var/run/netns/7caec1a4-c6b5-4360-a8d8-f948f4ab95d3" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29412780-c4mlz;K8S_POD_INFRA_CONTAINER_ID=3de52b316fafef2de3f6c315b67d92fcbb138f42fd44fac1a7ff899a05bb632c;K8S_POD_UID=f68bd6da-f9ec-44ee-9a80-1b5820c75d8e" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/collect-profiles-29412780-c4mlz] networking: Multus: [openshift-operator-lifecycle-manager/collect-profiles-29412780-c4mlz/f68bd6da-f9ec-44ee-9a80-1b5820c75d8e]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod collect-profiles-29412780-c4mlz in out of cluster comm: SetNetworkStatus: failed to update the pod collect-profiles-29412780-c4mlz in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/collect-profiles-29412780-c4mlz?timeout=1m0s": dial tcp 38.129.56.112:6443: connect: connection refused Dec 03 13:00:22 crc kubenswrapper[4986]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 03 13:00:22 crc kubenswrapper[4986]: > Dec 03 13:00:22 crc kubenswrapper[4986]: E1203 13:00:22.610351 4986 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Dec 03 13:00:22 crc kubenswrapper[4986]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29412780-c4mlz_openshift-operator-lifecycle-manager_f68bd6da-f9ec-44ee-9a80-1b5820c75d8e_0(3de52b316fafef2de3f6c315b67d92fcbb138f42fd44fac1a7ff899a05bb632c): error adding pod openshift-operator-lifecycle-manager_collect-profiles-29412780-c4mlz to CNI network "multus-cni-network": 
plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"3de52b316fafef2de3f6c315b67d92fcbb138f42fd44fac1a7ff899a05bb632c" Netns:"/var/run/netns/7caec1a4-c6b5-4360-a8d8-f948f4ab95d3" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29412780-c4mlz;K8S_POD_INFRA_CONTAINER_ID=3de52b316fafef2de3f6c315b67d92fcbb138f42fd44fac1a7ff899a05bb632c;K8S_POD_UID=f68bd6da-f9ec-44ee-9a80-1b5820c75d8e" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/collect-profiles-29412780-c4mlz] networking: Multus: [openshift-operator-lifecycle-manager/collect-profiles-29412780-c4mlz/f68bd6da-f9ec-44ee-9a80-1b5820c75d8e]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod collect-profiles-29412780-c4mlz in out of cluster comm: SetNetworkStatus: failed to update the pod collect-profiles-29412780-c4mlz in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/collect-profiles-29412780-c4mlz?timeout=1m0s": dial tcp 38.129.56.112:6443: connect: connection refused Dec 03 13:00:22 crc kubenswrapper[4986]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 03 13:00:22 crc kubenswrapper[4986]: > pod="openshift-operator-lifecycle-manager/collect-profiles-29412780-c4mlz" Dec 03 13:00:22 crc kubenswrapper[4986]: E1203 13:00:22.610376 4986 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Dec 03 13:00:22 crc kubenswrapper[4986]: rpc error: code = Unknown 
desc = failed to create pod network sandbox k8s_collect-profiles-29412780-c4mlz_openshift-operator-lifecycle-manager_f68bd6da-f9ec-44ee-9a80-1b5820c75d8e_0(3de52b316fafef2de3f6c315b67d92fcbb138f42fd44fac1a7ff899a05bb632c): error adding pod openshift-operator-lifecycle-manager_collect-profiles-29412780-c4mlz to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"3de52b316fafef2de3f6c315b67d92fcbb138f42fd44fac1a7ff899a05bb632c" Netns:"/var/run/netns/7caec1a4-c6b5-4360-a8d8-f948f4ab95d3" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29412780-c4mlz;K8S_POD_INFRA_CONTAINER_ID=3de52b316fafef2de3f6c315b67d92fcbb138f42fd44fac1a7ff899a05bb632c;K8S_POD_UID=f68bd6da-f9ec-44ee-9a80-1b5820c75d8e" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/collect-profiles-29412780-c4mlz] networking: Multus: [openshift-operator-lifecycle-manager/collect-profiles-29412780-c4mlz/f68bd6da-f9ec-44ee-9a80-1b5820c75d8e]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod collect-profiles-29412780-c4mlz in out of cluster comm: SetNetworkStatus: failed to update the pod collect-profiles-29412780-c4mlz in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/collect-profiles-29412780-c4mlz?timeout=1m0s": dial tcp 38.129.56.112:6443: connect: connection refused Dec 03 13:00:22 crc kubenswrapper[4986]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 03 13:00:22 crc kubenswrapper[4986]: > pod="openshift-operator-lifecycle-manager/collect-profiles-29412780-c4mlz" Dec 03 13:00:22 crc kubenswrapper[4986]: E1203 13:00:22.610446 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"collect-profiles-29412780-c4mlz_openshift-operator-lifecycle-manager(f68bd6da-f9ec-44ee-9a80-1b5820c75d8e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"collect-profiles-29412780-c4mlz_openshift-operator-lifecycle-manager(f68bd6da-f9ec-44ee-9a80-1b5820c75d8e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29412780-c4mlz_openshift-operator-lifecycle-manager_f68bd6da-f9ec-44ee-9a80-1b5820c75d8e_0(3de52b316fafef2de3f6c315b67d92fcbb138f42fd44fac1a7ff899a05bb632c): error adding pod openshift-operator-lifecycle-manager_collect-profiles-29412780-c4mlz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"3de52b316fafef2de3f6c315b67d92fcbb138f42fd44fac1a7ff899a05bb632c\\\" Netns:\\\"/var/run/netns/7caec1a4-c6b5-4360-a8d8-f948f4ab95d3\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29412780-c4mlz;K8S_POD_INFRA_CONTAINER_ID=3de52b316fafef2de3f6c315b67d92fcbb138f42fd44fac1a7ff899a05bb632c;K8S_POD_UID=f68bd6da-f9ec-44ee-9a80-1b5820c75d8e\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/collect-profiles-29412780-c4mlz] 
networking: Multus: [openshift-operator-lifecycle-manager/collect-profiles-29412780-c4mlz/f68bd6da-f9ec-44ee-9a80-1b5820c75d8e]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod collect-profiles-29412780-c4mlz in out of cluster comm: SetNetworkStatus: failed to update the pod collect-profiles-29412780-c4mlz in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/collect-profiles-29412780-c4mlz?timeout=1m0s\\\": dial tcp 38.129.56.112:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-operator-lifecycle-manager/collect-profiles-29412780-c4mlz" podUID="f68bd6da-f9ec-44ee-9a80-1b5820c75d8e" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.611192 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fb60d5f6-dca0-4f66-91c4-00dd32d26bcf-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 03 13:00:22 crc kubenswrapper[4986]: E1203 13:00:22.669689 4986 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:22 crc kubenswrapper[4986]: E1203 13:00:22.671944 4986 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:22 crc kubenswrapper[4986]: E1203 13:00:22.672218 4986 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:22 crc kubenswrapper[4986]: E1203 13:00:22.672442 4986 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:22 crc kubenswrapper[4986]: E1203 13:00:22.672629 4986 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.672658 4986 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 03 13:00:22 crc kubenswrapper[4986]: E1203 13:00:22.673582 4986 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.112:6443: connect: connection refused" interval="200ms" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.700020 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-2k592" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.700414 4986 status_manager.go:851] "Failed to get status for pod" podUID="d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f" pod="openshift-marketplace/certified-operators-vzt5m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vzt5m\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.700668 4986 status_manager.go:851] "Failed to get status for pod" podUID="37f6dc70-6f91-4162-a225-239a999e4320" pod="openshift-marketplace/community-operators-bzs74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bzs74\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.701022 4986 status_manager.go:851] "Failed to get status for pod" podUID="0636c9e2-4b00-4d01-8426-5fbfd9f9fa88" pod="openshift-authentication/oauth-openshift-558db77b4-2k592" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-2k592\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.701386 4986 status_manager.go:851] "Failed to get status for pod" podUID="bb0a57df-0313-4698-9e73-373c97e2fb72" pod="openshift-marketplace/certified-operators-9w6ts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9w6ts\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.701649 4986 status_manager.go:851] "Failed to get status for pod" podUID="49d6e247-25ce-45e1-b2fe-2e3ec70cf966" pod="openshift-marketplace/redhat-operators-dgwd8" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-dgwd8\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.701933 4986 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.702167 4986 status_manager.go:851] "Failed to get status for pod" podUID="b2dbfdc4-7122-4b7f-bfdd-189396fb1c77" pod="openshift-marketplace/redhat-marketplace-l6bpp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-l6bpp\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.702419 4986 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.702659 4986 status_manager.go:851] "Failed to get status for pod" podUID="df90b001-135e-4c75-a67e-3084e905378a" pod="openshift-marketplace/redhat-marketplace-6mz6x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6mz6x\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.702868 4986 status_manager.go:851] "Failed to get status for pod" podUID="fb60d5f6-dca0-4f66-91c4-00dd32d26bcf" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.703122 4986 status_manager.go:851] "Failed to get status for pod" podUID="9e009819-9f9d-48db-a20b-dc29cef30887" pod="openshift-marketplace/redhat-operators-wnfnb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wnfnb\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.813156 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnfjq\" (UniqueName: \"kubernetes.io/projected/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-kube-api-access-xnfjq\") pod \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\" (UID: \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\") " Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.813219 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-user-template-provider-selection\") pod \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\" (UID: \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\") " Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.813265 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-user-template-login\") pod \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\" (UID: \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\") " Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.813341 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-audit-policies\") pod 
\"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\" (UID: \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\") " Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.813376 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-system-service-ca\") pod \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\" (UID: \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\") " Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.813413 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-system-ocp-branding-template\") pod \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\" (UID: \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\") " Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.813436 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-audit-dir\") pod \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\" (UID: \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\") " Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.813481 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-system-router-certs\") pod \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\" (UID: \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\") " Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.813514 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-system-trusted-ca-bundle\") pod \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\" (UID: 
\"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\") " Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.813571 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-system-cliconfig\") pod \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\" (UID: \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\") " Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.813599 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-user-template-error\") pod \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\" (UID: \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\") " Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.813641 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-system-serving-cert\") pod \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\" (UID: \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\") " Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.813665 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-system-session\") pod \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\" (UID: \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\") " Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.813694 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-user-idp-0-file-data\") pod \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\" (UID: \"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88\") " Dec 03 13:00:22 crc kubenswrapper[4986]: 
I1203 13:00:22.813961 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "0636c9e2-4b00-4d01-8426-5fbfd9f9fa88" (UID: "0636c9e2-4b00-4d01-8426-5fbfd9f9fa88"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.814838 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "0636c9e2-4b00-4d01-8426-5fbfd9f9fa88" (UID: "0636c9e2-4b00-4d01-8426-5fbfd9f9fa88"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.817716 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "0636c9e2-4b00-4d01-8426-5fbfd9f9fa88" (UID: "0636c9e2-4b00-4d01-8426-5fbfd9f9fa88"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.817756 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-kube-api-access-xnfjq" (OuterVolumeSpecName: "kube-api-access-xnfjq") pod "0636c9e2-4b00-4d01-8426-5fbfd9f9fa88" (UID: "0636c9e2-4b00-4d01-8426-5fbfd9f9fa88"). InnerVolumeSpecName "kube-api-access-xnfjq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.818043 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "0636c9e2-4b00-4d01-8426-5fbfd9f9fa88" (UID: "0636c9e2-4b00-4d01-8426-5fbfd9f9fa88"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.818087 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "0636c9e2-4b00-4d01-8426-5fbfd9f9fa88" (UID: "0636c9e2-4b00-4d01-8426-5fbfd9f9fa88"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.819991 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "0636c9e2-4b00-4d01-8426-5fbfd9f9fa88" (UID: "0636c9e2-4b00-4d01-8426-5fbfd9f9fa88"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.822356 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "0636c9e2-4b00-4d01-8426-5fbfd9f9fa88" (UID: "0636c9e2-4b00-4d01-8426-5fbfd9f9fa88"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.822847 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "0636c9e2-4b00-4d01-8426-5fbfd9f9fa88" (UID: "0636c9e2-4b00-4d01-8426-5fbfd9f9fa88"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.823203 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "0636c9e2-4b00-4d01-8426-5fbfd9f9fa88" (UID: "0636c9e2-4b00-4d01-8426-5fbfd9f9fa88"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.823509 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "0636c9e2-4b00-4d01-8426-5fbfd9f9fa88" (UID: "0636c9e2-4b00-4d01-8426-5fbfd9f9fa88"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.823549 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "0636c9e2-4b00-4d01-8426-5fbfd9f9fa88" (UID: "0636c9e2-4b00-4d01-8426-5fbfd9f9fa88"). 
InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.823586 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "0636c9e2-4b00-4d01-8426-5fbfd9f9fa88" (UID: "0636c9e2-4b00-4d01-8426-5fbfd9f9fa88"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.823955 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "0636c9e2-4b00-4d01-8426-5fbfd9f9fa88" (UID: "0636c9e2-4b00-4d01-8426-5fbfd9f9fa88"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:00:22 crc kubenswrapper[4986]: E1203 13:00:22.874627 4986 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.112:6443: connect: connection refused" interval="400ms" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.915247 4986 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.915297 4986 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.915311 4986 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.915322 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnfjq\" (UniqueName: \"kubernetes.io/projected/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-kube-api-access-xnfjq\") on node \"crc\" DevicePath \"\"" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.915332 4986 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.915342 4986 reconciler_common.go:293] "Volume detached for 
volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.915352 4986 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.915360 4986 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.915370 4986 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.915381 4986 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.915389 4986 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.915411 4986 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.915419 
4986 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 03 13:00:22 crc kubenswrapper[4986]: I1203 13:00:22.915427 4986 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 03 13:00:23 crc kubenswrapper[4986]: I1203 13:00:23.091048 4986 generic.go:334] "Generic (PLEG): container finished" podID="0636c9e2-4b00-4d01-8426-5fbfd9f9fa88" containerID="60fb09eaea08025ec0c51955536e27a9b7699d4e13d804c5329d1b3e5991b8ae" exitCode=0 Dec 03 13:00:23 crc kubenswrapper[4986]: I1203 13:00:23.091148 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-2k592" Dec 03 13:00:23 crc kubenswrapper[4986]: I1203 13:00:23.091407 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-2k592" event={"ID":"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88","Type":"ContainerDied","Data":"60fb09eaea08025ec0c51955536e27a9b7699d4e13d804c5329d1b3e5991b8ae"} Dec 03 13:00:23 crc kubenswrapper[4986]: I1203 13:00:23.092045 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-2k592" event={"ID":"0636c9e2-4b00-4d01-8426-5fbfd9f9fa88","Type":"ContainerDied","Data":"c04830faa2b790bac276081c49f98734f272d64ecbd9fac160b26c2f16ad1470"} Dec 03 13:00:23 crc kubenswrapper[4986]: I1203 13:00:23.091921 4986 status_manager.go:851] "Failed to get status for pod" podUID="9e009819-9f9d-48db-a20b-dc29cef30887" pod="openshift-marketplace/redhat-operators-wnfnb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wnfnb\": dial tcp 
38.129.56.112:6443: connect: connection refused" Dec 03 13:00:23 crc kubenswrapper[4986]: I1203 13:00:23.092147 4986 scope.go:117] "RemoveContainer" containerID="60fb09eaea08025ec0c51955536e27a9b7699d4e13d804c5329d1b3e5991b8ae" Dec 03 13:00:23 crc kubenswrapper[4986]: I1203 13:00:23.092917 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"fb60d5f6-dca0-4f66-91c4-00dd32d26bcf","Type":"ContainerDied","Data":"dcddaf35f8441460dc577f13e9e88e0af5eb0387e55331ac2a0733c3a322c88e"} Dec 03 13:00:23 crc kubenswrapper[4986]: I1203 13:00:23.092941 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcddaf35f8441460dc577f13e9e88e0af5eb0387e55331ac2a0733c3a322c88e" Dec 03 13:00:23 crc kubenswrapper[4986]: I1203 13:00:23.092993 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 03 13:00:23 crc kubenswrapper[4986]: I1203 13:00:23.093524 4986 status_manager.go:851] "Failed to get status for pod" podUID="37f6dc70-6f91-4162-a225-239a999e4320" pod="openshift-marketplace/community-operators-bzs74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bzs74\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:23 crc kubenswrapper[4986]: I1203 13:00:23.093826 4986 status_manager.go:851] "Failed to get status for pod" podUID="0636c9e2-4b00-4d01-8426-5fbfd9f9fa88" pod="openshift-authentication/oauth-openshift-558db77b4-2k592" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-2k592\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:23 crc kubenswrapper[4986]: I1203 13:00:23.094042 4986 status_manager.go:851] "Failed to get status for pod" podUID="d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f" pod="openshift-marketplace/certified-operators-vzt5m" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vzt5m\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:23 crc kubenswrapper[4986]: I1203 13:00:23.094237 4986 status_manager.go:851] "Failed to get status for pod" podUID="bb0a57df-0313-4698-9e73-373c97e2fb72" pod="openshift-marketplace/certified-operators-9w6ts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9w6ts\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:23 crc kubenswrapper[4986]: I1203 13:00:23.094430 4986 status_manager.go:851] "Failed to get status for pod" podUID="49d6e247-25ce-45e1-b2fe-2e3ec70cf966" pod="openshift-marketplace/redhat-operators-dgwd8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-dgwd8\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:23 crc kubenswrapper[4986]: I1203 13:00:23.094573 4986 status_manager.go:851] "Failed to get status for pod" podUID="b2dbfdc4-7122-4b7f-bfdd-189396fb1c77" pod="openshift-marketplace/redhat-marketplace-l6bpp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-l6bpp\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:23 crc kubenswrapper[4986]: I1203 13:00:23.094744 4986 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:23 crc kubenswrapper[4986]: I1203 13:00:23.094883 4986 status_manager.go:851] "Failed to get status for pod" podUID="df90b001-135e-4c75-a67e-3084e905378a" pod="openshift-marketplace/redhat-marketplace-6mz6x" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6mz6x\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:23 crc kubenswrapper[4986]: I1203 13:00:23.095022 4986 status_manager.go:851] "Failed to get status for pod" podUID="fb60d5f6-dca0-4f66-91c4-00dd32d26bcf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:23 crc kubenswrapper[4986]: I1203 13:00:23.109305 4986 status_manager.go:851] "Failed to get status for pod" podUID="df90b001-135e-4c75-a67e-3084e905378a" pod="openshift-marketplace/redhat-marketplace-6mz6x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6mz6x\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:23 crc kubenswrapper[4986]: I1203 13:00:23.109652 4986 status_manager.go:851] "Failed to get status for pod" podUID="fb60d5f6-dca0-4f66-91c4-00dd32d26bcf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:23 crc kubenswrapper[4986]: I1203 13:00:23.109911 4986 status_manager.go:851] "Failed to get status for pod" podUID="9e009819-9f9d-48db-a20b-dc29cef30887" pod="openshift-marketplace/redhat-operators-wnfnb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wnfnb\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:23 crc kubenswrapper[4986]: I1203 13:00:23.110179 4986 status_manager.go:851] "Failed to get status for pod" podUID="0636c9e2-4b00-4d01-8426-5fbfd9f9fa88" pod="openshift-authentication/oauth-openshift-558db77b4-2k592" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-2k592\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:23 crc kubenswrapper[4986]: I1203 13:00:23.110491 4986 status_manager.go:851] "Failed to get status for pod" podUID="d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f" pod="openshift-marketplace/certified-operators-vzt5m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vzt5m\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:23 crc kubenswrapper[4986]: I1203 13:00:23.110695 4986 status_manager.go:851] "Failed to get status for pod" podUID="37f6dc70-6f91-4162-a225-239a999e4320" pod="openshift-marketplace/community-operators-bzs74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bzs74\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:23 crc kubenswrapper[4986]: I1203 13:00:23.110938 4986 status_manager.go:851] "Failed to get status for pod" podUID="bb0a57df-0313-4698-9e73-373c97e2fb72" pod="openshift-marketplace/certified-operators-9w6ts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9w6ts\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:23 crc kubenswrapper[4986]: I1203 13:00:23.111230 4986 status_manager.go:851] "Failed to get status for pod" podUID="49d6e247-25ce-45e1-b2fe-2e3ec70cf966" pod="openshift-marketplace/redhat-operators-dgwd8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-dgwd8\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:23 crc kubenswrapper[4986]: I1203 13:00:23.111482 4986 status_manager.go:851] "Failed to get status for pod" podUID="b2dbfdc4-7122-4b7f-bfdd-189396fb1c77" pod="openshift-marketplace/redhat-marketplace-l6bpp" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-l6bpp\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:23 crc kubenswrapper[4986]: I1203 13:00:23.111739 4986 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:23 crc kubenswrapper[4986]: I1203 13:00:23.112565 4986 status_manager.go:851] "Failed to get status for pod" podUID="0636c9e2-4b00-4d01-8426-5fbfd9f9fa88" pod="openshift-authentication/oauth-openshift-558db77b4-2k592" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-2k592\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:23 crc kubenswrapper[4986]: I1203 13:00:23.112801 4986 status_manager.go:851] "Failed to get status for pod" podUID="d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f" pod="openshift-marketplace/certified-operators-vzt5m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vzt5m\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:23 crc kubenswrapper[4986]: I1203 13:00:23.113035 4986 status_manager.go:851] "Failed to get status for pod" podUID="37f6dc70-6f91-4162-a225-239a999e4320" pod="openshift-marketplace/community-operators-bzs74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bzs74\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:23 crc kubenswrapper[4986]: I1203 13:00:23.113338 4986 status_manager.go:851] "Failed to get status for pod" podUID="bb0a57df-0313-4698-9e73-373c97e2fb72" 
pod="openshift-marketplace/certified-operators-9w6ts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9w6ts\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:23 crc kubenswrapper[4986]: I1203 13:00:23.113581 4986 status_manager.go:851] "Failed to get status for pod" podUID="49d6e247-25ce-45e1-b2fe-2e3ec70cf966" pod="openshift-marketplace/redhat-operators-dgwd8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-dgwd8\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:23 crc kubenswrapper[4986]: I1203 13:00:23.113830 4986 status_manager.go:851] "Failed to get status for pod" podUID="b2dbfdc4-7122-4b7f-bfdd-189396fb1c77" pod="openshift-marketplace/redhat-marketplace-l6bpp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-l6bpp\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:23 crc kubenswrapper[4986]: I1203 13:00:23.114062 4986 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:23 crc kubenswrapper[4986]: I1203 13:00:23.114334 4986 status_manager.go:851] "Failed to get status for pod" podUID="df90b001-135e-4c75-a67e-3084e905378a" pod="openshift-marketplace/redhat-marketplace-6mz6x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6mz6x\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:23 crc kubenswrapper[4986]: I1203 13:00:23.114561 4986 status_manager.go:851] "Failed to get status for pod" podUID="fb60d5f6-dca0-4f66-91c4-00dd32d26bcf" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:23 crc kubenswrapper[4986]: I1203 13:00:23.114784 4986 status_manager.go:851] "Failed to get status for pod" podUID="9e009819-9f9d-48db-a20b-dc29cef30887" pod="openshift-marketplace/redhat-operators-wnfnb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wnfnb\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:23 crc kubenswrapper[4986]: I1203 13:00:23.116880 4986 scope.go:117] "RemoveContainer" containerID="60fb09eaea08025ec0c51955536e27a9b7699d4e13d804c5329d1b3e5991b8ae" Dec 03 13:00:23 crc kubenswrapper[4986]: E1203 13:00:23.117508 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60fb09eaea08025ec0c51955536e27a9b7699d4e13d804c5329d1b3e5991b8ae\": container with ID starting with 60fb09eaea08025ec0c51955536e27a9b7699d4e13d804c5329d1b3e5991b8ae not found: ID does not exist" containerID="60fb09eaea08025ec0c51955536e27a9b7699d4e13d804c5329d1b3e5991b8ae" Dec 03 13:00:23 crc kubenswrapper[4986]: I1203 13:00:23.117546 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60fb09eaea08025ec0c51955536e27a9b7699d4e13d804c5329d1b3e5991b8ae"} err="failed to get container status \"60fb09eaea08025ec0c51955536e27a9b7699d4e13d804c5329d1b3e5991b8ae\": rpc error: code = NotFound desc = could not find container \"60fb09eaea08025ec0c51955536e27a9b7699d4e13d804c5329d1b3e5991b8ae\": container with ID starting with 60fb09eaea08025ec0c51955536e27a9b7699d4e13d804c5329d1b3e5991b8ae not found: ID does not exist" Dec 03 13:00:23 crc kubenswrapper[4986]: E1203 13:00:23.275554 4986 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.112:6443: connect: connection refused" interval="800ms" Dec 03 13:00:24 crc kubenswrapper[4986]: E1203 13:00:24.076132 4986 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.112:6443: connect: connection refused" interval="1.6s" Dec 03 13:00:24 crc kubenswrapper[4986]: I1203 13:00:24.099666 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dgwd8" event={"ID":"49d6e247-25ce-45e1-b2fe-2e3ec70cf966","Type":"ContainerStarted","Data":"ef9c551367e0d6d4d7c142f98707fdb054ce586d24cef4af590745134e6e6a3c"} Dec 03 13:00:24 crc kubenswrapper[4986]: I1203 13:00:24.100648 4986 status_manager.go:851] "Failed to get status for pod" podUID="d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f" pod="openshift-marketplace/certified-operators-vzt5m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vzt5m\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:24 crc kubenswrapper[4986]: I1203 13:00:24.100937 4986 status_manager.go:851] "Failed to get status for pod" podUID="37f6dc70-6f91-4162-a225-239a999e4320" pod="openshift-marketplace/community-operators-bzs74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bzs74\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:24 crc kubenswrapper[4986]: I1203 13:00:24.101099 4986 status_manager.go:851] "Failed to get status for pod" podUID="0636c9e2-4b00-4d01-8426-5fbfd9f9fa88" pod="openshift-authentication/oauth-openshift-558db77b4-2k592" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-2k592\": 
dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:24 crc kubenswrapper[4986]: I1203 13:00:24.101378 4986 status_manager.go:851] "Failed to get status for pod" podUID="bb0a57df-0313-4698-9e73-373c97e2fb72" pod="openshift-marketplace/certified-operators-9w6ts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9w6ts\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:24 crc kubenswrapper[4986]: I1203 13:00:24.101528 4986 status_manager.go:851] "Failed to get status for pod" podUID="49d6e247-25ce-45e1-b2fe-2e3ec70cf966" pod="openshift-marketplace/redhat-operators-dgwd8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-dgwd8\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:24 crc kubenswrapper[4986]: I1203 13:00:24.101678 4986 status_manager.go:851] "Failed to get status for pod" podUID="b2dbfdc4-7122-4b7f-bfdd-189396fb1c77" pod="openshift-marketplace/redhat-marketplace-l6bpp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-l6bpp\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:24 crc kubenswrapper[4986]: I1203 13:00:24.101819 4986 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:24 crc kubenswrapper[4986]: I1203 13:00:24.101963 4986 status_manager.go:851] "Failed to get status for pod" podUID="df90b001-135e-4c75-a67e-3084e905378a" pod="openshift-marketplace/redhat-marketplace-6mz6x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6mz6x\": 
dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:24 crc kubenswrapper[4986]: I1203 13:00:24.102099 4986 status_manager.go:851] "Failed to get status for pod" podUID="fb60d5f6-dca0-4f66-91c4-00dd32d26bcf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:24 crc kubenswrapper[4986]: I1203 13:00:24.102233 4986 status_manager.go:851] "Failed to get status for pod" podUID="9e009819-9f9d-48db-a20b-dc29cef30887" pod="openshift-marketplace/redhat-operators-wnfnb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wnfnb\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:25 crc kubenswrapper[4986]: E1203 13:00:25.676865 4986 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.112:6443: connect: connection refused" interval="3.2s" Dec 03 13:00:27 crc kubenswrapper[4986]: E1203 13:00:27.073322 4986 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events/community-operators-bzs74.187db60fd619f4de\": dial tcp 38.129.56.112:6443: connect: connection refused" event="&Event{ObjectMeta:{community-operators-bzs74.187db60fd619f4de openshift-marketplace 29285 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-bzs74,UID:37f6dc70-6f91-4162-a225-239a999e4320,APIVersion:v1,ResourceVersion:28181,FieldPath:spec.containers{registry-server},},Reason:Unhealthy,Message:Readiness probe errored: rpc error: code = NotFound desc = container is not created or running: checking 
if PID of 0cad1c7c53e6c4dfc96a320d019c45fe9bb0847f033b9676e62422dc214c3241 is running failed: container process not found,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-03 13:00:08 +0000 UTC,LastTimestamp:2025-12-03 13:00:18.132559323 +0000 UTC m=+277.598990554,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 03 13:00:27 crc kubenswrapper[4986]: E1203 13:00:27.624717 4986 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T13:00:27Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T13:00:27Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T13:00:27Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T13:00:27Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:416a7dc57b2b95775e679e0ab93111baaa063e55a4c6d73856a248d85a2debbd\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:5cdd97eed164a2eda9842fb91d284f06d2e63d69af9a98001fca2d6cebd0b52a\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1609873225},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209948291
9d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:446e5d504e70c7963ef7b0f090f3fcb19847ef48150299bf030847565d7a579b\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a01ee07f4838bab6cfa5a3d25d867557aa271725bfcd20a1e52d3cc63423c06b\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1204969293},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:dba4a8e0293f7b6e1459b74484a7126274bdc9efa75f808eb15e5a3896a3c818\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:eee64597300e249e4bead6abda4d235cc4fc3a87be82e3e9f582609602ed87d7\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1201319250},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:e8990432556acad31519b1a73ec32f32d27c2034cf9e5cc4db8980efc7331594\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:ebe9f523f5c211a3a0f2570331dddcd5be15b12c1fecd9b8b121f881bfaad029\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1129027903},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae
34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b8
2799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\
\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:27 crc kubenswrapper[4986]: E1203 13:00:27.626508 4986 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:27 crc kubenswrapper[4986]: E1203 13:00:27.627171 4986 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:27 crc kubenswrapper[4986]: E1203 13:00:27.627777 4986 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:27 crc kubenswrapper[4986]: E1203 
13:00:27.628252 4986 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:27 crc kubenswrapper[4986]: E1203 13:00:27.628318 4986 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 13:00:28 crc kubenswrapper[4986]: I1203 13:00:28.279542 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vzt5m" Dec 03 13:00:28 crc kubenswrapper[4986]: I1203 13:00:28.279607 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vzt5m" Dec 03 13:00:28 crc kubenswrapper[4986]: I1203 13:00:28.323714 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vzt5m" Dec 03 13:00:28 crc kubenswrapper[4986]: I1203 13:00:28.324513 4986 status_manager.go:851] "Failed to get status for pod" podUID="0636c9e2-4b00-4d01-8426-5fbfd9f9fa88" pod="openshift-authentication/oauth-openshift-558db77b4-2k592" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-2k592\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:28 crc kubenswrapper[4986]: I1203 13:00:28.324979 4986 status_manager.go:851] "Failed to get status for pod" podUID="d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f" pod="openshift-marketplace/certified-operators-vzt5m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vzt5m\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:28 crc kubenswrapper[4986]: I1203 13:00:28.325437 4986 status_manager.go:851] "Failed to get status for pod" podUID="37f6dc70-6f91-4162-a225-239a999e4320" 
pod="openshift-marketplace/community-operators-bzs74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bzs74\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:28 crc kubenswrapper[4986]: I1203 13:00:28.325738 4986 status_manager.go:851] "Failed to get status for pod" podUID="bb0a57df-0313-4698-9e73-373c97e2fb72" pod="openshift-marketplace/certified-operators-9w6ts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9w6ts\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:28 crc kubenswrapper[4986]: I1203 13:00:28.326016 4986 status_manager.go:851] "Failed to get status for pod" podUID="49d6e247-25ce-45e1-b2fe-2e3ec70cf966" pod="openshift-marketplace/redhat-operators-dgwd8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-dgwd8\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:28 crc kubenswrapper[4986]: I1203 13:00:28.326373 4986 status_manager.go:851] "Failed to get status for pod" podUID="b2dbfdc4-7122-4b7f-bfdd-189396fb1c77" pod="openshift-marketplace/redhat-marketplace-l6bpp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-l6bpp\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:28 crc kubenswrapper[4986]: I1203 13:00:28.326690 4986 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:28 crc kubenswrapper[4986]: I1203 13:00:28.326979 4986 status_manager.go:851] "Failed to get status for pod" 
podUID="df90b001-135e-4c75-a67e-3084e905378a" pod="openshift-marketplace/redhat-marketplace-6mz6x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6mz6x\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:28 crc kubenswrapper[4986]: I1203 13:00:28.327293 4986 status_manager.go:851] "Failed to get status for pod" podUID="fb60d5f6-dca0-4f66-91c4-00dd32d26bcf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:28 crc kubenswrapper[4986]: I1203 13:00:28.327587 4986 status_manager.go:851] "Failed to get status for pod" podUID="9e009819-9f9d-48db-a20b-dc29cef30887" pod="openshift-marketplace/redhat-operators-wnfnb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wnfnb\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:28 crc kubenswrapper[4986]: E1203 13:00:28.878497 4986 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.112:6443: connect: connection refused" interval="6.4s" Dec 03 13:00:29 crc kubenswrapper[4986]: I1203 13:00:29.162435 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vzt5m" Dec 03 13:00:29 crc kubenswrapper[4986]: I1203 13:00:29.163255 4986 status_manager.go:851] "Failed to get status for pod" podUID="d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f" pod="openshift-marketplace/certified-operators-vzt5m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vzt5m\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:29 crc 
kubenswrapper[4986]: I1203 13:00:29.163667 4986 status_manager.go:851] "Failed to get status for pod" podUID="37f6dc70-6f91-4162-a225-239a999e4320" pod="openshift-marketplace/community-operators-bzs74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bzs74\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:29 crc kubenswrapper[4986]: I1203 13:00:29.164027 4986 status_manager.go:851] "Failed to get status for pod" podUID="0636c9e2-4b00-4d01-8426-5fbfd9f9fa88" pod="openshift-authentication/oauth-openshift-558db77b4-2k592" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-2k592\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:29 crc kubenswrapper[4986]: I1203 13:00:29.164366 4986 status_manager.go:851] "Failed to get status for pod" podUID="bb0a57df-0313-4698-9e73-373c97e2fb72" pod="openshift-marketplace/certified-operators-9w6ts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9w6ts\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:29 crc kubenswrapper[4986]: I1203 13:00:29.164671 4986 status_manager.go:851] "Failed to get status for pod" podUID="49d6e247-25ce-45e1-b2fe-2e3ec70cf966" pod="openshift-marketplace/redhat-operators-dgwd8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-dgwd8\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:29 crc kubenswrapper[4986]: I1203 13:00:29.164958 4986 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.112:6443: connect: connection 
refused" Dec 03 13:00:29 crc kubenswrapper[4986]: I1203 13:00:29.165204 4986 status_manager.go:851] "Failed to get status for pod" podUID="b2dbfdc4-7122-4b7f-bfdd-189396fb1c77" pod="openshift-marketplace/redhat-marketplace-l6bpp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-l6bpp\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:29 crc kubenswrapper[4986]: I1203 13:00:29.165486 4986 status_manager.go:851] "Failed to get status for pod" podUID="df90b001-135e-4c75-a67e-3084e905378a" pod="openshift-marketplace/redhat-marketplace-6mz6x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6mz6x\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:29 crc kubenswrapper[4986]: I1203 13:00:29.166015 4986 status_manager.go:851] "Failed to get status for pod" podUID="fb60d5f6-dca0-4f66-91c4-00dd32d26bcf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:29 crc kubenswrapper[4986]: I1203 13:00:29.166272 4986 status_manager.go:851] "Failed to get status for pod" podUID="9e009819-9f9d-48db-a20b-dc29cef30887" pod="openshift-marketplace/redhat-operators-wnfnb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wnfnb\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:29 crc kubenswrapper[4986]: I1203 13:00:29.705165 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-l6bpp" Dec 03 13:00:29 crc kubenswrapper[4986]: I1203 13:00:29.705626 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-l6bpp" Dec 03 13:00:29 crc kubenswrapper[4986]: 
I1203 13:00:29.752909 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-l6bpp" Dec 03 13:00:29 crc kubenswrapper[4986]: I1203 13:00:29.753462 4986 status_manager.go:851] "Failed to get status for pod" podUID="0636c9e2-4b00-4d01-8426-5fbfd9f9fa88" pod="openshift-authentication/oauth-openshift-558db77b4-2k592" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-2k592\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:29 crc kubenswrapper[4986]: I1203 13:00:29.753908 4986 status_manager.go:851] "Failed to get status for pod" podUID="d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f" pod="openshift-marketplace/certified-operators-vzt5m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vzt5m\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:29 crc kubenswrapper[4986]: I1203 13:00:29.754473 4986 status_manager.go:851] "Failed to get status for pod" podUID="37f6dc70-6f91-4162-a225-239a999e4320" pod="openshift-marketplace/community-operators-bzs74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bzs74\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:29 crc kubenswrapper[4986]: I1203 13:00:29.754785 4986 status_manager.go:851] "Failed to get status for pod" podUID="bb0a57df-0313-4698-9e73-373c97e2fb72" pod="openshift-marketplace/certified-operators-9w6ts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9w6ts\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:29 crc kubenswrapper[4986]: I1203 13:00:29.755220 4986 status_manager.go:851] "Failed to get status for pod" podUID="49d6e247-25ce-45e1-b2fe-2e3ec70cf966" pod="openshift-marketplace/redhat-operators-dgwd8" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-dgwd8\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:29 crc kubenswrapper[4986]: I1203 13:00:29.755513 4986 status_manager.go:851] "Failed to get status for pod" podUID="b2dbfdc4-7122-4b7f-bfdd-189396fb1c77" pod="openshift-marketplace/redhat-marketplace-l6bpp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-l6bpp\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:29 crc kubenswrapper[4986]: I1203 13:00:29.755839 4986 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:29 crc kubenswrapper[4986]: I1203 13:00:29.756526 4986 status_manager.go:851] "Failed to get status for pod" podUID="df90b001-135e-4c75-a67e-3084e905378a" pod="openshift-marketplace/redhat-marketplace-6mz6x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6mz6x\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:29 crc kubenswrapper[4986]: I1203 13:00:29.756942 4986 status_manager.go:851] "Failed to get status for pod" podUID="fb60d5f6-dca0-4f66-91c4-00dd32d26bcf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:29 crc kubenswrapper[4986]: I1203 13:00:29.757274 4986 status_manager.go:851] "Failed to get status for pod" podUID="9e009819-9f9d-48db-a20b-dc29cef30887" pod="openshift-marketplace/redhat-operators-wnfnb" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wnfnb\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:30 crc kubenswrapper[4986]: I1203 13:00:30.171300 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-l6bpp" Dec 03 13:00:30 crc kubenswrapper[4986]: I1203 13:00:30.171952 4986 status_manager.go:851] "Failed to get status for pod" podUID="0636c9e2-4b00-4d01-8426-5fbfd9f9fa88" pod="openshift-authentication/oauth-openshift-558db77b4-2k592" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-2k592\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:30 crc kubenswrapper[4986]: I1203 13:00:30.172569 4986 status_manager.go:851] "Failed to get status for pod" podUID="d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f" pod="openshift-marketplace/certified-operators-vzt5m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vzt5m\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:30 crc kubenswrapper[4986]: I1203 13:00:30.172882 4986 status_manager.go:851] "Failed to get status for pod" podUID="37f6dc70-6f91-4162-a225-239a999e4320" pod="openshift-marketplace/community-operators-bzs74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bzs74\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:30 crc kubenswrapper[4986]: I1203 13:00:30.173141 4986 status_manager.go:851] "Failed to get status for pod" podUID="bb0a57df-0313-4698-9e73-373c97e2fb72" pod="openshift-marketplace/certified-operators-9w6ts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9w6ts\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:30 crc 
kubenswrapper[4986]: I1203 13:00:30.173428 4986 status_manager.go:851] "Failed to get status for pod" podUID="49d6e247-25ce-45e1-b2fe-2e3ec70cf966" pod="openshift-marketplace/redhat-operators-dgwd8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-dgwd8\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:30 crc kubenswrapper[4986]: I1203 13:00:30.173632 4986 status_manager.go:851] "Failed to get status for pod" podUID="b2dbfdc4-7122-4b7f-bfdd-189396fb1c77" pod="openshift-marketplace/redhat-marketplace-l6bpp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-l6bpp\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:30 crc kubenswrapper[4986]: I1203 13:00:30.173852 4986 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:30 crc kubenswrapper[4986]: I1203 13:00:30.174168 4986 status_manager.go:851] "Failed to get status for pod" podUID="df90b001-135e-4c75-a67e-3084e905378a" pod="openshift-marketplace/redhat-marketplace-6mz6x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6mz6x\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:30 crc kubenswrapper[4986]: I1203 13:00:30.174462 4986 status_manager.go:851] "Failed to get status for pod" podUID="fb60d5f6-dca0-4f66-91c4-00dd32d26bcf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:30 crc 
kubenswrapper[4986]: I1203 13:00:30.174746 4986 status_manager.go:851] "Failed to get status for pod" podUID="9e009819-9f9d-48db-a20b-dc29cef30887" pod="openshift-marketplace/redhat-operators-wnfnb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wnfnb\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:30 crc kubenswrapper[4986]: I1203 13:00:30.844428 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dgwd8" Dec 03 13:00:30 crc kubenswrapper[4986]: I1203 13:00:30.844499 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dgwd8" Dec 03 13:00:30 crc kubenswrapper[4986]: I1203 13:00:30.877924 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dgwd8" Dec 03 13:00:30 crc kubenswrapper[4986]: I1203 13:00:30.878465 4986 status_manager.go:851] "Failed to get status for pod" podUID="d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f" pod="openshift-marketplace/certified-operators-vzt5m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vzt5m\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:30 crc kubenswrapper[4986]: I1203 13:00:30.878874 4986 status_manager.go:851] "Failed to get status for pod" podUID="37f6dc70-6f91-4162-a225-239a999e4320" pod="openshift-marketplace/community-operators-bzs74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bzs74\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:30 crc kubenswrapper[4986]: I1203 13:00:30.879247 4986 status_manager.go:851] "Failed to get status for pod" podUID="0636c9e2-4b00-4d01-8426-5fbfd9f9fa88" pod="openshift-authentication/oauth-openshift-558db77b4-2k592" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-2k592\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:30 crc kubenswrapper[4986]: I1203 13:00:30.879516 4986 status_manager.go:851] "Failed to get status for pod" podUID="bb0a57df-0313-4698-9e73-373c97e2fb72" pod="openshift-marketplace/certified-operators-9w6ts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9w6ts\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:30 crc kubenswrapper[4986]: I1203 13:00:30.879745 4986 status_manager.go:851] "Failed to get status for pod" podUID="49d6e247-25ce-45e1-b2fe-2e3ec70cf966" pod="openshift-marketplace/redhat-operators-dgwd8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-dgwd8\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:30 crc kubenswrapper[4986]: I1203 13:00:30.879982 4986 status_manager.go:851] "Failed to get status for pod" podUID="b2dbfdc4-7122-4b7f-bfdd-189396fb1c77" pod="openshift-marketplace/redhat-marketplace-l6bpp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-l6bpp\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:30 crc kubenswrapper[4986]: I1203 13:00:30.880206 4986 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:30 crc kubenswrapper[4986]: I1203 13:00:30.880408 4986 status_manager.go:851] "Failed to get status for pod" podUID="df90b001-135e-4c75-a67e-3084e905378a" 
pod="openshift-marketplace/redhat-marketplace-6mz6x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6mz6x\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:30 crc kubenswrapper[4986]: I1203 13:00:30.880633 4986 status_manager.go:851] "Failed to get status for pod" podUID="fb60d5f6-dca0-4f66-91c4-00dd32d26bcf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:30 crc kubenswrapper[4986]: I1203 13:00:30.880885 4986 status_manager.go:851] "Failed to get status for pod" podUID="9e009819-9f9d-48db-a20b-dc29cef30887" pod="openshift-marketplace/redhat-operators-wnfnb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wnfnb\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:30 crc kubenswrapper[4986]: I1203 13:00:30.949805 4986 status_manager.go:851] "Failed to get status for pod" podUID="0636c9e2-4b00-4d01-8426-5fbfd9f9fa88" pod="openshift-authentication/oauth-openshift-558db77b4-2k592" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-2k592\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:30 crc kubenswrapper[4986]: I1203 13:00:30.950348 4986 status_manager.go:851] "Failed to get status for pod" podUID="d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f" pod="openshift-marketplace/certified-operators-vzt5m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vzt5m\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:30 crc kubenswrapper[4986]: I1203 13:00:30.950805 4986 status_manager.go:851] "Failed to get status for pod" podUID="37f6dc70-6f91-4162-a225-239a999e4320" 
pod="openshift-marketplace/community-operators-bzs74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bzs74\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:30 crc kubenswrapper[4986]: I1203 13:00:30.951217 4986 status_manager.go:851] "Failed to get status for pod" podUID="bb0a57df-0313-4698-9e73-373c97e2fb72" pod="openshift-marketplace/certified-operators-9w6ts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9w6ts\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:30 crc kubenswrapper[4986]: I1203 13:00:30.951573 4986 status_manager.go:851] "Failed to get status for pod" podUID="49d6e247-25ce-45e1-b2fe-2e3ec70cf966" pod="openshift-marketplace/redhat-operators-dgwd8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-dgwd8\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:30 crc kubenswrapper[4986]: I1203 13:00:30.952161 4986 status_manager.go:851] "Failed to get status for pod" podUID="b2dbfdc4-7122-4b7f-bfdd-189396fb1c77" pod="openshift-marketplace/redhat-marketplace-l6bpp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-l6bpp\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:30 crc kubenswrapper[4986]: I1203 13:00:30.952981 4986 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:30 crc kubenswrapper[4986]: I1203 13:00:30.953586 4986 status_manager.go:851] "Failed to get status for pod" 
podUID="df90b001-135e-4c75-a67e-3084e905378a" pod="openshift-marketplace/redhat-marketplace-6mz6x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6mz6x\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:30 crc kubenswrapper[4986]: I1203 13:00:30.953971 4986 status_manager.go:851] "Failed to get status for pod" podUID="fb60d5f6-dca0-4f66-91c4-00dd32d26bcf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:30 crc kubenswrapper[4986]: I1203 13:00:30.954483 4986 status_manager.go:851] "Failed to get status for pod" podUID="9e009819-9f9d-48db-a20b-dc29cef30887" pod="openshift-marketplace/redhat-operators-wnfnb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wnfnb\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.139741 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.139804 4986 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="b86d62d991d6516ed42f7cad655018e1ebc85c8c1b100d624c1c2e5e8f616cf4" exitCode=1 Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.140333 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"b86d62d991d6516ed42f7cad655018e1ebc85c8c1b100d624c1c2e5e8f616cf4"} Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.141117 4986 scope.go:117] "RemoveContainer" 
containerID="b86d62d991d6516ed42f7cad655018e1ebc85c8c1b100d624c1c2e5e8f616cf4" Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.141239 4986 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.142273 4986 status_manager.go:851] "Failed to get status for pod" podUID="d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f" pod="openshift-marketplace/certified-operators-vzt5m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vzt5m\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.142766 4986 status_manager.go:851] "Failed to get status for pod" podUID="37f6dc70-6f91-4162-a225-239a999e4320" pod="openshift-marketplace/community-operators-bzs74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bzs74\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.143165 4986 status_manager.go:851] "Failed to get status for pod" podUID="0636c9e2-4b00-4d01-8426-5fbfd9f9fa88" pod="openshift-authentication/oauth-openshift-558db77b4-2k592" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-2k592\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.143443 4986 status_manager.go:851] "Failed to get status for pod" podUID="bb0a57df-0313-4698-9e73-373c97e2fb72" pod="openshift-marketplace/certified-operators-9w6ts" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9w6ts\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.143722 4986 status_manager.go:851] "Failed to get status for pod" podUID="49d6e247-25ce-45e1-b2fe-2e3ec70cf966" pod="openshift-marketplace/redhat-operators-dgwd8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-dgwd8\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.144035 4986 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.144245 4986 status_manager.go:851] "Failed to get status for pod" podUID="b2dbfdc4-7122-4b7f-bfdd-189396fb1c77" pod="openshift-marketplace/redhat-marketplace-l6bpp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-l6bpp\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.144655 4986 status_manager.go:851] "Failed to get status for pod" podUID="df90b001-135e-4c75-a67e-3084e905378a" pod="openshift-marketplace/redhat-marketplace-6mz6x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6mz6x\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.145354 4986 status_manager.go:851] "Failed to get status for pod" podUID="fb60d5f6-dca0-4f66-91c4-00dd32d26bcf" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.145724 4986 status_manager.go:851] "Failed to get status for pod" podUID="9e009819-9f9d-48db-a20b-dc29cef30887" pod="openshift-marketplace/redhat-operators-wnfnb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wnfnb\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.184268 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dgwd8" Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.184961 4986 status_manager.go:851] "Failed to get status for pod" podUID="bb0a57df-0313-4698-9e73-373c97e2fb72" pod="openshift-marketplace/certified-operators-9w6ts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9w6ts\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.185610 4986 status_manager.go:851] "Failed to get status for pod" podUID="49d6e247-25ce-45e1-b2fe-2e3ec70cf966" pod="openshift-marketplace/redhat-operators-dgwd8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-dgwd8\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.185803 4986 status_manager.go:851] "Failed to get status for pod" podUID="b2dbfdc4-7122-4b7f-bfdd-189396fb1c77" pod="openshift-marketplace/redhat-marketplace-l6bpp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-l6bpp\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.185956 4986 
status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.186180 4986 status_manager.go:851] "Failed to get status for pod" podUID="df90b001-135e-4c75-a67e-3084e905378a" pod="openshift-marketplace/redhat-marketplace-6mz6x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6mz6x\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.186497 4986 status_manager.go:851] "Failed to get status for pod" podUID="fb60d5f6-dca0-4f66-91c4-00dd32d26bcf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.187324 4986 status_manager.go:851] "Failed to get status for pod" podUID="9e009819-9f9d-48db-a20b-dc29cef30887" pod="openshift-marketplace/redhat-operators-wnfnb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wnfnb\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.187525 4986 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 
13:00:31.187674 4986 status_manager.go:851] "Failed to get status for pod" podUID="0636c9e2-4b00-4d01-8426-5fbfd9f9fa88" pod="openshift-authentication/oauth-openshift-558db77b4-2k592" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-2k592\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.187916 4986 status_manager.go:851] "Failed to get status for pod" podUID="d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f" pod="openshift-marketplace/certified-operators-vzt5m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vzt5m\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.188156 4986 status_manager.go:851] "Failed to get status for pod" podUID="37f6dc70-6f91-4162-a225-239a999e4320" pod="openshift-marketplace/community-operators-bzs74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bzs74\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.290911 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wnfnb" Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.291526 4986 status_manager.go:851] "Failed to get status for pod" podUID="bb0a57df-0313-4698-9e73-373c97e2fb72" pod="openshift-marketplace/certified-operators-9w6ts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9w6ts\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.291841 4986 status_manager.go:851] "Failed to get status for pod" podUID="49d6e247-25ce-45e1-b2fe-2e3ec70cf966" pod="openshift-marketplace/redhat-operators-dgwd8" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-dgwd8\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.292318 4986 status_manager.go:851] "Failed to get status for pod" podUID="b2dbfdc4-7122-4b7f-bfdd-189396fb1c77" pod="openshift-marketplace/redhat-marketplace-l6bpp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-l6bpp\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.292660 4986 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.293039 4986 status_manager.go:851] "Failed to get status for pod" podUID="df90b001-135e-4c75-a67e-3084e905378a" pod="openshift-marketplace/redhat-marketplace-6mz6x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6mz6x\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.293369 4986 status_manager.go:851] "Failed to get status for pod" podUID="fb60d5f6-dca0-4f66-91c4-00dd32d26bcf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.293670 4986 status_manager.go:851] "Failed to get status for pod" podUID="9e009819-9f9d-48db-a20b-dc29cef30887" pod="openshift-marketplace/redhat-operators-wnfnb" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wnfnb\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.293945 4986 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.294184 4986 status_manager.go:851] "Failed to get status for pod" podUID="0636c9e2-4b00-4d01-8426-5fbfd9f9fa88" pod="openshift-authentication/oauth-openshift-558db77b4-2k592" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-2k592\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.294462 4986 status_manager.go:851] "Failed to get status for pod" podUID="d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f" pod="openshift-marketplace/certified-operators-vzt5m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vzt5m\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.294734 4986 status_manager.go:851] "Failed to get status for pod" podUID="37f6dc70-6f91-4162-a225-239a999e4320" pod="openshift-marketplace/community-operators-bzs74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bzs74\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.330618 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wnfnb" Dec 03 
13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.331264 4986 status_manager.go:851] "Failed to get status for pod" podUID="d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f" pod="openshift-marketplace/certified-operators-vzt5m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vzt5m\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.331616 4986 status_manager.go:851] "Failed to get status for pod" podUID="37f6dc70-6f91-4162-a225-239a999e4320" pod="openshift-marketplace/community-operators-bzs74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bzs74\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.332063 4986 status_manager.go:851] "Failed to get status for pod" podUID="0636c9e2-4b00-4d01-8426-5fbfd9f9fa88" pod="openshift-authentication/oauth-openshift-558db77b4-2k592" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-2k592\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.332278 4986 status_manager.go:851] "Failed to get status for pod" podUID="bb0a57df-0313-4698-9e73-373c97e2fb72" pod="openshift-marketplace/certified-operators-9w6ts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9w6ts\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.332590 4986 status_manager.go:851] "Failed to get status for pod" podUID="49d6e247-25ce-45e1-b2fe-2e3ec70cf966" pod="openshift-marketplace/redhat-operators-dgwd8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-dgwd8\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 
03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.332862 4986 status_manager.go:851] "Failed to get status for pod" podUID="b2dbfdc4-7122-4b7f-bfdd-189396fb1c77" pod="openshift-marketplace/redhat-marketplace-l6bpp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-l6bpp\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.333136 4986 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.333397 4986 status_manager.go:851] "Failed to get status for pod" podUID="df90b001-135e-4c75-a67e-3084e905378a" pod="openshift-marketplace/redhat-marketplace-6mz6x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6mz6x\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.333710 4986 status_manager.go:851] "Failed to get status for pod" podUID="fb60d5f6-dca0-4f66-91c4-00dd32d26bcf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.334017 4986 status_manager.go:851] "Failed to get status for pod" podUID="9e009819-9f9d-48db-a20b-dc29cef30887" pod="openshift-marketplace/redhat-operators-wnfnb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wnfnb\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:31 
crc kubenswrapper[4986]: I1203 13:00:31.334355 4986 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.943358 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.944178 4986 status_manager.go:851] "Failed to get status for pod" podUID="bb0a57df-0313-4698-9e73-373c97e2fb72" pod="openshift-marketplace/certified-operators-9w6ts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9w6ts\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.944792 4986 status_manager.go:851] "Failed to get status for pod" podUID="49d6e247-25ce-45e1-b2fe-2e3ec70cf966" pod="openshift-marketplace/redhat-operators-dgwd8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-dgwd8\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.945414 4986 status_manager.go:851] "Failed to get status for pod" podUID="b2dbfdc4-7122-4b7f-bfdd-189396fb1c77" pod="openshift-marketplace/redhat-marketplace-l6bpp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-l6bpp\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.945931 4986 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.946189 4986 status_manager.go:851] "Failed to get status for pod" podUID="df90b001-135e-4c75-a67e-3084e905378a" pod="openshift-marketplace/redhat-marketplace-6mz6x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6mz6x\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.946504 4986 status_manager.go:851] "Failed to get status for pod" podUID="fb60d5f6-dca0-4f66-91c4-00dd32d26bcf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.946924 4986 status_manager.go:851] "Failed to get status for pod" podUID="9e009819-9f9d-48db-a20b-dc29cef30887" pod="openshift-marketplace/redhat-operators-wnfnb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wnfnb\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.947202 4986 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.947558 4986 status_manager.go:851] "Failed to get status for pod" 
podUID="0636c9e2-4b00-4d01-8426-5fbfd9f9fa88" pod="openshift-authentication/oauth-openshift-558db77b4-2k592" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-2k592\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.948006 4986 status_manager.go:851] "Failed to get status for pod" podUID="d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f" pod="openshift-marketplace/certified-operators-vzt5m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vzt5m\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.948647 4986 status_manager.go:851] "Failed to get status for pod" podUID="37f6dc70-6f91-4162-a225-239a999e4320" pod="openshift-marketplace/community-operators-bzs74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bzs74\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.964237 4986 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="231ed674-1d19-4ee2-b29c-a1b7453ed531" Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.964265 4986 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="231ed674-1d19-4ee2-b29c-a1b7453ed531" Dec 03 13:00:31 crc kubenswrapper[4986]: E1203 13:00:31.964674 4986 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 13:00:31 crc kubenswrapper[4986]: I1203 13:00:31.965171 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 13:00:31 crc kubenswrapper[4986]: W1203 13:00:31.981993 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-b3879f46941d0b78d2d391c8546da2b9d45211dd8c5d6c9f084a4ce761bdca2c WatchSource:0}: Error finding container b3879f46941d0b78d2d391c8546da2b9d45211dd8c5d6c9f084a4ce761bdca2c: Status 404 returned error can't find the container with id b3879f46941d0b78d2d391c8546da2b9d45211dd8c5d6c9f084a4ce761bdca2c Dec 03 13:00:32 crc kubenswrapper[4986]: I1203 13:00:32.146864 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b3879f46941d0b78d2d391c8546da2b9d45211dd8c5d6c9f084a4ce761bdca2c"} Dec 03 13:00:32 crc kubenswrapper[4986]: I1203 13:00:32.150342 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 03 13:00:32 crc kubenswrapper[4986]: I1203 13:00:32.150449 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"768cfc71a151effff9936ec41adefea5f29723a2da73a86842ab285859280cdf"} Dec 03 13:00:34 crc kubenswrapper[4986]: I1203 13:00:34.205546 4986 status_manager.go:851] "Failed to get status for pod" podUID="0636c9e2-4b00-4d01-8426-5fbfd9f9fa88" pod="openshift-authentication/oauth-openshift-558db77b4-2k592" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-2k592\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:34 crc kubenswrapper[4986]: I1203 13:00:34.205831 4986 
status_manager.go:851] "Failed to get status for pod" podUID="d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f" pod="openshift-marketplace/certified-operators-vzt5m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vzt5m\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:34 crc kubenswrapper[4986]: I1203 13:00:34.206178 4986 status_manager.go:851] "Failed to get status for pod" podUID="37f6dc70-6f91-4162-a225-239a999e4320" pod="openshift-marketplace/community-operators-bzs74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bzs74\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:34 crc kubenswrapper[4986]: I1203 13:00:34.206379 4986 status_manager.go:851] "Failed to get status for pod" podUID="bb0a57df-0313-4698-9e73-373c97e2fb72" pod="openshift-marketplace/certified-operators-9w6ts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9w6ts\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:34 crc kubenswrapper[4986]: I1203 13:00:34.206548 4986 status_manager.go:851] "Failed to get status for pod" podUID="49d6e247-25ce-45e1-b2fe-2e3ec70cf966" pod="openshift-marketplace/redhat-operators-dgwd8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-dgwd8\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:34 crc kubenswrapper[4986]: I1203 13:00:34.206723 4986 status_manager.go:851] "Failed to get status for pod" podUID="b2dbfdc4-7122-4b7f-bfdd-189396fb1c77" pod="openshift-marketplace/redhat-marketplace-l6bpp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-l6bpp\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:34 crc kubenswrapper[4986]: I1203 13:00:34.206897 4986 
status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:34 crc kubenswrapper[4986]: I1203 13:00:34.207095 4986 status_manager.go:851] "Failed to get status for pod" podUID="df90b001-135e-4c75-a67e-3084e905378a" pod="openshift-marketplace/redhat-marketplace-6mz6x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6mz6x\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:34 crc kubenswrapper[4986]: I1203 13:00:34.207297 4986 status_manager.go:851] "Failed to get status for pod" podUID="fb60d5f6-dca0-4f66-91c4-00dd32d26bcf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:34 crc kubenswrapper[4986]: I1203 13:00:34.207820 4986 status_manager.go:851] "Failed to get status for pod" podUID="9e009819-9f9d-48db-a20b-dc29cef30887" pod="openshift-marketplace/redhat-operators-wnfnb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wnfnb\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:34 crc kubenswrapper[4986]: I1203 13:00:34.208178 4986 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:35 crc kubenswrapper[4986]: I1203 
13:00:35.210100 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4b6dc57f3e88c3c161f9e70f3812b77960e59e43536f5f95ab9dd6f4c5ba8fd1"} Dec 03 13:00:35 crc kubenswrapper[4986]: E1203 13:00:35.279875 4986 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.112:6443: connect: connection refused" interval="7s" Dec 03 13:00:36 crc kubenswrapper[4986]: I1203 13:00:36.216156 4986 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="4b6dc57f3e88c3c161f9e70f3812b77960e59e43536f5f95ab9dd6f4c5ba8fd1" exitCode=0 Dec 03 13:00:36 crc kubenswrapper[4986]: I1203 13:00:36.216198 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"4b6dc57f3e88c3c161f9e70f3812b77960e59e43536f5f95ab9dd6f4c5ba8fd1"} Dec 03 13:00:36 crc kubenswrapper[4986]: I1203 13:00:36.216468 4986 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="231ed674-1d19-4ee2-b29c-a1b7453ed531" Dec 03 13:00:36 crc kubenswrapper[4986]: I1203 13:00:36.216494 4986 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="231ed674-1d19-4ee2-b29c-a1b7453ed531" Dec 03 13:00:36 crc kubenswrapper[4986]: E1203 13:00:36.216896 4986 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 13:00:36 crc kubenswrapper[4986]: I1203 13:00:36.217002 4986 
status_manager.go:851] "Failed to get status for pod" podUID="b2dbfdc4-7122-4b7f-bfdd-189396fb1c77" pod="openshift-marketplace/redhat-marketplace-l6bpp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-l6bpp\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:36 crc kubenswrapper[4986]: I1203 13:00:36.217335 4986 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:36 crc kubenswrapper[4986]: I1203 13:00:36.217960 4986 status_manager.go:851] "Failed to get status for pod" podUID="df90b001-135e-4c75-a67e-3084e905378a" pod="openshift-marketplace/redhat-marketplace-6mz6x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6mz6x\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:36 crc kubenswrapper[4986]: I1203 13:00:36.218382 4986 status_manager.go:851] "Failed to get status for pod" podUID="fb60d5f6-dca0-4f66-91c4-00dd32d26bcf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:36 crc kubenswrapper[4986]: I1203 13:00:36.218670 4986 status_manager.go:851] "Failed to get status for pod" podUID="9e009819-9f9d-48db-a20b-dc29cef30887" pod="openshift-marketplace/redhat-operators-wnfnb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wnfnb\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:36 crc kubenswrapper[4986]: I1203 13:00:36.218960 4986 
status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:36 crc kubenswrapper[4986]: I1203 13:00:36.219243 4986 status_manager.go:851] "Failed to get status for pod" podUID="0636c9e2-4b00-4d01-8426-5fbfd9f9fa88" pod="openshift-authentication/oauth-openshift-558db77b4-2k592" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-2k592\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:36 crc kubenswrapper[4986]: I1203 13:00:36.219540 4986 status_manager.go:851] "Failed to get status for pod" podUID="d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f" pod="openshift-marketplace/certified-operators-vzt5m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vzt5m\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:36 crc kubenswrapper[4986]: I1203 13:00:36.219842 4986 status_manager.go:851] "Failed to get status for pod" podUID="37f6dc70-6f91-4162-a225-239a999e4320" pod="openshift-marketplace/community-operators-bzs74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bzs74\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:36 crc kubenswrapper[4986]: I1203 13:00:36.220106 4986 status_manager.go:851] "Failed to get status for pod" podUID="bb0a57df-0313-4698-9e73-373c97e2fb72" pod="openshift-marketplace/certified-operators-9w6ts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9w6ts\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:36 crc 
kubenswrapper[4986]: I1203 13:00:36.220550 4986 status_manager.go:851] "Failed to get status for pod" podUID="49d6e247-25ce-45e1-b2fe-2e3ec70cf966" pod="openshift-marketplace/redhat-operators-dgwd8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-dgwd8\": dial tcp 38.129.56.112:6443: connect: connection refused" Dec 03 13:00:36 crc kubenswrapper[4986]: I1203 13:00:36.823185 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 13:00:36 crc kubenswrapper[4986]: I1203 13:00:36.823881 4986 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 03 13:00:36 crc kubenswrapper[4986]: I1203 13:00:36.823933 4986 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 03 13:00:36 crc kubenswrapper[4986]: I1203 13:00:36.945097 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412780-c4mlz" Dec 03 13:00:36 crc kubenswrapper[4986]: I1203 13:00:36.945413 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412780-c4mlz" Dec 03 13:00:37 crc kubenswrapper[4986]: I1203 13:00:37.232720 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"514936996be114d37a8a15bba8340d7ac073258b446387c8333a666139a36387"} Dec 03 13:00:37 crc kubenswrapper[4986]: I1203 13:00:37.233522 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"826270d2971ade5c583917f2d2b5709ff8f76ebc8021ed10c4be31445f6c8d64"} Dec 03 13:00:38 crc kubenswrapper[4986]: I1203 13:00:38.240999 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b244155aaae6e61db74f6cfa14ba30cfaca641fb22e461b3ecd295876238f732"} Dec 03 13:00:38 crc kubenswrapper[4986]: I1203 13:00:38.241327 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1b90ffeef91b03c2bf9926c52c11d1961f7f770027d1254e8918a70d3dd1f544"} Dec 03 13:00:38 crc kubenswrapper[4986]: I1203 13:00:38.241247 4986 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="231ed674-1d19-4ee2-b29c-a1b7453ed531" Dec 03 13:00:38 crc kubenswrapper[4986]: I1203 13:00:38.241362 4986 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="231ed674-1d19-4ee2-b29c-a1b7453ed531" Dec 03 13:00:38 crc kubenswrapper[4986]: I1203 13:00:38.241479 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 13:00:38 crc 
kubenswrapper[4986]: I1203 13:00:38.241514 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b54dfb8efb1b14c6b1d246b072b7168e07e13c268973081c67ecc1c32450f99e"} Dec 03 13:00:38 crc kubenswrapper[4986]: I1203 13:00:38.369086 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 13:00:40 crc kubenswrapper[4986]: I1203 13:00:40.761912 4986 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Dec 03 13:00:41 crc kubenswrapper[4986]: I1203 13:00:41.965480 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 13:00:41 crc kubenswrapper[4986]: I1203 13:00:41.966011 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 13:00:41 crc kubenswrapper[4986]: I1203 13:00:41.972775 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 13:00:43 crc kubenswrapper[4986]: I1203 13:00:43.263552 4986 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 13:00:43 crc kubenswrapper[4986]: I1203 13:00:43.267591 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412780-c4mlz" event={"ID":"f68bd6da-f9ec-44ee-9a80-1b5820c75d8e","Type":"ContainerStarted","Data":"ab85483bfdb98efcd91d904fcee038260faad1fc6b695ad376c3e94754816dd2"} Dec 03 13:00:43 crc kubenswrapper[4986]: I1203 13:00:43.516371 4986 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" 
oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="564bdfe1-4f89-4ab8-8a9e-c7140f763a83" Dec 03 13:00:44 crc kubenswrapper[4986]: I1203 13:00:44.274143 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29412780-c4mlz_f68bd6da-f9ec-44ee-9a80-1b5820c75d8e/collect-profiles/0.log" Dec 03 13:00:44 crc kubenswrapper[4986]: I1203 13:00:44.274436 4986 generic.go:334] "Generic (PLEG): container finished" podID="f68bd6da-f9ec-44ee-9a80-1b5820c75d8e" containerID="e4f38f90fa9d5d2989041bb12d203e764d18eb4cf3e4ff97a25e583616d8b82b" exitCode=1 Dec 03 13:00:44 crc kubenswrapper[4986]: I1203 13:00:44.274514 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412780-c4mlz" event={"ID":"f68bd6da-f9ec-44ee-9a80-1b5820c75d8e","Type":"ContainerDied","Data":"e4f38f90fa9d5d2989041bb12d203e764d18eb4cf3e4ff97a25e583616d8b82b"} Dec 03 13:00:44 crc kubenswrapper[4986]: I1203 13:00:44.274706 4986 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="231ed674-1d19-4ee2-b29c-a1b7453ed531" Dec 03 13:00:44 crc kubenswrapper[4986]: I1203 13:00:44.274717 4986 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="231ed674-1d19-4ee2-b29c-a1b7453ed531" Dec 03 13:00:44 crc kubenswrapper[4986]: I1203 13:00:44.278351 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 13:00:44 crc kubenswrapper[4986]: I1203 13:00:44.290963 4986 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="564bdfe1-4f89-4ab8-8a9e-c7140f763a83" Dec 03 13:00:45 crc kubenswrapper[4986]: I1203 13:00:45.290432 4986 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="231ed674-1d19-4ee2-b29c-a1b7453ed531" Dec 03 13:00:45 crc kubenswrapper[4986]: I1203 13:00:45.290474 4986 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="231ed674-1d19-4ee2-b29c-a1b7453ed531" Dec 03 13:00:45 crc kubenswrapper[4986]: I1203 13:00:45.296084 4986 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="564bdfe1-4f89-4ab8-8a9e-c7140f763a83" Dec 03 13:00:45 crc kubenswrapper[4986]: I1203 13:00:45.604161 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29412780-c4mlz_f68bd6da-f9ec-44ee-9a80-1b5820c75d8e/collect-profiles/0.log" Dec 03 13:00:45 crc kubenswrapper[4986]: I1203 13:00:45.604676 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412780-c4mlz" Dec 03 13:00:45 crc kubenswrapper[4986]: I1203 13:00:45.733150 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ts4l\" (UniqueName: \"kubernetes.io/projected/f68bd6da-f9ec-44ee-9a80-1b5820c75d8e-kube-api-access-2ts4l\") pod \"f68bd6da-f9ec-44ee-9a80-1b5820c75d8e\" (UID: \"f68bd6da-f9ec-44ee-9a80-1b5820c75d8e\") " Dec 03 13:00:45 crc kubenswrapper[4986]: I1203 13:00:45.733477 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f68bd6da-f9ec-44ee-9a80-1b5820c75d8e-config-volume\") pod \"f68bd6da-f9ec-44ee-9a80-1b5820c75d8e\" (UID: \"f68bd6da-f9ec-44ee-9a80-1b5820c75d8e\") " Dec 03 13:00:45 crc kubenswrapper[4986]: I1203 13:00:45.733606 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f68bd6da-f9ec-44ee-9a80-1b5820c75d8e-secret-volume\") 
pod \"f68bd6da-f9ec-44ee-9a80-1b5820c75d8e\" (UID: \"f68bd6da-f9ec-44ee-9a80-1b5820c75d8e\") " Dec 03 13:00:45 crc kubenswrapper[4986]: I1203 13:00:45.734256 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f68bd6da-f9ec-44ee-9a80-1b5820c75d8e-config-volume" (OuterVolumeSpecName: "config-volume") pod "f68bd6da-f9ec-44ee-9a80-1b5820c75d8e" (UID: "f68bd6da-f9ec-44ee-9a80-1b5820c75d8e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:00:45 crc kubenswrapper[4986]: I1203 13:00:45.735004 4986 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f68bd6da-f9ec-44ee-9a80-1b5820c75d8e-config-volume\") on node \"crc\" DevicePath \"\"" Dec 03 13:00:45 crc kubenswrapper[4986]: I1203 13:00:45.741416 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f68bd6da-f9ec-44ee-9a80-1b5820c75d8e-kube-api-access-2ts4l" (OuterVolumeSpecName: "kube-api-access-2ts4l") pod "f68bd6da-f9ec-44ee-9a80-1b5820c75d8e" (UID: "f68bd6da-f9ec-44ee-9a80-1b5820c75d8e"). InnerVolumeSpecName "kube-api-access-2ts4l". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:00:45 crc kubenswrapper[4986]: I1203 13:00:45.750357 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f68bd6da-f9ec-44ee-9a80-1b5820c75d8e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f68bd6da-f9ec-44ee-9a80-1b5820c75d8e" (UID: "f68bd6da-f9ec-44ee-9a80-1b5820c75d8e"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:00:45 crc kubenswrapper[4986]: I1203 13:00:45.836397 4986 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f68bd6da-f9ec-44ee-9a80-1b5820c75d8e-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 03 13:00:45 crc kubenswrapper[4986]: I1203 13:00:45.836448 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ts4l\" (UniqueName: \"kubernetes.io/projected/f68bd6da-f9ec-44ee-9a80-1b5820c75d8e-kube-api-access-2ts4l\") on node \"crc\" DevicePath \"\"" Dec 03 13:00:46 crc kubenswrapper[4986]: I1203 13:00:46.300175 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29412780-c4mlz_f68bd6da-f9ec-44ee-9a80-1b5820c75d8e/collect-profiles/0.log" Dec 03 13:00:46 crc kubenswrapper[4986]: I1203 13:00:46.300245 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412780-c4mlz" event={"ID":"f68bd6da-f9ec-44ee-9a80-1b5820c75d8e","Type":"ContainerDied","Data":"ab85483bfdb98efcd91d904fcee038260faad1fc6b695ad376c3e94754816dd2"} Dec 03 13:00:46 crc kubenswrapper[4986]: I1203 13:00:46.300276 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab85483bfdb98efcd91d904fcee038260faad1fc6b695ad376c3e94754816dd2" Dec 03 13:00:46 crc kubenswrapper[4986]: I1203 13:00:46.300377 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412780-c4mlz" Dec 03 13:00:46 crc kubenswrapper[4986]: I1203 13:00:46.824128 4986 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 03 13:00:46 crc kubenswrapper[4986]: I1203 13:00:46.824224 4986 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 03 13:00:52 crc kubenswrapper[4986]: I1203 13:00:52.782173 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 03 13:00:52 crc kubenswrapper[4986]: I1203 13:00:52.925330 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 03 13:00:53 crc kubenswrapper[4986]: I1203 13:00:53.394642 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 03 13:00:53 crc kubenswrapper[4986]: I1203 13:00:53.597639 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 03 13:00:53 crc kubenswrapper[4986]: I1203 13:00:53.820916 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 03 13:00:54 crc kubenswrapper[4986]: I1203 13:00:54.007852 4986 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 03 13:00:54 crc kubenswrapper[4986]: I1203 13:00:54.588476 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 03 13:00:54 crc kubenswrapper[4986]: I1203 13:00:54.650398 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 03 13:00:55 crc kubenswrapper[4986]: I1203 13:00:55.163745 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 03 13:00:55 crc kubenswrapper[4986]: I1203 13:00:55.167668 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 03 13:00:55 crc kubenswrapper[4986]: I1203 13:00:55.171006 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 03 13:00:55 crc kubenswrapper[4986]: I1203 13:00:55.404520 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 03 13:00:55 crc kubenswrapper[4986]: I1203 13:00:55.740894 4986 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 03 13:00:56 crc kubenswrapper[4986]: I1203 13:00:56.110670 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 03 13:00:56 crc kubenswrapper[4986]: I1203 13:00:56.166870 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 03 13:00:56 crc kubenswrapper[4986]: I1203 13:00:56.167717 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 03 13:00:56 crc kubenswrapper[4986]: I1203 13:00:56.285059 4986 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-image-registry"/"kube-root-ca.crt" Dec 03 13:00:56 crc kubenswrapper[4986]: I1203 13:00:56.388034 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 03 13:00:56 crc kubenswrapper[4986]: I1203 13:00:56.399512 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 03 13:00:56 crc kubenswrapper[4986]: I1203 13:00:56.544712 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 03 13:00:56 crc kubenswrapper[4986]: I1203 13:00:56.819744 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 03 13:00:56 crc kubenswrapper[4986]: I1203 13:00:56.823261 4986 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 03 13:00:56 crc kubenswrapper[4986]: I1203 13:00:56.823374 4986 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 03 13:00:56 crc kubenswrapper[4986]: I1203 13:00:56.823443 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 13:00:56 crc kubenswrapper[4986]: I1203 13:00:56.824472 4986 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" 
containerStatusID={"Type":"cri-o","ID":"768cfc71a151effff9936ec41adefea5f29723a2da73a86842ab285859280cdf"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Dec 03 13:00:56 crc kubenswrapper[4986]: I1203 13:00:56.824662 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://768cfc71a151effff9936ec41adefea5f29723a2da73a86842ab285859280cdf" gracePeriod=30 Dec 03 13:00:56 crc kubenswrapper[4986]: I1203 13:00:56.909579 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 03 13:00:56 crc kubenswrapper[4986]: I1203 13:00:56.972865 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 03 13:00:57 crc kubenswrapper[4986]: I1203 13:00:57.053381 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 03 13:00:57 crc kubenswrapper[4986]: I1203 13:00:57.185902 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 03 13:00:57 crc kubenswrapper[4986]: I1203 13:00:57.201204 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 03 13:00:57 crc kubenswrapper[4986]: I1203 13:00:57.280144 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 03 13:00:57 crc kubenswrapper[4986]: I1203 13:00:57.471324 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 03 13:00:57 crc kubenswrapper[4986]: I1203 13:00:57.561780 4986 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 03 13:00:57 crc kubenswrapper[4986]: I1203 13:00:57.614459 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 03 13:00:57 crc kubenswrapper[4986]: I1203 13:00:57.692460 4986 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 03 13:00:57 crc kubenswrapper[4986]: I1203 13:00:57.694457 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vzt5m" podStartSLOduration=40.180905296 podStartE2EDuration="2m30.694433372s" podCreationTimestamp="2025-12-03 12:58:27 +0000 UTC" firstStartedPulling="2025-12-03 12:58:31.070696812 +0000 UTC m=+170.537128003" lastFinishedPulling="2025-12-03 13:00:21.584224888 +0000 UTC m=+281.050656079" observedRunningTime="2025-12-03 13:00:43.343121573 +0000 UTC m=+302.809552754" watchObservedRunningTime="2025-12-03 13:00:57.694433372 +0000 UTC m=+317.160864603" Dec 03 13:00:57 crc kubenswrapper[4986]: I1203 13:00:57.695198 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-l6bpp" podStartSLOduration=39.248686259 podStartE2EDuration="2m28.695188538s" podCreationTimestamp="2025-12-03 12:58:29 +0000 UTC" firstStartedPulling="2025-12-03 12:58:31.081479517 +0000 UTC m=+170.547910708" lastFinishedPulling="2025-12-03 13:00:20.527981796 +0000 UTC m=+279.994412987" observedRunningTime="2025-12-03 13:00:43.414568097 +0000 UTC m=+302.880999288" watchObservedRunningTime="2025-12-03 13:00:57.695188538 +0000 UTC m=+317.161619769" Dec 03 13:00:57 crc kubenswrapper[4986]: I1203 13:00:57.696183 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dgwd8" podStartSLOduration=36.80749419 
podStartE2EDuration="2m27.696175331s" podCreationTimestamp="2025-12-03 12:58:30 +0000 UTC" firstStartedPulling="2025-12-03 12:58:32.143174013 +0000 UTC m=+171.609605204" lastFinishedPulling="2025-12-03 13:00:23.031855154 +0000 UTC m=+282.498286345" observedRunningTime="2025-12-03 13:00:43.400783047 +0000 UTC m=+302.867214248" watchObservedRunningTime="2025-12-03 13:00:57.696175331 +0000 UTC m=+317.162606562" Dec 03 13:00:57 crc kubenswrapper[4986]: I1203 13:00:57.696375 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wnfnb" podStartSLOduration=40.448033357 podStartE2EDuration="2m27.696368147s" podCreationTimestamp="2025-12-03 12:58:30 +0000 UTC" firstStartedPulling="2025-12-03 12:58:33.161827722 +0000 UTC m=+172.628258913" lastFinishedPulling="2025-12-03 13:00:20.410162512 +0000 UTC m=+279.876593703" observedRunningTime="2025-12-03 13:00:43.472482261 +0000 UTC m=+302.938913452" watchObservedRunningTime="2025-12-03 13:00:57.696368147 +0000 UTC m=+317.162799378" Dec 03 13:00:57 crc kubenswrapper[4986]: I1203 13:00:57.698060 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=40.698050305 podStartE2EDuration="40.698050305s" podCreationTimestamp="2025-12-03 13:00:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:00:43.423782791 +0000 UTC m=+302.890213982" watchObservedRunningTime="2025-12-03 13:00:57.698050305 +0000 UTC m=+317.164481536" Dec 03 13:00:57 crc kubenswrapper[4986]: I1203 13:00:57.700897 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-authentication/oauth-openshift-558db77b4-2k592","openshift-marketplace/certified-operators-9w6ts","openshift-marketplace/redhat-marketplace-6mz6x","openshift-kube-apiserver/kube-apiserver-crc","openshift-marketplace/community-operators-bzs74"] Dec 03 13:00:57 crc kubenswrapper[4986]: I1203 13:00:57.700992 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 03 13:00:57 crc kubenswrapper[4986]: I1203 13:00:57.701025 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412780-c4mlz"] Dec 03 13:00:57 crc kubenswrapper[4986]: I1203 13:00:57.705675 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 13:00:57 crc kubenswrapper[4986]: I1203 13:00:57.726720 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=14.726693020999999 podStartE2EDuration="14.726693021s" podCreationTimestamp="2025-12-03 13:00:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:00:57.720047594 +0000 UTC m=+317.186478835" watchObservedRunningTime="2025-12-03 13:00:57.726693021 +0000 UTC m=+317.193124222" Dec 03 13:00:57 crc kubenswrapper[4986]: I1203 13:00:57.788489 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 03 13:00:57 crc kubenswrapper[4986]: I1203 13:00:57.808149 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 03 13:00:57 crc kubenswrapper[4986]: I1203 13:00:57.953474 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 03 13:00:58 crc 
kubenswrapper[4986]: I1203 13:00:58.028263 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 03 13:00:58 crc kubenswrapper[4986]: I1203 13:00:58.256921 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 03 13:00:58 crc kubenswrapper[4986]: I1203 13:00:58.333955 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 03 13:00:58 crc kubenswrapper[4986]: I1203 13:00:58.395812 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 03 13:00:58 crc kubenswrapper[4986]: I1203 13:00:58.412526 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 03 13:00:58 crc kubenswrapper[4986]: I1203 13:00:58.630820 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 03 13:00:58 crc kubenswrapper[4986]: I1203 13:00:58.715517 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 03 13:00:58 crc kubenswrapper[4986]: I1203 13:00:58.905316 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 03 13:00:58 crc kubenswrapper[4986]: I1203 13:00:58.955558 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0636c9e2-4b00-4d01-8426-5fbfd9f9fa88" path="/var/lib/kubelet/pods/0636c9e2-4b00-4d01-8426-5fbfd9f9fa88/volumes" Dec 03 13:00:58 crc kubenswrapper[4986]: I1203 13:00:58.956828 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37f6dc70-6f91-4162-a225-239a999e4320" 
path="/var/lib/kubelet/pods/37f6dc70-6f91-4162-a225-239a999e4320/volumes" Dec 03 13:00:58 crc kubenswrapper[4986]: I1203 13:00:58.958750 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb0a57df-0313-4698-9e73-373c97e2fb72" path="/var/lib/kubelet/pods/bb0a57df-0313-4698-9e73-373c97e2fb72/volumes" Dec 03 13:00:58 crc kubenswrapper[4986]: I1203 13:00:58.961326 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df90b001-135e-4c75-a67e-3084e905378a" path="/var/lib/kubelet/pods/df90b001-135e-4c75-a67e-3084e905378a/volumes" Dec 03 13:00:59 crc kubenswrapper[4986]: I1203 13:00:59.185227 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 03 13:00:59 crc kubenswrapper[4986]: I1203 13:00:59.216041 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 03 13:00:59 crc kubenswrapper[4986]: I1203 13:00:59.245082 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 03 13:00:59 crc kubenswrapper[4986]: I1203 13:00:59.403637 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 03 13:00:59 crc kubenswrapper[4986]: I1203 13:00:59.471092 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 03 13:00:59 crc kubenswrapper[4986]: I1203 13:00:59.548918 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 03 13:00:59 crc kubenswrapper[4986]: I1203 13:00:59.667934 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 03 13:00:59 crc kubenswrapper[4986]: I1203 13:00:59.685989 4986 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 03 13:00:59 crc kubenswrapper[4986]: I1203 13:00:59.714966 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 03 13:00:59 crc kubenswrapper[4986]: I1203 13:00:59.982313 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 03 13:01:00 crc kubenswrapper[4986]: I1203 13:01:00.067919 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 03 13:01:00 crc kubenswrapper[4986]: I1203 13:01:00.243097 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 03 13:01:00 crc kubenswrapper[4986]: I1203 13:01:00.290541 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 03 13:01:00 crc kubenswrapper[4986]: I1203 13:01:00.351736 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 03 13:01:00 crc kubenswrapper[4986]: I1203 13:01:00.792757 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 03 13:01:00 crc kubenswrapper[4986]: I1203 13:01:00.826847 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 03 13:01:00 crc kubenswrapper[4986]: I1203 13:01:00.843886 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 03 13:01:00 crc kubenswrapper[4986]: I1203 13:01:00.876445 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 03 13:01:00 crc kubenswrapper[4986]: I1203 13:01:00.988867 
4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 03 13:01:01 crc kubenswrapper[4986]: I1203 13:01:01.091776 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 03 13:01:01 crc kubenswrapper[4986]: I1203 13:01:01.200863 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 03 13:01:01 crc kubenswrapper[4986]: I1203 13:01:01.238962 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 03 13:01:01 crc kubenswrapper[4986]: I1203 13:01:01.271351 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 03 13:01:01 crc kubenswrapper[4986]: I1203 13:01:01.271474 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 03 13:01:01 crc kubenswrapper[4986]: I1203 13:01:01.305617 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 03 13:01:01 crc kubenswrapper[4986]: I1203 13:01:01.383441 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 03 13:01:01 crc kubenswrapper[4986]: I1203 13:01:01.436873 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 03 13:01:01 crc kubenswrapper[4986]: I1203 13:01:01.514623 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 03 13:01:01 crc kubenswrapper[4986]: I1203 13:01:01.646379 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 03 
13:01:01 crc kubenswrapper[4986]: I1203 13:01:01.728549 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 03 13:01:01 crc kubenswrapper[4986]: I1203 13:01:01.856450 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 03 13:01:01 crc kubenswrapper[4986]: I1203 13:01:01.865988 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 03 13:01:02 crc kubenswrapper[4986]: I1203 13:01:02.086900 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 03 13:01:02 crc kubenswrapper[4986]: I1203 13:01:02.184588 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 03 13:01:02 crc kubenswrapper[4986]: I1203 13:01:02.249559 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 03 13:01:02 crc kubenswrapper[4986]: I1203 13:01:02.285837 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 03 13:01:02 crc kubenswrapper[4986]: I1203 13:01:02.286923 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 03 13:01:02 crc kubenswrapper[4986]: I1203 13:01:02.322746 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 03 13:01:02 crc kubenswrapper[4986]: I1203 13:01:02.338463 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 03 13:01:02 crc kubenswrapper[4986]: I1203 13:01:02.423601 4986 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console"/"default-dockercfg-chnjx" Dec 03 13:01:02 crc kubenswrapper[4986]: I1203 13:01:02.498721 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 03 13:01:02 crc kubenswrapper[4986]: I1203 13:01:02.510403 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 03 13:01:02 crc kubenswrapper[4986]: I1203 13:01:02.518358 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 03 13:01:02 crc kubenswrapper[4986]: I1203 13:01:02.672926 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 03 13:01:02 crc kubenswrapper[4986]: I1203 13:01:02.936387 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 03 13:01:02 crc kubenswrapper[4986]: I1203 13:01:02.949960 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 03 13:01:02 crc kubenswrapper[4986]: I1203 13:01:02.971309 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 03 13:01:03 crc kubenswrapper[4986]: I1203 13:01:03.033663 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 03 13:01:03 crc kubenswrapper[4986]: I1203 13:01:03.040251 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 03 13:01:03 crc kubenswrapper[4986]: I1203 13:01:03.190625 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" 
Dec 03 13:01:03 crc kubenswrapper[4986]: I1203 13:01:03.210558 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 03 13:01:03 crc kubenswrapper[4986]: I1203 13:01:03.322607 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 03 13:01:03 crc kubenswrapper[4986]: I1203 13:01:03.380578 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 03 13:01:03 crc kubenswrapper[4986]: I1203 13:01:03.428740 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 03 13:01:03 crc kubenswrapper[4986]: I1203 13:01:03.502514 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 03 13:01:03 crc kubenswrapper[4986]: I1203 13:01:03.561248 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 03 13:01:03 crc kubenswrapper[4986]: I1203 13:01:03.568237 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 03 13:01:03 crc kubenswrapper[4986]: I1203 13:01:03.639252 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 03 13:01:03 crc kubenswrapper[4986]: I1203 13:01:03.678452 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 03 13:01:03 crc kubenswrapper[4986]: I1203 13:01:03.823595 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 03 13:01:03 crc kubenswrapper[4986]: I1203 13:01:03.862453 4986 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 03 13:01:04 crc kubenswrapper[4986]: I1203 13:01:04.020566 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 03 13:01:04 crc kubenswrapper[4986]: I1203 13:01:04.060484 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 03 13:01:04 crc kubenswrapper[4986]: I1203 13:01:04.104681 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 03 13:01:04 crc kubenswrapper[4986]: I1203 13:01:04.194201 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 03 13:01:04 crc kubenswrapper[4986]: I1203 13:01:04.544610 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 03 13:01:04 crc kubenswrapper[4986]: I1203 13:01:04.657333 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 03 13:01:04 crc kubenswrapper[4986]: I1203 13:01:04.799817 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 03 13:01:04 crc kubenswrapper[4986]: I1203 13:01:04.887759 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 03 13:01:04 crc kubenswrapper[4986]: I1203 13:01:04.935853 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 03 13:01:04 crc kubenswrapper[4986]: I1203 13:01:04.948103 4986 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 03 13:01:05 crc kubenswrapper[4986]: I1203 13:01:05.063901 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 03 13:01:05 crc kubenswrapper[4986]: I1203 13:01:05.176810 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 03 13:01:05 crc kubenswrapper[4986]: I1203 13:01:05.262370 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 03 13:01:05 crc kubenswrapper[4986]: I1203 13:01:05.301511 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 03 13:01:05 crc kubenswrapper[4986]: I1203 13:01:05.456506 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 03 13:01:05 crc kubenswrapper[4986]: I1203 13:01:05.588743 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 03 13:01:05 crc kubenswrapper[4986]: I1203 13:01:05.594053 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 03 13:01:05 crc kubenswrapper[4986]: I1203 13:01:05.597846 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 03 13:01:05 crc kubenswrapper[4986]: I1203 13:01:05.614858 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 03 13:01:05 crc kubenswrapper[4986]: I1203 13:01:05.655343 4986 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 03 13:01:05 crc kubenswrapper[4986]: I1203 13:01:05.775215 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 03 13:01:05 crc kubenswrapper[4986]: I1203 13:01:05.909152 4986 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 03 13:01:05 crc kubenswrapper[4986]: I1203 13:01:05.909403 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://8aafe9b6b4a35d06b8c404e5fbc5b2c15dd3fddca0abd812941dcdbf3f1fe62a" gracePeriod=5 Dec 03 13:01:06 crc kubenswrapper[4986]: I1203 13:01:06.111055 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 03 13:01:06 crc kubenswrapper[4986]: I1203 13:01:06.270144 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 03 13:01:06 crc kubenswrapper[4986]: I1203 13:01:06.523883 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 03 13:01:06 crc kubenswrapper[4986]: I1203 13:01:06.635680 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 03 13:01:06 crc kubenswrapper[4986]: I1203 13:01:06.639809 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 03 13:01:06 crc kubenswrapper[4986]: I1203 13:01:06.914066 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 03 13:01:06 crc kubenswrapper[4986]: I1203 13:01:06.991420 4986 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 03 13:01:07 crc kubenswrapper[4986]: I1203 13:01:07.083189 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 03 13:01:07 crc kubenswrapper[4986]: I1203 13:01:07.117195 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 03 13:01:07 crc kubenswrapper[4986]: I1203 13:01:07.134864 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 03 13:01:07 crc kubenswrapper[4986]: I1203 13:01:07.171019 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 03 13:01:07 crc kubenswrapper[4986]: I1203 13:01:07.316142 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 03 13:01:07 crc kubenswrapper[4986]: I1203 13:01:07.407339 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 03 13:01:07 crc kubenswrapper[4986]: I1203 13:01:07.533325 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 03 13:01:07 crc kubenswrapper[4986]: I1203 13:01:07.586013 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 03 13:01:07 crc kubenswrapper[4986]: I1203 13:01:07.627664 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 03 13:01:07 crc kubenswrapper[4986]: I1203 13:01:07.813539 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 
03 13:01:07 crc kubenswrapper[4986]: I1203 13:01:07.909922 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 03 13:01:07 crc kubenswrapper[4986]: I1203 13:01:07.969976 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 03 13:01:08 crc kubenswrapper[4986]: I1203 13:01:08.168245 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 03 13:01:08 crc kubenswrapper[4986]: I1203 13:01:08.437962 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 03 13:01:08 crc kubenswrapper[4986]: I1203 13:01:08.975516 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 03 13:01:13 crc kubenswrapper[4986]: I1203 13:01:13.489439 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 03 13:01:13 crc kubenswrapper[4986]: I1203 13:01:13.490302 4986 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="8aafe9b6b4a35d06b8c404e5fbc5b2c15dd3fddca0abd812941dcdbf3f1fe62a" exitCode=137 Dec 03 13:01:14 crc kubenswrapper[4986]: I1203 13:01:14.250099 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 03 13:01:14 crc kubenswrapper[4986]: I1203 13:01:14.250166 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 13:01:14 crc kubenswrapper[4986]: I1203 13:01:14.315338 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 03 13:01:14 crc kubenswrapper[4986]: I1203 13:01:14.315396 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 03 13:01:14 crc kubenswrapper[4986]: I1203 13:01:14.315443 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 03 13:01:14 crc kubenswrapper[4986]: I1203 13:01:14.315464 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 03 13:01:14 crc kubenswrapper[4986]: I1203 13:01:14.315485 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 03 13:01:14 crc kubenswrapper[4986]: I1203 13:01:14.315502 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: 
"var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 13:01:14 crc kubenswrapper[4986]: I1203 13:01:14.315650 4986 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Dec 03 13:01:14 crc kubenswrapper[4986]: I1203 13:01:14.315693 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 13:01:14 crc kubenswrapper[4986]: I1203 13:01:14.315723 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 13:01:14 crc kubenswrapper[4986]: I1203 13:01:14.315739 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 13:01:14 crc kubenswrapper[4986]: I1203 13:01:14.322955 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 13:01:14 crc kubenswrapper[4986]: I1203 13:01:14.416623 4986 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Dec 03 13:01:14 crc kubenswrapper[4986]: I1203 13:01:14.416886 4986 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Dec 03 13:01:14 crc kubenswrapper[4986]: I1203 13:01:14.416966 4986 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 03 13:01:14 crc kubenswrapper[4986]: I1203 13:01:14.417030 4986 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 03 13:01:14 crc kubenswrapper[4986]: I1203 13:01:14.496174 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 03 13:01:14 crc kubenswrapper[4986]: I1203 13:01:14.496246 4986 scope.go:117] "RemoveContainer" containerID="8aafe9b6b4a35d06b8c404e5fbc5b2c15dd3fddca0abd812941dcdbf3f1fe62a" Dec 03 13:01:14 crc kubenswrapper[4986]: I1203 
13:01:14.496397 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 13:01:14 crc kubenswrapper[4986]: I1203 13:01:14.952160 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Dec 03 13:01:14 crc kubenswrapper[4986]: I1203 13:01:14.952559 4986 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Dec 03 13:01:14 crc kubenswrapper[4986]: I1203 13:01:14.964483 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 03 13:01:14 crc kubenswrapper[4986]: I1203 13:01:14.964786 4986 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="f02c726f-8e55-4bab-b76e-bc7abf3b59d0" Dec 03 13:01:14 crc kubenswrapper[4986]: I1203 13:01:14.969252 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 03 13:01:14 crc kubenswrapper[4986]: I1203 13:01:14.969392 4986 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="f02c726f-8e55-4bab-b76e-bc7abf3b59d0" Dec 03 13:01:15 crc kubenswrapper[4986]: I1203 13:01:15.978313 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 03 13:01:18 crc kubenswrapper[4986]: I1203 13:01:18.224620 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 03 13:01:18 crc kubenswrapper[4986]: I1203 13:01:18.703003 4986 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"client-ca" Dec 03 13:01:18 crc kubenswrapper[4986]: I1203 13:01:18.862737 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 03 13:01:19 crc kubenswrapper[4986]: I1203 13:01:19.221617 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 03 13:01:19 crc kubenswrapper[4986]: I1203 13:01:19.457326 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 03 13:01:19 crc kubenswrapper[4986]: I1203 13:01:19.545752 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 03 13:01:19 crc kubenswrapper[4986]: I1203 13:01:19.805343 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 03 13:01:20 crc kubenswrapper[4986]: I1203 13:01:20.687790 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 03 13:01:21 crc kubenswrapper[4986]: I1203 13:01:21.541472 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 03 13:01:22 crc kubenswrapper[4986]: I1203 13:01:22.166476 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 03 13:01:22 crc kubenswrapper[4986]: I1203 13:01:22.265066 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 03 13:01:22 crc kubenswrapper[4986]: I1203 13:01:22.343605 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 03 13:01:22 crc kubenswrapper[4986]: I1203 13:01:22.932357 4986 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 03 13:01:23 crc kubenswrapper[4986]: I1203 13:01:23.048195 4986 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 03 13:01:23 crc kubenswrapper[4986]: I1203 13:01:23.053973 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 03 13:01:23 crc kubenswrapper[4986]: I1203 13:01:23.487514 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 03 13:01:23 crc kubenswrapper[4986]: I1203 13:01:23.563248 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 03 13:01:23 crc kubenswrapper[4986]: I1203 13:01:23.748700 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 03 13:01:24 crc kubenswrapper[4986]: I1203 13:01:24.494113 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 03 13:01:24 crc kubenswrapper[4986]: I1203 13:01:24.788587 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 03 13:01:25 crc kubenswrapper[4986]: I1203 13:01:25.023994 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 03 13:01:25 crc kubenswrapper[4986]: I1203 13:01:25.242254 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 03 13:01:25 crc kubenswrapper[4986]: I1203 13:01:25.545302 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 03 13:01:26 crc kubenswrapper[4986]: I1203 13:01:26.167460 4986 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 03 13:01:26 crc kubenswrapper[4986]: I1203 13:01:26.666969 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 03 13:01:26 crc kubenswrapper[4986]: I1203 13:01:26.703905 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 03 13:01:26 crc kubenswrapper[4986]: I1203 13:01:26.802247 4986 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 03 13:01:26 crc kubenswrapper[4986]: I1203 13:01:26.849716 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 03 13:01:27 crc kubenswrapper[4986]: I1203 13:01:27.308916 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 03 13:01:27 crc kubenswrapper[4986]: I1203 13:01:27.339030 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 03 13:01:27 crc kubenswrapper[4986]: I1203 13:01:27.968125 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 03 13:01:28 crc kubenswrapper[4986]: I1203 13:01:28.116074 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 03 13:01:28 crc kubenswrapper[4986]: I1203 13:01:28.151562 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 03 13:01:28 crc kubenswrapper[4986]: I1203 13:01:28.589820 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Dec 03 13:01:28 crc 
kubenswrapper[4986]: I1203 13:01:28.592689 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 03 13:01:28 crc kubenswrapper[4986]: I1203 13:01:28.592763 4986 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="768cfc71a151effff9936ec41adefea5f29723a2da73a86842ab285859280cdf" exitCode=137 Dec 03 13:01:28 crc kubenswrapper[4986]: I1203 13:01:28.592807 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"768cfc71a151effff9936ec41adefea5f29723a2da73a86842ab285859280cdf"} Dec 03 13:01:28 crc kubenswrapper[4986]: I1203 13:01:28.592850 4986 scope.go:117] "RemoveContainer" containerID="b86d62d991d6516ed42f7cad655018e1ebc85c8c1b100d624c1c2e5e8f616cf4" Dec 03 13:01:28 crc kubenswrapper[4986]: I1203 13:01:28.629335 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 03 13:01:29 crc kubenswrapper[4986]: I1203 13:01:29.073566 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 03 13:01:29 crc kubenswrapper[4986]: I1203 13:01:29.474638 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 03 13:01:29 crc kubenswrapper[4986]: I1203 13:01:29.602160 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Dec 03 13:01:29 crc kubenswrapper[4986]: I1203 13:01:29.758207 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 03 
13:01:30 crc kubenswrapper[4986]: I1203 13:01:30.613504 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Dec 03 13:01:30 crc kubenswrapper[4986]: I1203 13:01:30.614937 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"75b111c18f7112197111151b64f55d7f2f467b870a5f64c8986ece75ba0ef621"} Dec 03 13:01:30 crc kubenswrapper[4986]: I1203 13:01:30.663179 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 03 13:01:30 crc kubenswrapper[4986]: I1203 13:01:30.808082 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 03 13:01:31 crc kubenswrapper[4986]: I1203 13:01:31.257261 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 03 13:01:31 crc kubenswrapper[4986]: I1203 13:01:31.689515 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 03 13:01:31 crc kubenswrapper[4986]: I1203 13:01:31.728958 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 03 13:01:31 crc kubenswrapper[4986]: I1203 13:01:31.848953 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 03 13:01:31 crc kubenswrapper[4986]: I1203 13:01:31.904437 4986 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.156371 4986 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-authentication/oauth-openshift-5b4bb77c4-bmtsq"] Dec 03 13:01:32 crc kubenswrapper[4986]: E1203 13:01:32.156629 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37f6dc70-6f91-4162-a225-239a999e4320" containerName="extract-content" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.156646 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="37f6dc70-6f91-4162-a225-239a999e4320" containerName="extract-content" Dec 03 13:01:32 crc kubenswrapper[4986]: E1203 13:01:32.156665 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df90b001-135e-4c75-a67e-3084e905378a" containerName="extract-utilities" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.156676 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="df90b001-135e-4c75-a67e-3084e905378a" containerName="extract-utilities" Dec 03 13:01:32 crc kubenswrapper[4986]: E1203 13:01:32.156695 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0636c9e2-4b00-4d01-8426-5fbfd9f9fa88" containerName="oauth-openshift" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.156705 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="0636c9e2-4b00-4d01-8426-5fbfd9f9fa88" containerName="oauth-openshift" Dec 03 13:01:32 crc kubenswrapper[4986]: E1203 13:01:32.156716 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb0a57df-0313-4698-9e73-373c97e2fb72" containerName="extract-utilities" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.156727 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb0a57df-0313-4698-9e73-373c97e2fb72" containerName="extract-utilities" Dec 03 13:01:32 crc kubenswrapper[4986]: E1203 13:01:32.156741 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df90b001-135e-4c75-a67e-3084e905378a" containerName="extract-content" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.156751 4986 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="df90b001-135e-4c75-a67e-3084e905378a" containerName="extract-content" Dec 03 13:01:32 crc kubenswrapper[4986]: E1203 13:01:32.156770 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb0a57df-0313-4698-9e73-373c97e2fb72" containerName="registry-server" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.156779 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb0a57df-0313-4698-9e73-373c97e2fb72" containerName="registry-server" Dec 03 13:01:32 crc kubenswrapper[4986]: E1203 13:01:32.156790 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df90b001-135e-4c75-a67e-3084e905378a" containerName="registry-server" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.156797 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="df90b001-135e-4c75-a67e-3084e905378a" containerName="registry-server" Dec 03 13:01:32 crc kubenswrapper[4986]: E1203 13:01:32.156809 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f68bd6da-f9ec-44ee-9a80-1b5820c75d8e" containerName="collect-profiles" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.156817 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="f68bd6da-f9ec-44ee-9a80-1b5820c75d8e" containerName="collect-profiles" Dec 03 13:01:32 crc kubenswrapper[4986]: E1203 13:01:32.156833 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb0a57df-0313-4698-9e73-373c97e2fb72" containerName="extract-content" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.156840 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb0a57df-0313-4698-9e73-373c97e2fb72" containerName="extract-content" Dec 03 13:01:32 crc kubenswrapper[4986]: E1203 13:01:32.156850 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37f6dc70-6f91-4162-a225-239a999e4320" containerName="extract-utilities" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.156858 4986 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="37f6dc70-6f91-4162-a225-239a999e4320" containerName="extract-utilities" Dec 03 13:01:32 crc kubenswrapper[4986]: E1203 13:01:32.156870 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.156878 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 03 13:01:32 crc kubenswrapper[4986]: E1203 13:01:32.156888 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb60d5f6-dca0-4f66-91c4-00dd32d26bcf" containerName="installer" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.156897 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb60d5f6-dca0-4f66-91c4-00dd32d26bcf" containerName="installer" Dec 03 13:01:32 crc kubenswrapper[4986]: E1203 13:01:32.156909 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37f6dc70-6f91-4162-a225-239a999e4320" containerName="registry-server" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.156917 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="37f6dc70-6f91-4162-a225-239a999e4320" containerName="registry-server" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.157038 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="f68bd6da-f9ec-44ee-9a80-1b5820c75d8e" containerName="collect-profiles" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.157051 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="37f6dc70-6f91-4162-a225-239a999e4320" containerName="registry-server" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.157064 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb60d5f6-dca0-4f66-91c4-00dd32d26bcf" containerName="installer" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.157074 4986 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="df90b001-135e-4c75-a67e-3084e905378a" containerName="registry-server" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.157086 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb0a57df-0313-4698-9e73-373c97e2fb72" containerName="registry-server" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.157098 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.157112 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="0636c9e2-4b00-4d01-8426-5fbfd9f9fa88" containerName="oauth-openshift" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.157573 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5b4bb77c4-bmtsq" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.165122 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.165428 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.165650 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.166682 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.166901 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.167318 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 03 
13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.167796 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.167999 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.169879 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.170223 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.170504 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.170982 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.179821 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.180103 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.186715 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.206837 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.254758 4986 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0a738312-fa84-4688-badd-903937f6d496-v4-0-config-system-service-ca\") pod \"oauth-openshift-5b4bb77c4-bmtsq\" (UID: \"0a738312-fa84-4688-badd-903937f6d496\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-bmtsq" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.254992 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0a738312-fa84-4688-badd-903937f6d496-v4-0-config-system-session\") pod \"oauth-openshift-5b4bb77c4-bmtsq\" (UID: \"0a738312-fa84-4688-badd-903937f6d496\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-bmtsq" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.255042 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0a738312-fa84-4688-badd-903937f6d496-v4-0-config-user-template-login\") pod \"oauth-openshift-5b4bb77c4-bmtsq\" (UID: \"0a738312-fa84-4688-badd-903937f6d496\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-bmtsq" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.255068 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0a738312-fa84-4688-badd-903937f6d496-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5b4bb77c4-bmtsq\" (UID: \"0a738312-fa84-4688-badd-903937f6d496\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-bmtsq" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.255236 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/0a738312-fa84-4688-badd-903937f6d496-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5b4bb77c4-bmtsq\" (UID: \"0a738312-fa84-4688-badd-903937f6d496\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-bmtsq" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.255269 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0a738312-fa84-4688-badd-903937f6d496-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5b4bb77c4-bmtsq\" (UID: \"0a738312-fa84-4688-badd-903937f6d496\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-bmtsq" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.255314 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0a738312-fa84-4688-badd-903937f6d496-v4-0-config-user-template-error\") pod \"oauth-openshift-5b4bb77c4-bmtsq\" (UID: \"0a738312-fa84-4688-badd-903937f6d496\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-bmtsq" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.255344 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd554\" (UniqueName: \"kubernetes.io/projected/0a738312-fa84-4688-badd-903937f6d496-kube-api-access-kd554\") pod \"oauth-openshift-5b4bb77c4-bmtsq\" (UID: \"0a738312-fa84-4688-badd-903937f6d496\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-bmtsq" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.255370 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0a738312-fa84-4688-badd-903937f6d496-v4-0-config-system-router-certs\") pod \"oauth-openshift-5b4bb77c4-bmtsq\" (UID: 
\"0a738312-fa84-4688-badd-903937f6d496\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-bmtsq" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.255399 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a738312-fa84-4688-badd-903937f6d496-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5b4bb77c4-bmtsq\" (UID: \"0a738312-fa84-4688-badd-903937f6d496\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-bmtsq" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.255428 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0a738312-fa84-4688-badd-903937f6d496-audit-dir\") pod \"oauth-openshift-5b4bb77c4-bmtsq\" (UID: \"0a738312-fa84-4688-badd-903937f6d496\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-bmtsq" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.255460 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0a738312-fa84-4688-badd-903937f6d496-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5b4bb77c4-bmtsq\" (UID: \"0a738312-fa84-4688-badd-903937f6d496\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-bmtsq" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.255500 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0a738312-fa84-4688-badd-903937f6d496-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5b4bb77c4-bmtsq\" (UID: \"0a738312-fa84-4688-badd-903937f6d496\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-bmtsq" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 
13:01:32.255535 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0a738312-fa84-4688-badd-903937f6d496-audit-policies\") pod \"oauth-openshift-5b4bb77c4-bmtsq\" (UID: \"0a738312-fa84-4688-badd-903937f6d496\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-bmtsq" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.299885 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.325451 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.356755 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a738312-fa84-4688-badd-903937f6d496-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5b4bb77c4-bmtsq\" (UID: \"0a738312-fa84-4688-badd-903937f6d496\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-bmtsq" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.356800 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0a738312-fa84-4688-badd-903937f6d496-audit-dir\") pod \"oauth-openshift-5b4bb77c4-bmtsq\" (UID: \"0a738312-fa84-4688-badd-903937f6d496\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-bmtsq" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.356825 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0a738312-fa84-4688-badd-903937f6d496-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5b4bb77c4-bmtsq\" (UID: 
\"0a738312-fa84-4688-badd-903937f6d496\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-bmtsq" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.356850 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0a738312-fa84-4688-badd-903937f6d496-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5b4bb77c4-bmtsq\" (UID: \"0a738312-fa84-4688-badd-903937f6d496\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-bmtsq" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.356872 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0a738312-fa84-4688-badd-903937f6d496-audit-policies\") pod \"oauth-openshift-5b4bb77c4-bmtsq\" (UID: \"0a738312-fa84-4688-badd-903937f6d496\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-bmtsq" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.356895 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0a738312-fa84-4688-badd-903937f6d496-v4-0-config-system-service-ca\") pod \"oauth-openshift-5b4bb77c4-bmtsq\" (UID: \"0a738312-fa84-4688-badd-903937f6d496\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-bmtsq" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.356918 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0a738312-fa84-4688-badd-903937f6d496-v4-0-config-system-session\") pod \"oauth-openshift-5b4bb77c4-bmtsq\" (UID: \"0a738312-fa84-4688-badd-903937f6d496\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-bmtsq" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.356936 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0a738312-fa84-4688-badd-903937f6d496-v4-0-config-user-template-login\") pod \"oauth-openshift-5b4bb77c4-bmtsq\" (UID: \"0a738312-fa84-4688-badd-903937f6d496\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-bmtsq" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.356953 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0a738312-fa84-4688-badd-903937f6d496-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5b4bb77c4-bmtsq\" (UID: \"0a738312-fa84-4688-badd-903937f6d496\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-bmtsq" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.356988 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0a738312-fa84-4688-badd-903937f6d496-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5b4bb77c4-bmtsq\" (UID: \"0a738312-fa84-4688-badd-903937f6d496\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-bmtsq" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.357005 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0a738312-fa84-4688-badd-903937f6d496-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5b4bb77c4-bmtsq\" (UID: \"0a738312-fa84-4688-badd-903937f6d496\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-bmtsq" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.357020 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0a738312-fa84-4688-badd-903937f6d496-v4-0-config-user-template-error\") pod \"oauth-openshift-5b4bb77c4-bmtsq\" (UID: 
\"0a738312-fa84-4688-badd-903937f6d496\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-bmtsq" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.357036 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kd554\" (UniqueName: \"kubernetes.io/projected/0a738312-fa84-4688-badd-903937f6d496-kube-api-access-kd554\") pod \"oauth-openshift-5b4bb77c4-bmtsq\" (UID: \"0a738312-fa84-4688-badd-903937f6d496\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-bmtsq" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.357059 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0a738312-fa84-4688-badd-903937f6d496-v4-0-config-system-router-certs\") pod \"oauth-openshift-5b4bb77c4-bmtsq\" (UID: \"0a738312-fa84-4688-badd-903937f6d496\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-bmtsq" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.357780 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0a738312-fa84-4688-badd-903937f6d496-audit-dir\") pod \"oauth-openshift-5b4bb77c4-bmtsq\" (UID: \"0a738312-fa84-4688-badd-903937f6d496\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-bmtsq" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.358641 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0a738312-fa84-4688-badd-903937f6d496-v4-0-config-system-service-ca\") pod \"oauth-openshift-5b4bb77c4-bmtsq\" (UID: \"0a738312-fa84-4688-badd-903937f6d496\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-bmtsq" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.358716 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/0a738312-fa84-4688-badd-903937f6d496-audit-policies\") pod \"oauth-openshift-5b4bb77c4-bmtsq\" (UID: \"0a738312-fa84-4688-badd-903937f6d496\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-bmtsq" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.358820 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a738312-fa84-4688-badd-903937f6d496-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5b4bb77c4-bmtsq\" (UID: \"0a738312-fa84-4688-badd-903937f6d496\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-bmtsq" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.358859 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0a738312-fa84-4688-badd-903937f6d496-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5b4bb77c4-bmtsq\" (UID: \"0a738312-fa84-4688-badd-903937f6d496\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-bmtsq" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.362717 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0a738312-fa84-4688-badd-903937f6d496-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5b4bb77c4-bmtsq\" (UID: \"0a738312-fa84-4688-badd-903937f6d496\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-bmtsq" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.362822 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0a738312-fa84-4688-badd-903937f6d496-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5b4bb77c4-bmtsq\" (UID: \"0a738312-fa84-4688-badd-903937f6d496\") " 
pod="openshift-authentication/oauth-openshift-5b4bb77c4-bmtsq" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.363238 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0a738312-fa84-4688-badd-903937f6d496-v4-0-config-system-router-certs\") pod \"oauth-openshift-5b4bb77c4-bmtsq\" (UID: \"0a738312-fa84-4688-badd-903937f6d496\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-bmtsq" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.363663 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0a738312-fa84-4688-badd-903937f6d496-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5b4bb77c4-bmtsq\" (UID: \"0a738312-fa84-4688-badd-903937f6d496\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-bmtsq" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.363670 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0a738312-fa84-4688-badd-903937f6d496-v4-0-config-system-session\") pod \"oauth-openshift-5b4bb77c4-bmtsq\" (UID: \"0a738312-fa84-4688-badd-903937f6d496\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-bmtsq" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.363736 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0a738312-fa84-4688-badd-903937f6d496-v4-0-config-user-template-login\") pod \"oauth-openshift-5b4bb77c4-bmtsq\" (UID: \"0a738312-fa84-4688-badd-903937f6d496\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-bmtsq" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.363928 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/0a738312-fa84-4688-badd-903937f6d496-v4-0-config-user-template-error\") pod \"oauth-openshift-5b4bb77c4-bmtsq\" (UID: \"0a738312-fa84-4688-badd-903937f6d496\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-bmtsq" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.368020 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0a738312-fa84-4688-badd-903937f6d496-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5b4bb77c4-bmtsq\" (UID: \"0a738312-fa84-4688-badd-903937f6d496\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-bmtsq" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.373009 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kd554\" (UniqueName: \"kubernetes.io/projected/0a738312-fa84-4688-badd-903937f6d496-kube-api-access-kd554\") pod \"oauth-openshift-5b4bb77c4-bmtsq\" (UID: \"0a738312-fa84-4688-badd-903937f6d496\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-bmtsq" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.480983 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5b4bb77c4-bmtsq" Dec 03 13:01:32 crc kubenswrapper[4986]: I1203 13:01:32.904697 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 03 13:01:33 crc kubenswrapper[4986]: I1203 13:01:33.042327 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 03 13:01:33 crc kubenswrapper[4986]: I1203 13:01:33.404999 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 03 13:01:33 crc kubenswrapper[4986]: I1203 13:01:33.484264 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 03 13:01:33 crc kubenswrapper[4986]: I1203 13:01:33.770113 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 03 13:01:33 crc kubenswrapper[4986]: I1203 13:01:33.910833 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 03 13:01:33 crc kubenswrapper[4986]: I1203 13:01:33.948937 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 03 13:01:34 crc kubenswrapper[4986]: I1203 13:01:34.393238 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 03 13:01:34 crc kubenswrapper[4986]: I1203 13:01:34.473217 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 03 13:01:34 crc kubenswrapper[4986]: I1203 13:01:34.526748 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 03 13:01:34 crc 
kubenswrapper[4986]: I1203 13:01:34.783403 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 03 13:01:34 crc kubenswrapper[4986]: I1203 13:01:34.847606 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 03 13:01:35 crc kubenswrapper[4986]: I1203 13:01:35.674577 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 03 13:01:36 crc kubenswrapper[4986]: I1203 13:01:36.060565 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 03 13:01:36 crc kubenswrapper[4986]: I1203 13:01:36.136271 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 03 13:01:36 crc kubenswrapper[4986]: I1203 13:01:36.823711 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 13:01:36 crc kubenswrapper[4986]: I1203 13:01:36.828612 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 13:01:36 crc kubenswrapper[4986]: I1203 13:01:36.970655 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 03 13:01:37 crc kubenswrapper[4986]: I1203 13:01:37.001166 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 03 13:01:37 crc kubenswrapper[4986]: I1203 13:01:37.194732 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 03 13:01:37 crc kubenswrapper[4986]: I1203 13:01:37.650851 4986 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 13:01:37 crc kubenswrapper[4986]: I1203 13:01:37.796750 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 03 13:01:37 crc kubenswrapper[4986]: I1203 13:01:37.864397 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 03 13:01:38 crc kubenswrapper[4986]: I1203 13:01:38.018823 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 03 13:01:38 crc kubenswrapper[4986]: I1203 13:01:38.333100 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 03 13:01:39 crc kubenswrapper[4986]: I1203 13:01:39.392040 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 03 13:01:39 crc kubenswrapper[4986]: I1203 13:01:39.748063 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 03 13:01:40 crc kubenswrapper[4986]: I1203 13:01:40.349882 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 03 13:01:41 crc kubenswrapper[4986]: I1203 13:01:41.537935 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 03 13:01:41 crc kubenswrapper[4986]: I1203 13:01:41.782839 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 03 13:01:42 crc kubenswrapper[4986]: I1203 13:01:42.094719 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 03 13:01:42 crc kubenswrapper[4986]: 
I1203 13:01:42.850844 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 03 13:01:42 crc kubenswrapper[4986]: I1203 13:01:42.978178 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 03 13:01:43 crc kubenswrapper[4986]: I1203 13:01:43.160799 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 03 13:01:43 crc kubenswrapper[4986]: I1203 13:01:43.449001 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 03 13:01:43 crc kubenswrapper[4986]: I1203 13:01:43.689501 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 03 13:01:44 crc kubenswrapper[4986]: I1203 13:01:44.647637 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 03 13:01:44 crc kubenswrapper[4986]: I1203 13:01:44.923413 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 03 13:01:45 crc kubenswrapper[4986]: I1203 13:01:45.152867 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 03 13:01:45 crc kubenswrapper[4986]: I1203 13:01:45.630209 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 03 13:01:45 crc kubenswrapper[4986]: I1203 13:01:45.730684 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 03 13:01:47 crc kubenswrapper[4986]: I1203 13:01:47.036805 4986 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29412780-2k56t"] Dec 03 13:01:47 crc kubenswrapper[4986]: I1203 13:01:47.037928 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412780-2k56t" Dec 03 13:01:47 crc kubenswrapper[4986]: I1203 13:01:47.039485 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 03 13:01:47 crc kubenswrapper[4986]: I1203 13:01:47.045714 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 03 13:01:47 crc kubenswrapper[4986]: I1203 13:01:47.052430 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3474dcb3-8f5b-4eed-ada7-ec711dae3b1a-secret-volume\") pod \"collect-profiles-29412780-2k56t\" (UID: \"3474dcb3-8f5b-4eed-ada7-ec711dae3b1a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412780-2k56t" Dec 03 13:01:47 crc kubenswrapper[4986]: I1203 13:01:47.052481 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9gzh\" (UniqueName: \"kubernetes.io/projected/3474dcb3-8f5b-4eed-ada7-ec711dae3b1a-kube-api-access-c9gzh\") pod \"collect-profiles-29412780-2k56t\" (UID: \"3474dcb3-8f5b-4eed-ada7-ec711dae3b1a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412780-2k56t" Dec 03 13:01:47 crc kubenswrapper[4986]: I1203 13:01:47.052501 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3474dcb3-8f5b-4eed-ada7-ec711dae3b1a-config-volume\") pod \"collect-profiles-29412780-2k56t\" (UID: \"3474dcb3-8f5b-4eed-ada7-ec711dae3b1a\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29412780-2k56t" Dec 03 13:01:47 crc kubenswrapper[4986]: I1203 13:01:47.137701 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 03 13:01:47 crc kubenswrapper[4986]: I1203 13:01:47.151934 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-npjwk"] Dec 03 13:01:47 crc kubenswrapper[4986]: I1203 13:01:47.152155 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-npjwk" podUID="01d4de81-8b50-4231-912d-3a65797a9754" containerName="route-controller-manager" containerID="cri-o://f48c2e9e3f9e0aeb1d7e95a20bdfa24181b8279264127fa3af63b75e655a036c" gracePeriod=30 Dec 03 13:01:47 crc kubenswrapper[4986]: I1203 13:01:47.153333 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3474dcb3-8f5b-4eed-ada7-ec711dae3b1a-secret-volume\") pod \"collect-profiles-29412780-2k56t\" (UID: \"3474dcb3-8f5b-4eed-ada7-ec711dae3b1a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412780-2k56t" Dec 03 13:01:47 crc kubenswrapper[4986]: I1203 13:01:47.154265 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9gzh\" (UniqueName: \"kubernetes.io/projected/3474dcb3-8f5b-4eed-ada7-ec711dae3b1a-kube-api-access-c9gzh\") pod \"collect-profiles-29412780-2k56t\" (UID: \"3474dcb3-8f5b-4eed-ada7-ec711dae3b1a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412780-2k56t" Dec 03 13:01:47 crc kubenswrapper[4986]: I1203 13:01:47.154325 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3474dcb3-8f5b-4eed-ada7-ec711dae3b1a-config-volume\") pod 
\"collect-profiles-29412780-2k56t\" (UID: \"3474dcb3-8f5b-4eed-ada7-ec711dae3b1a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412780-2k56t" Dec 03 13:01:47 crc kubenswrapper[4986]: I1203 13:01:47.155791 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3474dcb3-8f5b-4eed-ada7-ec711dae3b1a-config-volume\") pod \"collect-profiles-29412780-2k56t\" (UID: \"3474dcb3-8f5b-4eed-ada7-ec711dae3b1a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412780-2k56t" Dec 03 13:01:47 crc kubenswrapper[4986]: I1203 13:01:47.165517 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3474dcb3-8f5b-4eed-ada7-ec711dae3b1a-secret-volume\") pod \"collect-profiles-29412780-2k56t\" (UID: \"3474dcb3-8f5b-4eed-ada7-ec711dae3b1a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412780-2k56t" Dec 03 13:01:47 crc kubenswrapper[4986]: I1203 13:01:47.182809 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9gzh\" (UniqueName: \"kubernetes.io/projected/3474dcb3-8f5b-4eed-ada7-ec711dae3b1a-kube-api-access-c9gzh\") pod \"collect-profiles-29412780-2k56t\" (UID: \"3474dcb3-8f5b-4eed-ada7-ec711dae3b1a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412780-2k56t" Dec 03 13:01:47 crc kubenswrapper[4986]: I1203 13:01:47.195092 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ptgv8"] Dec 03 13:01:47 crc kubenswrapper[4986]: I1203 13:01:47.195511 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-ptgv8" podUID="0725188c-6c61-4369-8247-ffdda7e830e8" containerName="controller-manager" containerID="cri-o://154e1be646011d38a9254d5e57fdc5c8c3dbc6398f950700f4a0ce4d2893f756" gracePeriod=30 Dec 03 
13:01:47 crc kubenswrapper[4986]: I1203 13:01:47.352252 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412780-2k56t" Dec 03 13:01:47 crc kubenswrapper[4986]: I1203 13:01:47.676477 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 03 13:01:47 crc kubenswrapper[4986]: I1203 13:01:47.713112 4986 generic.go:334] "Generic (PLEG): container finished" podID="01d4de81-8b50-4231-912d-3a65797a9754" containerID="f48c2e9e3f9e0aeb1d7e95a20bdfa24181b8279264127fa3af63b75e655a036c" exitCode=0 Dec 03 13:01:47 crc kubenswrapper[4986]: I1203 13:01:47.713161 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-npjwk" event={"ID":"01d4de81-8b50-4231-912d-3a65797a9754","Type":"ContainerDied","Data":"f48c2e9e3f9e0aeb1d7e95a20bdfa24181b8279264127fa3af63b75e655a036c"} Dec 03 13:01:47 crc kubenswrapper[4986]: I1203 13:01:47.715425 4986 generic.go:334] "Generic (PLEG): container finished" podID="0725188c-6c61-4369-8247-ffdda7e830e8" containerID="154e1be646011d38a9254d5e57fdc5c8c3dbc6398f950700f4a0ce4d2893f756" exitCode=0 Dec 03 13:01:47 crc kubenswrapper[4986]: I1203 13:01:47.715487 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-ptgv8" event={"ID":"0725188c-6c61-4369-8247-ffdda7e830e8","Type":"ContainerDied","Data":"154e1be646011d38a9254d5e57fdc5c8c3dbc6398f950700f4a0ce4d2893f756"} Dec 03 13:01:47 crc kubenswrapper[4986]: I1203 13:01:47.983055 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-npjwk" Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.050679 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-ptgv8" Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.067962 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kr6m9\" (UniqueName: \"kubernetes.io/projected/0725188c-6c61-4369-8247-ffdda7e830e8-kube-api-access-kr6m9\") pod \"0725188c-6c61-4369-8247-ffdda7e830e8\" (UID: \"0725188c-6c61-4369-8247-ffdda7e830e8\") " Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.068019 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0725188c-6c61-4369-8247-ffdda7e830e8-proxy-ca-bundles\") pod \"0725188c-6c61-4369-8247-ffdda7e830e8\" (UID: \"0725188c-6c61-4369-8247-ffdda7e830e8\") " Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.068052 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0725188c-6c61-4369-8247-ffdda7e830e8-client-ca\") pod \"0725188c-6c61-4369-8247-ffdda7e830e8\" (UID: \"0725188c-6c61-4369-8247-ffdda7e830e8\") " Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.069069 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0725188c-6c61-4369-8247-ffdda7e830e8-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "0725188c-6c61-4369-8247-ffdda7e830e8" (UID: "0725188c-6c61-4369-8247-ffdda7e830e8"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.069083 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0725188c-6c61-4369-8247-ffdda7e830e8-client-ca" (OuterVolumeSpecName: "client-ca") pod "0725188c-6c61-4369-8247-ffdda7e830e8" (UID: "0725188c-6c61-4369-8247-ffdda7e830e8"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.073272 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0725188c-6c61-4369-8247-ffdda7e830e8-kube-api-access-kr6m9" (OuterVolumeSpecName: "kube-api-access-kr6m9") pod "0725188c-6c61-4369-8247-ffdda7e830e8" (UID: "0725188c-6c61-4369-8247-ffdda7e830e8"). InnerVolumeSpecName "kube-api-access-kr6m9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.169187 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01d4de81-8b50-4231-912d-3a65797a9754-serving-cert\") pod \"01d4de81-8b50-4231-912d-3a65797a9754\" (UID: \"01d4de81-8b50-4231-912d-3a65797a9754\") " Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.169272 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0725188c-6c61-4369-8247-ffdda7e830e8-config\") pod \"0725188c-6c61-4369-8247-ffdda7e830e8\" (UID: \"0725188c-6c61-4369-8247-ffdda7e830e8\") " Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.169441 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01d4de81-8b50-4231-912d-3a65797a9754-config\") pod \"01d4de81-8b50-4231-912d-3a65797a9754\" (UID: \"01d4de81-8b50-4231-912d-3a65797a9754\") " Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.169516 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0725188c-6c61-4369-8247-ffdda7e830e8-serving-cert\") pod \"0725188c-6c61-4369-8247-ffdda7e830e8\" (UID: \"0725188c-6c61-4369-8247-ffdda7e830e8\") " Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.169542 4986 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01d4de81-8b50-4231-912d-3a65797a9754-client-ca\") pod \"01d4de81-8b50-4231-912d-3a65797a9754\" (UID: \"01d4de81-8b50-4231-912d-3a65797a9754\") " Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.169577 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrj84\" (UniqueName: \"kubernetes.io/projected/01d4de81-8b50-4231-912d-3a65797a9754-kube-api-access-vrj84\") pod \"01d4de81-8b50-4231-912d-3a65797a9754\" (UID: \"01d4de81-8b50-4231-912d-3a65797a9754\") " Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.169788 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kr6m9\" (UniqueName: \"kubernetes.io/projected/0725188c-6c61-4369-8247-ffdda7e830e8-kube-api-access-kr6m9\") on node \"crc\" DevicePath \"\"" Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.169806 4986 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0725188c-6c61-4369-8247-ffdda7e830e8-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.169818 4986 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0725188c-6c61-4369-8247-ffdda7e830e8-client-ca\") on node \"crc\" DevicePath \"\"" Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.170245 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01d4de81-8b50-4231-912d-3a65797a9754-client-ca" (OuterVolumeSpecName: "client-ca") pod "01d4de81-8b50-4231-912d-3a65797a9754" (UID: "01d4de81-8b50-4231-912d-3a65797a9754"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.170308 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01d4de81-8b50-4231-912d-3a65797a9754-config" (OuterVolumeSpecName: "config") pod "01d4de81-8b50-4231-912d-3a65797a9754" (UID: "01d4de81-8b50-4231-912d-3a65797a9754"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.170317 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0725188c-6c61-4369-8247-ffdda7e830e8-config" (OuterVolumeSpecName: "config") pod "0725188c-6c61-4369-8247-ffdda7e830e8" (UID: "0725188c-6c61-4369-8247-ffdda7e830e8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.173448 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0725188c-6c61-4369-8247-ffdda7e830e8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0725188c-6c61-4369-8247-ffdda7e830e8" (UID: "0725188c-6c61-4369-8247-ffdda7e830e8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.173507 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01d4de81-8b50-4231-912d-3a65797a9754-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01d4de81-8b50-4231-912d-3a65797a9754" (UID: "01d4de81-8b50-4231-912d-3a65797a9754"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.179539 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01d4de81-8b50-4231-912d-3a65797a9754-kube-api-access-vrj84" (OuterVolumeSpecName: "kube-api-access-vrj84") pod "01d4de81-8b50-4231-912d-3a65797a9754" (UID: "01d4de81-8b50-4231-912d-3a65797a9754"). InnerVolumeSpecName "kube-api-access-vrj84". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.270900 4986 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0725188c-6c61-4369-8247-ffdda7e830e8-config\") on node \"crc\" DevicePath \"\"" Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.270925 4986 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01d4de81-8b50-4231-912d-3a65797a9754-config\") on node \"crc\" DevicePath \"\"" Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.270937 4986 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0725188c-6c61-4369-8247-ffdda7e830e8-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.270947 4986 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01d4de81-8b50-4231-912d-3a65797a9754-client-ca\") on node \"crc\" DevicePath \"\"" Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.270956 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrj84\" (UniqueName: \"kubernetes.io/projected/01d4de81-8b50-4231-912d-3a65797a9754-kube-api-access-vrj84\") on node \"crc\" DevicePath \"\"" Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.270966 4986 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/01d4de81-8b50-4231-912d-3a65797a9754-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.372659 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.642013 4986 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.720706 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-npjwk" event={"ID":"01d4de81-8b50-4231-912d-3a65797a9754","Type":"ContainerDied","Data":"ccfefa315ff2d6812be0e1f814af932bef15b070c7eb3fcc742d7ad17bdf3865"} Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.720722 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-npjwk" Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.720758 4986 scope.go:117] "RemoveContainer" containerID="f48c2e9e3f9e0aeb1d7e95a20bdfa24181b8279264127fa3af63b75e655a036c" Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.722730 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-ptgv8" event={"ID":"0725188c-6c61-4369-8247-ffdda7e830e8","Type":"ContainerDied","Data":"255fe330a4f361e0c59fbe19ee2be463a0f8ff38bfb19a24985f3c7c7d03a2ef"} Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.722809 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-ptgv8" Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.743887 4986 scope.go:117] "RemoveContainer" containerID="154e1be646011d38a9254d5e57fdc5c8c3dbc6398f950700f4a0ce4d2893f756" Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.765233 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-npjwk"] Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.771979 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-npjwk"] Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.776846 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ptgv8"] Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.780911 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ptgv8"] Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.874862 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-57d9976b48-g2h9h"] Dec 03 13:01:48 crc kubenswrapper[4986]: E1203 13:01:48.875187 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01d4de81-8b50-4231-912d-3a65797a9754" containerName="route-controller-manager" Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.875207 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="01d4de81-8b50-4231-912d-3a65797a9754" containerName="route-controller-manager" Dec 03 13:01:48 crc kubenswrapper[4986]: E1203 13:01:48.875239 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0725188c-6c61-4369-8247-ffdda7e830e8" containerName="controller-manager" Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.875249 4986 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0725188c-6c61-4369-8247-ffdda7e830e8" containerName="controller-manager" Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.875416 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="0725188c-6c61-4369-8247-ffdda7e830e8" containerName="controller-manager" Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.875436 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="01d4de81-8b50-4231-912d-3a65797a9754" containerName="route-controller-manager" Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.875883 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-57d9976b48-g2h9h" Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.878623 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.879154 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.879314 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.879470 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.879391 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.879432 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.884064 4986 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-df59944f9-hswk5"] Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.884884 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-df59944f9-hswk5" Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.887908 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.889118 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.889434 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.889709 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.890019 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.890929 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.891952 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.948484 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01d4de81-8b50-4231-912d-3a65797a9754" path="/var/lib/kubelet/pods/01d4de81-8b50-4231-912d-3a65797a9754/volumes" Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.948963 4986 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="0725188c-6c61-4369-8247-ffdda7e830e8" path="/var/lib/kubelet/pods/0725188c-6c61-4369-8247-ffdda7e830e8/volumes" Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.980167 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cd8c2aa7-50ef-450f-b223-94a252c0870c-proxy-ca-bundles\") pod \"controller-manager-57d9976b48-g2h9h\" (UID: \"cd8c2aa7-50ef-450f-b223-94a252c0870c\") " pod="openshift-controller-manager/controller-manager-57d9976b48-g2h9h" Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.980216 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cd8c2aa7-50ef-450f-b223-94a252c0870c-client-ca\") pod \"controller-manager-57d9976b48-g2h9h\" (UID: \"cd8c2aa7-50ef-450f-b223-94a252c0870c\") " pod="openshift-controller-manager/controller-manager-57d9976b48-g2h9h" Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.980235 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd8c2aa7-50ef-450f-b223-94a252c0870c-config\") pod \"controller-manager-57d9976b48-g2h9h\" (UID: \"cd8c2aa7-50ef-450f-b223-94a252c0870c\") " pod="openshift-controller-manager/controller-manager-57d9976b48-g2h9h" Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.980759 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d899q\" (UniqueName: \"kubernetes.io/projected/cd8c2aa7-50ef-450f-b223-94a252c0870c-kube-api-access-d899q\") pod \"controller-manager-57d9976b48-g2h9h\" (UID: \"cd8c2aa7-50ef-450f-b223-94a252c0870c\") " pod="openshift-controller-manager/controller-manager-57d9976b48-g2h9h" Dec 03 13:01:48 crc kubenswrapper[4986]: I1203 13:01:48.980802 4986 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd8c2aa7-50ef-450f-b223-94a252c0870c-serving-cert\") pod \"controller-manager-57d9976b48-g2h9h\" (UID: \"cd8c2aa7-50ef-450f-b223-94a252c0870c\") " pod="openshift-controller-manager/controller-manager-57d9976b48-g2h9h" Dec 03 13:01:49 crc kubenswrapper[4986]: I1203 13:01:49.082318 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7040df4-cee5-4ccc-8a8f-93f65c1e3137-serving-cert\") pod \"route-controller-manager-df59944f9-hswk5\" (UID: \"d7040df4-cee5-4ccc-8a8f-93f65c1e3137\") " pod="openshift-route-controller-manager/route-controller-manager-df59944f9-hswk5" Dec 03 13:01:49 crc kubenswrapper[4986]: I1203 13:01:49.082397 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cd8c2aa7-50ef-450f-b223-94a252c0870c-proxy-ca-bundles\") pod \"controller-manager-57d9976b48-g2h9h\" (UID: \"cd8c2aa7-50ef-450f-b223-94a252c0870c\") " pod="openshift-controller-manager/controller-manager-57d9976b48-g2h9h" Dec 03 13:01:49 crc kubenswrapper[4986]: I1203 13:01:49.082439 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgzzg\" (UniqueName: \"kubernetes.io/projected/d7040df4-cee5-4ccc-8a8f-93f65c1e3137-kube-api-access-qgzzg\") pod \"route-controller-manager-df59944f9-hswk5\" (UID: \"d7040df4-cee5-4ccc-8a8f-93f65c1e3137\") " pod="openshift-route-controller-manager/route-controller-manager-df59944f9-hswk5" Dec 03 13:01:49 crc kubenswrapper[4986]: I1203 13:01:49.082507 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cd8c2aa7-50ef-450f-b223-94a252c0870c-client-ca\") pod \"controller-manager-57d9976b48-g2h9h\" (UID: 
\"cd8c2aa7-50ef-450f-b223-94a252c0870c\") " pod="openshift-controller-manager/controller-manager-57d9976b48-g2h9h" Dec 03 13:01:49 crc kubenswrapper[4986]: I1203 13:01:49.082543 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd8c2aa7-50ef-450f-b223-94a252c0870c-config\") pod \"controller-manager-57d9976b48-g2h9h\" (UID: \"cd8c2aa7-50ef-450f-b223-94a252c0870c\") " pod="openshift-controller-manager/controller-manager-57d9976b48-g2h9h" Dec 03 13:01:49 crc kubenswrapper[4986]: I1203 13:01:49.082594 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7040df4-cee5-4ccc-8a8f-93f65c1e3137-config\") pod \"route-controller-manager-df59944f9-hswk5\" (UID: \"d7040df4-cee5-4ccc-8a8f-93f65c1e3137\") " pod="openshift-route-controller-manager/route-controller-manager-df59944f9-hswk5" Dec 03 13:01:49 crc kubenswrapper[4986]: I1203 13:01:49.082682 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d899q\" (UniqueName: \"kubernetes.io/projected/cd8c2aa7-50ef-450f-b223-94a252c0870c-kube-api-access-d899q\") pod \"controller-manager-57d9976b48-g2h9h\" (UID: \"cd8c2aa7-50ef-450f-b223-94a252c0870c\") " pod="openshift-controller-manager/controller-manager-57d9976b48-g2h9h" Dec 03 13:01:49 crc kubenswrapper[4986]: I1203 13:01:49.082754 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd8c2aa7-50ef-450f-b223-94a252c0870c-serving-cert\") pod \"controller-manager-57d9976b48-g2h9h\" (UID: \"cd8c2aa7-50ef-450f-b223-94a252c0870c\") " pod="openshift-controller-manager/controller-manager-57d9976b48-g2h9h" Dec 03 13:01:49 crc kubenswrapper[4986]: I1203 13:01:49.082815 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/d7040df4-cee5-4ccc-8a8f-93f65c1e3137-client-ca\") pod \"route-controller-manager-df59944f9-hswk5\" (UID: \"d7040df4-cee5-4ccc-8a8f-93f65c1e3137\") " pod="openshift-route-controller-manager/route-controller-manager-df59944f9-hswk5" Dec 03 13:01:49 crc kubenswrapper[4986]: I1203 13:01:49.084393 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd8c2aa7-50ef-450f-b223-94a252c0870c-config\") pod \"controller-manager-57d9976b48-g2h9h\" (UID: \"cd8c2aa7-50ef-450f-b223-94a252c0870c\") " pod="openshift-controller-manager/controller-manager-57d9976b48-g2h9h" Dec 03 13:01:49 crc kubenswrapper[4986]: I1203 13:01:49.084543 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cd8c2aa7-50ef-450f-b223-94a252c0870c-proxy-ca-bundles\") pod \"controller-manager-57d9976b48-g2h9h\" (UID: \"cd8c2aa7-50ef-450f-b223-94a252c0870c\") " pod="openshift-controller-manager/controller-manager-57d9976b48-g2h9h" Dec 03 13:01:49 crc kubenswrapper[4986]: I1203 13:01:49.084772 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cd8c2aa7-50ef-450f-b223-94a252c0870c-client-ca\") pod \"controller-manager-57d9976b48-g2h9h\" (UID: \"cd8c2aa7-50ef-450f-b223-94a252c0870c\") " pod="openshift-controller-manager/controller-manager-57d9976b48-g2h9h" Dec 03 13:01:49 crc kubenswrapper[4986]: I1203 13:01:49.093642 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd8c2aa7-50ef-450f-b223-94a252c0870c-serving-cert\") pod \"controller-manager-57d9976b48-g2h9h\" (UID: \"cd8c2aa7-50ef-450f-b223-94a252c0870c\") " pod="openshift-controller-manager/controller-manager-57d9976b48-g2h9h" Dec 03 13:01:49 crc kubenswrapper[4986]: I1203 13:01:49.103212 4986 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-d899q\" (UniqueName: \"kubernetes.io/projected/cd8c2aa7-50ef-450f-b223-94a252c0870c-kube-api-access-d899q\") pod \"controller-manager-57d9976b48-g2h9h\" (UID: \"cd8c2aa7-50ef-450f-b223-94a252c0870c\") " pod="openshift-controller-manager/controller-manager-57d9976b48-g2h9h" Dec 03 13:01:49 crc kubenswrapper[4986]: I1203 13:01:49.184315 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d7040df4-cee5-4ccc-8a8f-93f65c1e3137-client-ca\") pod \"route-controller-manager-df59944f9-hswk5\" (UID: \"d7040df4-cee5-4ccc-8a8f-93f65c1e3137\") " pod="openshift-route-controller-manager/route-controller-manager-df59944f9-hswk5" Dec 03 13:01:49 crc kubenswrapper[4986]: I1203 13:01:49.184677 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7040df4-cee5-4ccc-8a8f-93f65c1e3137-serving-cert\") pod \"route-controller-manager-df59944f9-hswk5\" (UID: \"d7040df4-cee5-4ccc-8a8f-93f65c1e3137\") " pod="openshift-route-controller-manager/route-controller-manager-df59944f9-hswk5" Dec 03 13:01:49 crc kubenswrapper[4986]: I1203 13:01:49.184771 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgzzg\" (UniqueName: \"kubernetes.io/projected/d7040df4-cee5-4ccc-8a8f-93f65c1e3137-kube-api-access-qgzzg\") pod \"route-controller-manager-df59944f9-hswk5\" (UID: \"d7040df4-cee5-4ccc-8a8f-93f65c1e3137\") " pod="openshift-route-controller-manager/route-controller-manager-df59944f9-hswk5" Dec 03 13:01:49 crc kubenswrapper[4986]: I1203 13:01:49.184899 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7040df4-cee5-4ccc-8a8f-93f65c1e3137-config\") pod \"route-controller-manager-df59944f9-hswk5\" (UID: \"d7040df4-cee5-4ccc-8a8f-93f65c1e3137\") " 
pod="openshift-route-controller-manager/route-controller-manager-df59944f9-hswk5" Dec 03 13:01:49 crc kubenswrapper[4986]: I1203 13:01:49.186160 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7040df4-cee5-4ccc-8a8f-93f65c1e3137-config\") pod \"route-controller-manager-df59944f9-hswk5\" (UID: \"d7040df4-cee5-4ccc-8a8f-93f65c1e3137\") " pod="openshift-route-controller-manager/route-controller-manager-df59944f9-hswk5" Dec 03 13:01:49 crc kubenswrapper[4986]: I1203 13:01:49.186253 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d7040df4-cee5-4ccc-8a8f-93f65c1e3137-client-ca\") pod \"route-controller-manager-df59944f9-hswk5\" (UID: \"d7040df4-cee5-4ccc-8a8f-93f65c1e3137\") " pod="openshift-route-controller-manager/route-controller-manager-df59944f9-hswk5" Dec 03 13:01:49 crc kubenswrapper[4986]: I1203 13:01:49.189393 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7040df4-cee5-4ccc-8a8f-93f65c1e3137-serving-cert\") pod \"route-controller-manager-df59944f9-hswk5\" (UID: \"d7040df4-cee5-4ccc-8a8f-93f65c1e3137\") " pod="openshift-route-controller-manager/route-controller-manager-df59944f9-hswk5" Dec 03 13:01:49 crc kubenswrapper[4986]: I1203 13:01:49.194236 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-57d9976b48-g2h9h" Dec 03 13:01:49 crc kubenswrapper[4986]: I1203 13:01:49.207408 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgzzg\" (UniqueName: \"kubernetes.io/projected/d7040df4-cee5-4ccc-8a8f-93f65c1e3137-kube-api-access-qgzzg\") pod \"route-controller-manager-df59944f9-hswk5\" (UID: \"d7040df4-cee5-4ccc-8a8f-93f65c1e3137\") " pod="openshift-route-controller-manager/route-controller-manager-df59944f9-hswk5" Dec 03 13:01:49 crc kubenswrapper[4986]: I1203 13:01:49.501847 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-df59944f9-hswk5" Dec 03 13:01:50 crc kubenswrapper[4986]: I1203 13:01:50.028428 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 03 13:01:50 crc kubenswrapper[4986]: I1203 13:01:50.601128 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 03 13:01:51 crc kubenswrapper[4986]: I1203 13:01:51.992355 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-57d9976b48-g2h9h"] Dec 03 13:01:51 crc kubenswrapper[4986]: I1203 13:01:51.997486 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5b4bb77c4-bmtsq"] Dec 03 13:01:52 crc kubenswrapper[4986]: I1203 13:01:52.006498 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412780-2k56t"] Dec 03 13:01:52 crc kubenswrapper[4986]: I1203 13:01:52.011999 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-df59944f9-hswk5"] Dec 03 13:01:52 crc kubenswrapper[4986]: I1203 13:01:52.495592 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager/controller-manager-57d9976b48-g2h9h"] Dec 03 13:01:52 crc kubenswrapper[4986]: I1203 13:01:52.498677 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412780-2k56t"] Dec 03 13:01:52 crc kubenswrapper[4986]: I1203 13:01:52.579256 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-df59944f9-hswk5"] Dec 03 13:01:52 crc kubenswrapper[4986]: I1203 13:01:52.589167 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5b4bb77c4-bmtsq"] Dec 03 13:01:52 crc kubenswrapper[4986]: I1203 13:01:52.754091 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5b4bb77c4-bmtsq" event={"ID":"0a738312-fa84-4688-badd-903937f6d496","Type":"ContainerStarted","Data":"f4b646a515039233bd0fa5712f56e755faf54aa4b6f953ec8234cda02ede0acb"} Dec 03 13:01:52 crc kubenswrapper[4986]: I1203 13:01:52.754938 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57d9976b48-g2h9h" event={"ID":"cd8c2aa7-50ef-450f-b223-94a252c0870c","Type":"ContainerStarted","Data":"e050c18158bc4f944630618d692b122a753e0c035484bbae1d31544dda655de3"} Dec 03 13:01:52 crc kubenswrapper[4986]: I1203 13:01:52.755831 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412780-2k56t" event={"ID":"3474dcb3-8f5b-4eed-ada7-ec711dae3b1a","Type":"ContainerStarted","Data":"6f385c1572da630d3b6e8d43273eae482122ca9d1ffaafb2253da570441bcc15"} Dec 03 13:01:52 crc kubenswrapper[4986]: I1203 13:01:52.756902 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-df59944f9-hswk5" 
event={"ID":"d7040df4-cee5-4ccc-8a8f-93f65c1e3137","Type":"ContainerStarted","Data":"b92176ecd10989d41235de9ca317e9f0004d9ae0205d0f5ef587af31443c8b09"} Dec 03 13:01:55 crc kubenswrapper[4986]: I1203 13:01:55.778228 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-df59944f9-hswk5" event={"ID":"d7040df4-cee5-4ccc-8a8f-93f65c1e3137","Type":"ContainerStarted","Data":"2b29da8e137436ad3acc355c7820a3466bd59cea6bb461171075e215863b361c"} Dec 03 13:01:55 crc kubenswrapper[4986]: I1203 13:01:55.778602 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-df59944f9-hswk5" Dec 03 13:01:55 crc kubenswrapper[4986]: I1203 13:01:55.780823 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5b4bb77c4-bmtsq" event={"ID":"0a738312-fa84-4688-badd-903937f6d496","Type":"ContainerStarted","Data":"08d993dcaa9fc8c298efd771a3452f222be2ec0b8c9569db338d5fcd1881f7de"} Dec 03 13:01:55 crc kubenswrapper[4986]: I1203 13:01:55.781132 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5b4bb77c4-bmtsq" Dec 03 13:01:55 crc kubenswrapper[4986]: I1203 13:01:55.782789 4986 generic.go:334] "Generic (PLEG): container finished" podID="3474dcb3-8f5b-4eed-ada7-ec711dae3b1a" containerID="f4daa17ef3f81eb15e49faf9923d47649cbb27674c0588e430e42b009f1926ac" exitCode=0 Dec 03 13:01:55 crc kubenswrapper[4986]: I1203 13:01:55.782845 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412780-2k56t" event={"ID":"3474dcb3-8f5b-4eed-ada7-ec711dae3b1a","Type":"ContainerDied","Data":"f4daa17ef3f81eb15e49faf9923d47649cbb27674c0588e430e42b009f1926ac"} Dec 03 13:01:55 crc kubenswrapper[4986]: I1203 13:01:55.784568 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-57d9976b48-g2h9h" event={"ID":"cd8c2aa7-50ef-450f-b223-94a252c0870c","Type":"ContainerStarted","Data":"d852ee8cb310fbb8ff5ebfec1493cdf657495f287ec25e320e8b882e09bb982c"} Dec 03 13:01:55 crc kubenswrapper[4986]: I1203 13:01:55.785637 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-57d9976b48-g2h9h" Dec 03 13:01:55 crc kubenswrapper[4986]: I1203 13:01:55.785918 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-df59944f9-hswk5" Dec 03 13:01:55 crc kubenswrapper[4986]: I1203 13:01:55.786677 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5b4bb77c4-bmtsq" Dec 03 13:01:55 crc kubenswrapper[4986]: I1203 13:01:55.790347 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-57d9976b48-g2h9h" Dec 03 13:01:55 crc kubenswrapper[4986]: I1203 13:01:55.801399 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-df59944f9-hswk5" podStartSLOduration=8.801372636 podStartE2EDuration="8.801372636s" podCreationTimestamp="2025-12-03 13:01:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:01:55.797061988 +0000 UTC m=+375.263493179" watchObservedRunningTime="2025-12-03 13:01:55.801372636 +0000 UTC m=+375.267803827" Dec 03 13:01:55 crc kubenswrapper[4986]: I1203 13:01:55.841695 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5b4bb77c4-bmtsq" podStartSLOduration=118.841673371 podStartE2EDuration="1m58.841673371s" podCreationTimestamp="2025-12-03 12:59:57 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:01:55.841046254 +0000 UTC m=+375.307477445" watchObservedRunningTime="2025-12-03 13:01:55.841673371 +0000 UTC m=+375.308104572" Dec 03 13:01:55 crc kubenswrapper[4986]: I1203 13:01:55.907017 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-57d9976b48-g2h9h" podStartSLOduration=8.906991582 podStartE2EDuration="8.906991582s" podCreationTimestamp="2025-12-03 13:01:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:01:55.903072194 +0000 UTC m=+375.369503375" watchObservedRunningTime="2025-12-03 13:01:55.906991582 +0000 UTC m=+375.373422773" Dec 03 13:01:57 crc kubenswrapper[4986]: I1203 13:01:57.060162 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412780-2k56t" Dec 03 13:01:57 crc kubenswrapper[4986]: I1203 13:01:57.191461 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3474dcb3-8f5b-4eed-ada7-ec711dae3b1a-config-volume\") pod \"3474dcb3-8f5b-4eed-ada7-ec711dae3b1a\" (UID: \"3474dcb3-8f5b-4eed-ada7-ec711dae3b1a\") " Dec 03 13:01:57 crc kubenswrapper[4986]: I1203 13:01:57.192216 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3474dcb3-8f5b-4eed-ada7-ec711dae3b1a-secret-volume\") pod \"3474dcb3-8f5b-4eed-ada7-ec711dae3b1a\" (UID: \"3474dcb3-8f5b-4eed-ada7-ec711dae3b1a\") " Dec 03 13:01:57 crc kubenswrapper[4986]: I1203 13:01:57.192163 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3474dcb3-8f5b-4eed-ada7-ec711dae3b1a-config-volume" 
(OuterVolumeSpecName: "config-volume") pod "3474dcb3-8f5b-4eed-ada7-ec711dae3b1a" (UID: "3474dcb3-8f5b-4eed-ada7-ec711dae3b1a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:01:57 crc kubenswrapper[4986]: I1203 13:01:57.192850 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9gzh\" (UniqueName: \"kubernetes.io/projected/3474dcb3-8f5b-4eed-ada7-ec711dae3b1a-kube-api-access-c9gzh\") pod \"3474dcb3-8f5b-4eed-ada7-ec711dae3b1a\" (UID: \"3474dcb3-8f5b-4eed-ada7-ec711dae3b1a\") " Dec 03 13:01:57 crc kubenswrapper[4986]: I1203 13:01:57.193124 4986 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3474dcb3-8f5b-4eed-ada7-ec711dae3b1a-config-volume\") on node \"crc\" DevicePath \"\"" Dec 03 13:01:57 crc kubenswrapper[4986]: I1203 13:01:57.196948 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3474dcb3-8f5b-4eed-ada7-ec711dae3b1a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3474dcb3-8f5b-4eed-ada7-ec711dae3b1a" (UID: "3474dcb3-8f5b-4eed-ada7-ec711dae3b1a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:01:57 crc kubenswrapper[4986]: I1203 13:01:57.197487 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3474dcb3-8f5b-4eed-ada7-ec711dae3b1a-kube-api-access-c9gzh" (OuterVolumeSpecName: "kube-api-access-c9gzh") pod "3474dcb3-8f5b-4eed-ada7-ec711dae3b1a" (UID: "3474dcb3-8f5b-4eed-ada7-ec711dae3b1a"). InnerVolumeSpecName "kube-api-access-c9gzh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:01:57 crc kubenswrapper[4986]: I1203 13:01:57.293891 4986 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3474dcb3-8f5b-4eed-ada7-ec711dae3b1a-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 03 13:01:57 crc kubenswrapper[4986]: I1203 13:01:57.293938 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9gzh\" (UniqueName: \"kubernetes.io/projected/3474dcb3-8f5b-4eed-ada7-ec711dae3b1a-kube-api-access-c9gzh\") on node \"crc\" DevicePath \"\"" Dec 03 13:01:57 crc kubenswrapper[4986]: I1203 13:01:57.796818 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412780-2k56t" event={"ID":"3474dcb3-8f5b-4eed-ada7-ec711dae3b1a","Type":"ContainerDied","Data":"6f385c1572da630d3b6e8d43273eae482122ca9d1ffaafb2253da570441bcc15"} Dec 03 13:01:57 crc kubenswrapper[4986]: I1203 13:01:57.796883 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f385c1572da630d3b6e8d43273eae482122ca9d1ffaafb2253da570441bcc15" Dec 03 13:01:57 crc kubenswrapper[4986]: I1203 13:01:57.796895 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412780-2k56t" Dec 03 13:02:02 crc kubenswrapper[4986]: I1203 13:02:02.308530 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wnfnb"] Dec 03 13:02:02 crc kubenswrapper[4986]: I1203 13:02:02.309358 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wnfnb" podUID="9e009819-9f9d-48db-a20b-dc29cef30887" containerName="registry-server" containerID="cri-o://d2af21c038905513a752bed20b95b4d5e9d30cab21f74aa4cd9212130f15534f" gracePeriod=2 Dec 03 13:02:03 crc kubenswrapper[4986]: I1203 13:02:03.491313 4986 patch_prober.go:28] interesting pod/machine-config-daemon-xggpj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 13:02:03 crc kubenswrapper[4986]: I1203 13:02:03.491377 4986 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 13:02:03 crc kubenswrapper[4986]: I1203 13:02:03.832752 4986 generic.go:334] "Generic (PLEG): container finished" podID="9e009819-9f9d-48db-a20b-dc29cef30887" containerID="d2af21c038905513a752bed20b95b4d5e9d30cab21f74aa4cd9212130f15534f" exitCode=0 Dec 03 13:02:03 crc kubenswrapper[4986]: I1203 13:02:03.832926 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wnfnb" event={"ID":"9e009819-9f9d-48db-a20b-dc29cef30887","Type":"ContainerDied","Data":"d2af21c038905513a752bed20b95b4d5e9d30cab21f74aa4cd9212130f15534f"} Dec 03 13:02:03 crc kubenswrapper[4986]: 
I1203 13:02:03.833049 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wnfnb" event={"ID":"9e009819-9f9d-48db-a20b-dc29cef30887","Type":"ContainerDied","Data":"267c6f74775a8b31d9caf0a8630fa48e20a5bcf3e5aebc5f58dbac27e9b12f38"} Dec 03 13:02:03 crc kubenswrapper[4986]: I1203 13:02:03.833063 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="267c6f74775a8b31d9caf0a8630fa48e20a5bcf3e5aebc5f58dbac27e9b12f38" Dec 03 13:02:03 crc kubenswrapper[4986]: I1203 13:02:03.854320 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wnfnb" Dec 03 13:02:03 crc kubenswrapper[4986]: I1203 13:02:03.875058 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2j9q5\" (UniqueName: \"kubernetes.io/projected/9e009819-9f9d-48db-a20b-dc29cef30887-kube-api-access-2j9q5\") pod \"9e009819-9f9d-48db-a20b-dc29cef30887\" (UID: \"9e009819-9f9d-48db-a20b-dc29cef30887\") " Dec 03 13:02:03 crc kubenswrapper[4986]: I1203 13:02:03.875144 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e009819-9f9d-48db-a20b-dc29cef30887-utilities\") pod \"9e009819-9f9d-48db-a20b-dc29cef30887\" (UID: \"9e009819-9f9d-48db-a20b-dc29cef30887\") " Dec 03 13:02:03 crc kubenswrapper[4986]: I1203 13:02:03.875168 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e009819-9f9d-48db-a20b-dc29cef30887-catalog-content\") pod \"9e009819-9f9d-48db-a20b-dc29cef30887\" (UID: \"9e009819-9f9d-48db-a20b-dc29cef30887\") " Dec 03 13:02:03 crc kubenswrapper[4986]: I1203 13:02:03.877442 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e009819-9f9d-48db-a20b-dc29cef30887-utilities" (OuterVolumeSpecName: 
"utilities") pod "9e009819-9f9d-48db-a20b-dc29cef30887" (UID: "9e009819-9f9d-48db-a20b-dc29cef30887"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:02:03 crc kubenswrapper[4986]: I1203 13:02:03.885840 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e009819-9f9d-48db-a20b-dc29cef30887-kube-api-access-2j9q5" (OuterVolumeSpecName: "kube-api-access-2j9q5") pod "9e009819-9f9d-48db-a20b-dc29cef30887" (UID: "9e009819-9f9d-48db-a20b-dc29cef30887"). InnerVolumeSpecName "kube-api-access-2j9q5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:02:03 crc kubenswrapper[4986]: I1203 13:02:03.976408 4986 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e009819-9f9d-48db-a20b-dc29cef30887-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 13:02:03 crc kubenswrapper[4986]: I1203 13:02:03.976453 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2j9q5\" (UniqueName: \"kubernetes.io/projected/9e009819-9f9d-48db-a20b-dc29cef30887-kube-api-access-2j9q5\") on node \"crc\" DevicePath \"\"" Dec 03 13:02:03 crc kubenswrapper[4986]: I1203 13:02:03.995816 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e009819-9f9d-48db-a20b-dc29cef30887-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9e009819-9f9d-48db-a20b-dc29cef30887" (UID: "9e009819-9f9d-48db-a20b-dc29cef30887"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:02:04 crc kubenswrapper[4986]: I1203 13:02:04.077551 4986 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e009819-9f9d-48db-a20b-dc29cef30887-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 13:02:04 crc kubenswrapper[4986]: I1203 13:02:04.837887 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wnfnb" Dec 03 13:02:04 crc kubenswrapper[4986]: I1203 13:02:04.873849 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wnfnb"] Dec 03 13:02:04 crc kubenswrapper[4986]: I1203 13:02:04.879263 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wnfnb"] Dec 03 13:02:04 crc kubenswrapper[4986]: E1203 13:02:04.921518 4986 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e009819_9f9d_48db_a20b_dc29cef30887.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e009819_9f9d_48db_a20b_dc29cef30887.slice/crio-267c6f74775a8b31d9caf0a8630fa48e20a5bcf3e5aebc5f58dbac27e9b12f38\": RecentStats: unable to find data in memory cache]" Dec 03 13:02:04 crc kubenswrapper[4986]: I1203 13:02:04.951833 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e009819-9f9d-48db-a20b-dc29cef30887" path="/var/lib/kubelet/pods/9e009819-9f9d-48db-a20b-dc29cef30887/volumes" Dec 03 13:02:07 crc kubenswrapper[4986]: I1203 13:02:07.141621 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-t9p5n"] Dec 03 13:02:07 crc kubenswrapper[4986]: E1203 13:02:07.141875 4986 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="9e009819-9f9d-48db-a20b-dc29cef30887" containerName="registry-server" Dec 03 13:02:07 crc kubenswrapper[4986]: I1203 13:02:07.141892 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e009819-9f9d-48db-a20b-dc29cef30887" containerName="registry-server" Dec 03 13:02:07 crc kubenswrapper[4986]: E1203 13:02:07.141906 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e009819-9f9d-48db-a20b-dc29cef30887" containerName="extract-utilities" Dec 03 13:02:07 crc kubenswrapper[4986]: I1203 13:02:07.141915 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e009819-9f9d-48db-a20b-dc29cef30887" containerName="extract-utilities" Dec 03 13:02:07 crc kubenswrapper[4986]: E1203 13:02:07.141924 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e009819-9f9d-48db-a20b-dc29cef30887" containerName="extract-content" Dec 03 13:02:07 crc kubenswrapper[4986]: I1203 13:02:07.141932 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e009819-9f9d-48db-a20b-dc29cef30887" containerName="extract-content" Dec 03 13:02:07 crc kubenswrapper[4986]: E1203 13:02:07.141950 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3474dcb3-8f5b-4eed-ada7-ec711dae3b1a" containerName="collect-profiles" Dec 03 13:02:07 crc kubenswrapper[4986]: I1203 13:02:07.141958 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="3474dcb3-8f5b-4eed-ada7-ec711dae3b1a" containerName="collect-profiles" Dec 03 13:02:07 crc kubenswrapper[4986]: I1203 13:02:07.142073 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e009819-9f9d-48db-a20b-dc29cef30887" containerName="registry-server" Dec 03 13:02:07 crc kubenswrapper[4986]: I1203 13:02:07.142088 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="3474dcb3-8f5b-4eed-ada7-ec711dae3b1a" containerName="collect-profiles" Dec 03 13:02:07 crc kubenswrapper[4986]: I1203 13:02:07.142561 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-t9p5n" Dec 03 13:02:07 crc kubenswrapper[4986]: I1203 13:02:07.228461 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-t9p5n\" (UID: \"27f332b0-d249-47d0-a7f5-0523752c8b99\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9p5n" Dec 03 13:02:07 crc kubenswrapper[4986]: I1203 13:02:07.228511 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/27f332b0-d249-47d0-a7f5-0523752c8b99-bound-sa-token\") pod \"image-registry-66df7c8f76-t9p5n\" (UID: \"27f332b0-d249-47d0-a7f5-0523752c8b99\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9p5n" Dec 03 13:02:07 crc kubenswrapper[4986]: I1203 13:02:07.228544 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/27f332b0-d249-47d0-a7f5-0523752c8b99-registry-tls\") pod \"image-registry-66df7c8f76-t9p5n\" (UID: \"27f332b0-d249-47d0-a7f5-0523752c8b99\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9p5n" Dec 03 13:02:07 crc kubenswrapper[4986]: I1203 13:02:07.228597 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldzb5\" (UniqueName: \"kubernetes.io/projected/27f332b0-d249-47d0-a7f5-0523752c8b99-kube-api-access-ldzb5\") pod \"image-registry-66df7c8f76-t9p5n\" (UID: \"27f332b0-d249-47d0-a7f5-0523752c8b99\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9p5n" Dec 03 13:02:07 crc kubenswrapper[4986]: I1203 13:02:07.228633 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/27f332b0-d249-47d0-a7f5-0523752c8b99-registry-certificates\") pod \"image-registry-66df7c8f76-t9p5n\" (UID: \"27f332b0-d249-47d0-a7f5-0523752c8b99\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9p5n" Dec 03 13:02:07 crc kubenswrapper[4986]: I1203 13:02:07.228650 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/27f332b0-d249-47d0-a7f5-0523752c8b99-ca-trust-extracted\") pod \"image-registry-66df7c8f76-t9p5n\" (UID: \"27f332b0-d249-47d0-a7f5-0523752c8b99\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9p5n" Dec 03 13:02:07 crc kubenswrapper[4986]: I1203 13:02:07.228670 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/27f332b0-d249-47d0-a7f5-0523752c8b99-installation-pull-secrets\") pod \"image-registry-66df7c8f76-t9p5n\" (UID: \"27f332b0-d249-47d0-a7f5-0523752c8b99\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9p5n" Dec 03 13:02:07 crc kubenswrapper[4986]: I1203 13:02:07.228692 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/27f332b0-d249-47d0-a7f5-0523752c8b99-trusted-ca\") pod \"image-registry-66df7c8f76-t9p5n\" (UID: \"27f332b0-d249-47d0-a7f5-0523752c8b99\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9p5n" Dec 03 13:02:07 crc kubenswrapper[4986]: I1203 13:02:07.243870 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-t9p5n"] Dec 03 13:02:07 crc kubenswrapper[4986]: I1203 13:02:07.265615 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-t9p5n\" (UID: \"27f332b0-d249-47d0-a7f5-0523752c8b99\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9p5n" Dec 03 13:02:07 crc kubenswrapper[4986]: I1203 13:02:07.329854 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldzb5\" (UniqueName: \"kubernetes.io/projected/27f332b0-d249-47d0-a7f5-0523752c8b99-kube-api-access-ldzb5\") pod \"image-registry-66df7c8f76-t9p5n\" (UID: \"27f332b0-d249-47d0-a7f5-0523752c8b99\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9p5n" Dec 03 13:02:07 crc kubenswrapper[4986]: I1203 13:02:07.329915 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/27f332b0-d249-47d0-a7f5-0523752c8b99-registry-certificates\") pod \"image-registry-66df7c8f76-t9p5n\" (UID: \"27f332b0-d249-47d0-a7f5-0523752c8b99\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9p5n" Dec 03 13:02:07 crc kubenswrapper[4986]: I1203 13:02:07.329934 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/27f332b0-d249-47d0-a7f5-0523752c8b99-ca-trust-extracted\") pod \"image-registry-66df7c8f76-t9p5n\" (UID: \"27f332b0-d249-47d0-a7f5-0523752c8b99\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9p5n" Dec 03 13:02:07 crc kubenswrapper[4986]: I1203 13:02:07.329956 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/27f332b0-d249-47d0-a7f5-0523752c8b99-installation-pull-secrets\") pod \"image-registry-66df7c8f76-t9p5n\" (UID: \"27f332b0-d249-47d0-a7f5-0523752c8b99\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9p5n" Dec 03 13:02:07 crc kubenswrapper[4986]: I1203 
13:02:07.329974 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/27f332b0-d249-47d0-a7f5-0523752c8b99-trusted-ca\") pod \"image-registry-66df7c8f76-t9p5n\" (UID: \"27f332b0-d249-47d0-a7f5-0523752c8b99\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9p5n" Dec 03 13:02:07 crc kubenswrapper[4986]: I1203 13:02:07.329995 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/27f332b0-d249-47d0-a7f5-0523752c8b99-bound-sa-token\") pod \"image-registry-66df7c8f76-t9p5n\" (UID: \"27f332b0-d249-47d0-a7f5-0523752c8b99\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9p5n" Dec 03 13:02:07 crc kubenswrapper[4986]: I1203 13:02:07.330017 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/27f332b0-d249-47d0-a7f5-0523752c8b99-registry-tls\") pod \"image-registry-66df7c8f76-t9p5n\" (UID: \"27f332b0-d249-47d0-a7f5-0523752c8b99\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9p5n" Dec 03 13:02:07 crc kubenswrapper[4986]: I1203 13:02:07.330750 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/27f332b0-d249-47d0-a7f5-0523752c8b99-ca-trust-extracted\") pod \"image-registry-66df7c8f76-t9p5n\" (UID: \"27f332b0-d249-47d0-a7f5-0523752c8b99\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9p5n" Dec 03 13:02:07 crc kubenswrapper[4986]: I1203 13:02:07.331245 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/27f332b0-d249-47d0-a7f5-0523752c8b99-registry-certificates\") pod \"image-registry-66df7c8f76-t9p5n\" (UID: \"27f332b0-d249-47d0-a7f5-0523752c8b99\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9p5n" 
Dec 03 13:02:07 crc kubenswrapper[4986]: I1203 13:02:07.332419 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/27f332b0-d249-47d0-a7f5-0523752c8b99-trusted-ca\") pod \"image-registry-66df7c8f76-t9p5n\" (UID: \"27f332b0-d249-47d0-a7f5-0523752c8b99\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9p5n" Dec 03 13:02:07 crc kubenswrapper[4986]: I1203 13:02:07.335126 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/27f332b0-d249-47d0-a7f5-0523752c8b99-installation-pull-secrets\") pod \"image-registry-66df7c8f76-t9p5n\" (UID: \"27f332b0-d249-47d0-a7f5-0523752c8b99\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9p5n" Dec 03 13:02:07 crc kubenswrapper[4986]: I1203 13:02:07.335276 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/27f332b0-d249-47d0-a7f5-0523752c8b99-registry-tls\") pod \"image-registry-66df7c8f76-t9p5n\" (UID: \"27f332b0-d249-47d0-a7f5-0523752c8b99\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9p5n" Dec 03 13:02:07 crc kubenswrapper[4986]: I1203 13:02:07.347618 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/27f332b0-d249-47d0-a7f5-0523752c8b99-bound-sa-token\") pod \"image-registry-66df7c8f76-t9p5n\" (UID: \"27f332b0-d249-47d0-a7f5-0523752c8b99\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9p5n" Dec 03 13:02:07 crc kubenswrapper[4986]: I1203 13:02:07.347840 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldzb5\" (UniqueName: \"kubernetes.io/projected/27f332b0-d249-47d0-a7f5-0523752c8b99-kube-api-access-ldzb5\") pod \"image-registry-66df7c8f76-t9p5n\" (UID: \"27f332b0-d249-47d0-a7f5-0523752c8b99\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-t9p5n" Dec 03 13:02:07 crc kubenswrapper[4986]: I1203 13:02:07.462559 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-t9p5n" Dec 03 13:02:07 crc kubenswrapper[4986]: I1203 13:02:07.910996 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-t9p5n"] Dec 03 13:02:08 crc kubenswrapper[4986]: I1203 13:02:08.860955 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-t9p5n" event={"ID":"27f332b0-d249-47d0-a7f5-0523752c8b99","Type":"ContainerStarted","Data":"a067db5e712ef05c311f49e14534fd29b319591a70d82d6e3523ddb16963d9b9"} Dec 03 13:02:08 crc kubenswrapper[4986]: I1203 13:02:08.861982 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-t9p5n" event={"ID":"27f332b0-d249-47d0-a7f5-0523752c8b99","Type":"ContainerStarted","Data":"48d2bc258cee8a72411483a7c1ec5afcb418bf15646a19682edd72a9cd076709"} Dec 03 13:02:08 crc kubenswrapper[4986]: I1203 13:02:08.862071 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-t9p5n" Dec 03 13:02:08 crc kubenswrapper[4986]: I1203 13:02:08.880248 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-t9p5n" podStartSLOduration=1.8802260689999999 podStartE2EDuration="1.880226069s" podCreationTimestamp="2025-12-03 13:02:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:02:08.879966512 +0000 UTC m=+388.346397713" watchObservedRunningTime="2025-12-03 13:02:08.880226069 +0000 UTC m=+388.346657260" Dec 03 13:02:24 crc kubenswrapper[4986]: I1203 13:02:24.782045 4986 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-marketplace/certified-operators-vzt5m"] Dec 03 13:02:24 crc kubenswrapper[4986]: I1203 13:02:24.783034 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vzt5m" podUID="d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f" containerName="registry-server" containerID="cri-o://eea71fc2fcb5242dbe6cf6f1a3dc20caaaa2f85d3ded122858372e2375318af1" gracePeriod=30 Dec 03 13:02:24 crc kubenswrapper[4986]: I1203 13:02:24.792971 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b8hxm"] Dec 03 13:02:24 crc kubenswrapper[4986]: I1203 13:02:24.793539 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-b8hxm" podUID="7aceb259-65a5-45a6-acd1-8f5cac430ef7" containerName="registry-server" containerID="cri-o://9ac0de0f8cd3e208de5a97070e4f0a781356be67cc3626ae845a1a813caf696d" gracePeriod=30 Dec 03 13:02:24 crc kubenswrapper[4986]: I1203 13:02:24.807775 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zzj7p"] Dec 03 13:02:24 crc kubenswrapper[4986]: I1203 13:02:24.808061 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-zzj7p" podUID="89c50427-14ae-409d-89d5-a56be0ff97d1" containerName="marketplace-operator" containerID="cri-o://cbe598634a3b4496336e84f38190312fc9b1e457486521a5b8162e4810fe4b5b" gracePeriod=30 Dec 03 13:02:24 crc kubenswrapper[4986]: I1203 13:02:24.816380 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l6bpp"] Dec 03 13:02:24 crc kubenswrapper[4986]: I1203 13:02:24.816655 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-l6bpp" podUID="b2dbfdc4-7122-4b7f-bfdd-189396fb1c77" 
containerName="registry-server" containerID="cri-o://a984f95e59e23e7eb127a6998aed6915ec511ae26702fb7a4c4e6b7f312754d2" gracePeriod=30 Dec 03 13:02:24 crc kubenswrapper[4986]: I1203 13:02:24.829037 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7lnmw"] Dec 03 13:02:24 crc kubenswrapper[4986]: I1203 13:02:24.829943 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7lnmw" Dec 03 13:02:24 crc kubenswrapper[4986]: I1203 13:02:24.833859 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dgwd8"] Dec 03 13:02:24 crc kubenswrapper[4986]: I1203 13:02:24.834119 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dgwd8" podUID="49d6e247-25ce-45e1-b2fe-2e3ec70cf966" containerName="registry-server" containerID="cri-o://ef9c551367e0d6d4d7c142f98707fdb054ce586d24cef4af590745134e6e6a3c" gracePeriod=30 Dec 03 13:02:24 crc kubenswrapper[4986]: I1203 13:02:24.849641 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7lnmw"] Dec 03 13:02:24 crc kubenswrapper[4986]: I1203 13:02:24.960450 4986 generic.go:334] "Generic (PLEG): container finished" podID="d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f" containerID="eea71fc2fcb5242dbe6cf6f1a3dc20caaaa2f85d3ded122858372e2375318af1" exitCode=0 Dec 03 13:02:24 crc kubenswrapper[4986]: I1203 13:02:24.960541 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vzt5m" event={"ID":"d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f","Type":"ContainerDied","Data":"eea71fc2fcb5242dbe6cf6f1a3dc20caaaa2f85d3ded122858372e2375318af1"} Dec 03 13:02:24 crc kubenswrapper[4986]: I1203 13:02:24.963148 4986 generic.go:334] "Generic (PLEG): container finished" podID="89c50427-14ae-409d-89d5-a56be0ff97d1" 
containerID="cbe598634a3b4496336e84f38190312fc9b1e457486521a5b8162e4810fe4b5b" exitCode=0 Dec 03 13:02:24 crc kubenswrapper[4986]: I1203 13:02:24.963189 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zzj7p" event={"ID":"89c50427-14ae-409d-89d5-a56be0ff97d1","Type":"ContainerDied","Data":"cbe598634a3b4496336e84f38190312fc9b1e457486521a5b8162e4810fe4b5b"} Dec 03 13:02:24 crc kubenswrapper[4986]: I1203 13:02:24.965114 4986 generic.go:334] "Generic (PLEG): container finished" podID="49d6e247-25ce-45e1-b2fe-2e3ec70cf966" containerID="ef9c551367e0d6d4d7c142f98707fdb054ce586d24cef4af590745134e6e6a3c" exitCode=0 Dec 03 13:02:24 crc kubenswrapper[4986]: I1203 13:02:24.965149 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dgwd8" event={"ID":"49d6e247-25ce-45e1-b2fe-2e3ec70cf966","Type":"ContainerDied","Data":"ef9c551367e0d6d4d7c142f98707fdb054ce586d24cef4af590745134e6e6a3c"} Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.014728 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvt59\" (UniqueName: \"kubernetes.io/projected/4083ec9d-ae1e-4b92-955d-7b2c3ee874c7-kube-api-access-lvt59\") pod \"marketplace-operator-79b997595-7lnmw\" (UID: \"4083ec9d-ae1e-4b92-955d-7b2c3ee874c7\") " pod="openshift-marketplace/marketplace-operator-79b997595-7lnmw" Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.014832 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4083ec9d-ae1e-4b92-955d-7b2c3ee874c7-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7lnmw\" (UID: \"4083ec9d-ae1e-4b92-955d-7b2c3ee874c7\") " pod="openshift-marketplace/marketplace-operator-79b997595-7lnmw" Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.014870 4986 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4083ec9d-ae1e-4b92-955d-7b2c3ee874c7-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7lnmw\" (UID: \"4083ec9d-ae1e-4b92-955d-7b2c3ee874c7\") " pod="openshift-marketplace/marketplace-operator-79b997595-7lnmw" Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.115827 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvt59\" (UniqueName: \"kubernetes.io/projected/4083ec9d-ae1e-4b92-955d-7b2c3ee874c7-kube-api-access-lvt59\") pod \"marketplace-operator-79b997595-7lnmw\" (UID: \"4083ec9d-ae1e-4b92-955d-7b2c3ee874c7\") " pod="openshift-marketplace/marketplace-operator-79b997595-7lnmw" Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.115904 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4083ec9d-ae1e-4b92-955d-7b2c3ee874c7-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7lnmw\" (UID: \"4083ec9d-ae1e-4b92-955d-7b2c3ee874c7\") " pod="openshift-marketplace/marketplace-operator-79b997595-7lnmw" Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.115927 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4083ec9d-ae1e-4b92-955d-7b2c3ee874c7-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7lnmw\" (UID: \"4083ec9d-ae1e-4b92-955d-7b2c3ee874c7\") " pod="openshift-marketplace/marketplace-operator-79b997595-7lnmw" Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.117613 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4083ec9d-ae1e-4b92-955d-7b2c3ee874c7-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7lnmw\" (UID: 
\"4083ec9d-ae1e-4b92-955d-7b2c3ee874c7\") " pod="openshift-marketplace/marketplace-operator-79b997595-7lnmw" Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.122337 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4083ec9d-ae1e-4b92-955d-7b2c3ee874c7-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7lnmw\" (UID: \"4083ec9d-ae1e-4b92-955d-7b2c3ee874c7\") " pod="openshift-marketplace/marketplace-operator-79b997595-7lnmw" Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.134026 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvt59\" (UniqueName: \"kubernetes.io/projected/4083ec9d-ae1e-4b92-955d-7b2c3ee874c7-kube-api-access-lvt59\") pod \"marketplace-operator-79b997595-7lnmw\" (UID: \"4083ec9d-ae1e-4b92-955d-7b2c3ee874c7\") " pod="openshift-marketplace/marketplace-operator-79b997595-7lnmw" Dec 03 13:02:25 crc kubenswrapper[4986]: E1203 13:02:25.212242 4986 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7aceb259_65a5_45a6_acd1_8f5cac430ef7.slice/crio-conmon-9ac0de0f8cd3e208de5a97070e4f0a781356be67cc3626ae845a1a813caf696d.scope\": RecentStats: unable to find data in memory cache]" Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.364831 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7lnmw" Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.373044 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zzj7p" Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.411645 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vzt5m" Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.453273 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dgwd8" Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.520778 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/89c50427-14ae-409d-89d5-a56be0ff97d1-marketplace-trusted-ca\") pod \"89c50427-14ae-409d-89d5-a56be0ff97d1\" (UID: \"89c50427-14ae-409d-89d5-a56be0ff97d1\") " Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.520829 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f-utilities\") pod \"d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f\" (UID: \"d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f\") " Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.520908 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vzb6\" (UniqueName: \"kubernetes.io/projected/d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f-kube-api-access-7vzb6\") pod \"d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f\" (UID: \"d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f\") " Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.520939 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/89c50427-14ae-409d-89d5-a56be0ff97d1-marketplace-operator-metrics\") pod \"89c50427-14ae-409d-89d5-a56be0ff97d1\" (UID: \"89c50427-14ae-409d-89d5-a56be0ff97d1\") " Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.520972 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f-catalog-content\") 
pod \"d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f\" (UID: \"d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f\") " Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.521017 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lcvf6\" (UniqueName: \"kubernetes.io/projected/89c50427-14ae-409d-89d5-a56be0ff97d1-kube-api-access-lcvf6\") pod \"89c50427-14ae-409d-89d5-a56be0ff97d1\" (UID: \"89c50427-14ae-409d-89d5-a56be0ff97d1\") " Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.521991 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f-utilities" (OuterVolumeSpecName: "utilities") pod "d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f" (UID: "d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.524006 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b8hxm" Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.526445 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89c50427-14ae-409d-89d5-a56be0ff97d1-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "89c50427-14ae-409d-89d5-a56be0ff97d1" (UID: "89c50427-14ae-409d-89d5-a56be0ff97d1"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.528443 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89c50427-14ae-409d-89d5-a56be0ff97d1-kube-api-access-lcvf6" (OuterVolumeSpecName: "kube-api-access-lcvf6") pod "89c50427-14ae-409d-89d5-a56be0ff97d1" (UID: "89c50427-14ae-409d-89d5-a56be0ff97d1"). InnerVolumeSpecName "kube-api-access-lcvf6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.528666 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f-kube-api-access-7vzb6" (OuterVolumeSpecName: "kube-api-access-7vzb6") pod "d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f" (UID: "d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f"). InnerVolumeSpecName "kube-api-access-7vzb6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.531449 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89c50427-14ae-409d-89d5-a56be0ff97d1-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "89c50427-14ae-409d-89d5-a56be0ff97d1" (UID: "89c50427-14ae-409d-89d5-a56be0ff97d1"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.595996 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f" (UID: "d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.622912 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7aceb259-65a5-45a6-acd1-8f5cac430ef7-catalog-content\") pod \"7aceb259-65a5-45a6-acd1-8f5cac430ef7\" (UID: \"7aceb259-65a5-45a6-acd1-8f5cac430ef7\") " Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.623255 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7aceb259-65a5-45a6-acd1-8f5cac430ef7-utilities\") pod \"7aceb259-65a5-45a6-acd1-8f5cac430ef7\" (UID: \"7aceb259-65a5-45a6-acd1-8f5cac430ef7\") " Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.623337 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49d6e247-25ce-45e1-b2fe-2e3ec70cf966-utilities\") pod \"49d6e247-25ce-45e1-b2fe-2e3ec70cf966\" (UID: \"49d6e247-25ce-45e1-b2fe-2e3ec70cf966\") " Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.623379 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdc29\" (UniqueName: \"kubernetes.io/projected/49d6e247-25ce-45e1-b2fe-2e3ec70cf966-kube-api-access-zdc29\") pod \"49d6e247-25ce-45e1-b2fe-2e3ec70cf966\" (UID: \"49d6e247-25ce-45e1-b2fe-2e3ec70cf966\") " Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.623406 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lfms\" (UniqueName: \"kubernetes.io/projected/7aceb259-65a5-45a6-acd1-8f5cac430ef7-kube-api-access-6lfms\") pod \"7aceb259-65a5-45a6-acd1-8f5cac430ef7\" (UID: \"7aceb259-65a5-45a6-acd1-8f5cac430ef7\") " Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.623473 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49d6e247-25ce-45e1-b2fe-2e3ec70cf966-catalog-content\") pod \"49d6e247-25ce-45e1-b2fe-2e3ec70cf966\" (UID: \"49d6e247-25ce-45e1-b2fe-2e3ec70cf966\") " Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.623765 4986 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/89c50427-14ae-409d-89d5-a56be0ff97d1-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.623836 4986 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.623854 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vzb6\" (UniqueName: \"kubernetes.io/projected/d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f-kube-api-access-7vzb6\") on node \"crc\" DevicePath \"\"" Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.623868 4986 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/89c50427-14ae-409d-89d5-a56be0ff97d1-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.623881 4986 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.623893 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lcvf6\" (UniqueName: \"kubernetes.io/projected/89c50427-14ae-409d-89d5-a56be0ff97d1-kube-api-access-lcvf6\") on node \"crc\" DevicePath \"\"" Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.625264 4986 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/49d6e247-25ce-45e1-b2fe-2e3ec70cf966-utilities" (OuterVolumeSpecName: "utilities") pod "49d6e247-25ce-45e1-b2fe-2e3ec70cf966" (UID: "49d6e247-25ce-45e1-b2fe-2e3ec70cf966"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.627231 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49d6e247-25ce-45e1-b2fe-2e3ec70cf966-kube-api-access-zdc29" (OuterVolumeSpecName: "kube-api-access-zdc29") pod "49d6e247-25ce-45e1-b2fe-2e3ec70cf966" (UID: "49d6e247-25ce-45e1-b2fe-2e3ec70cf966"). InnerVolumeSpecName "kube-api-access-zdc29". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.628885 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7aceb259-65a5-45a6-acd1-8f5cac430ef7-utilities" (OuterVolumeSpecName: "utilities") pod "7aceb259-65a5-45a6-acd1-8f5cac430ef7" (UID: "7aceb259-65a5-45a6-acd1-8f5cac430ef7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.629808 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7aceb259-65a5-45a6-acd1-8f5cac430ef7-kube-api-access-6lfms" (OuterVolumeSpecName: "kube-api-access-6lfms") pod "7aceb259-65a5-45a6-acd1-8f5cac430ef7" (UID: "7aceb259-65a5-45a6-acd1-8f5cac430ef7"). InnerVolumeSpecName "kube-api-access-6lfms". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.691493 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7aceb259-65a5-45a6-acd1-8f5cac430ef7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7aceb259-65a5-45a6-acd1-8f5cac430ef7" (UID: "7aceb259-65a5-45a6-acd1-8f5cac430ef7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.725650 4986 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7aceb259-65a5-45a6-acd1-8f5cac430ef7-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.725921 4986 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7aceb259-65a5-45a6-acd1-8f5cac430ef7-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.726054 4986 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49d6e247-25ce-45e1-b2fe-2e3ec70cf966-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.726144 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdc29\" (UniqueName: \"kubernetes.io/projected/49d6e247-25ce-45e1-b2fe-2e3ec70cf966-kube-api-access-zdc29\") on node \"crc\" DevicePath \"\"" Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.726228 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lfms\" (UniqueName: \"kubernetes.io/projected/7aceb259-65a5-45a6-acd1-8f5cac430ef7-kube-api-access-6lfms\") on node \"crc\" DevicePath \"\"" Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.795429 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l6bpp" Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.801188 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49d6e247-25ce-45e1-b2fe-2e3ec70cf966-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "49d6e247-25ce-45e1-b2fe-2e3ec70cf966" (UID: "49d6e247-25ce-45e1-b2fe-2e3ec70cf966"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.827681 4986 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49d6e247-25ce-45e1-b2fe-2e3ec70cf966-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.918891 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7lnmw"] Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.928247 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2dbfdc4-7122-4b7f-bfdd-189396fb1c77-catalog-content\") pod \"b2dbfdc4-7122-4b7f-bfdd-189396fb1c77\" (UID: \"b2dbfdc4-7122-4b7f-bfdd-189396fb1c77\") " Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.928357 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2dbfdc4-7122-4b7f-bfdd-189396fb1c77-utilities\") pod \"b2dbfdc4-7122-4b7f-bfdd-189396fb1c77\" (UID: \"b2dbfdc4-7122-4b7f-bfdd-189396fb1c77\") " Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.928450 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg2bf\" (UniqueName: \"kubernetes.io/projected/b2dbfdc4-7122-4b7f-bfdd-189396fb1c77-kube-api-access-qg2bf\") pod \"b2dbfdc4-7122-4b7f-bfdd-189396fb1c77\" 
(UID: \"b2dbfdc4-7122-4b7f-bfdd-189396fb1c77\") " Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.931325 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2dbfdc4-7122-4b7f-bfdd-189396fb1c77-utilities" (OuterVolumeSpecName: "utilities") pod "b2dbfdc4-7122-4b7f-bfdd-189396fb1c77" (UID: "b2dbfdc4-7122-4b7f-bfdd-189396fb1c77"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.931859 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2dbfdc4-7122-4b7f-bfdd-189396fb1c77-kube-api-access-qg2bf" (OuterVolumeSpecName: "kube-api-access-qg2bf") pod "b2dbfdc4-7122-4b7f-bfdd-189396fb1c77" (UID: "b2dbfdc4-7122-4b7f-bfdd-189396fb1c77"). InnerVolumeSpecName "kube-api-access-qg2bf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.954468 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2dbfdc4-7122-4b7f-bfdd-189396fb1c77-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b2dbfdc4-7122-4b7f-bfdd-189396fb1c77" (UID: "b2dbfdc4-7122-4b7f-bfdd-189396fb1c77"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.976334 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dgwd8" event={"ID":"49d6e247-25ce-45e1-b2fe-2e3ec70cf966","Type":"ContainerDied","Data":"fbdbe1453f51797ceb196b9a30e5190dcf3b02b3804433a8fb1be00b9d83f943"} Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.976391 4986 scope.go:117] "RemoveContainer" containerID="ef9c551367e0d6d4d7c142f98707fdb054ce586d24cef4af590745134e6e6a3c" Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.976543 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dgwd8" Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.986477 4986 generic.go:334] "Generic (PLEG): container finished" podID="7aceb259-65a5-45a6-acd1-8f5cac430ef7" containerID="9ac0de0f8cd3e208de5a97070e4f0a781356be67cc3626ae845a1a813caf696d" exitCode=0 Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.986530 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b8hxm" event={"ID":"7aceb259-65a5-45a6-acd1-8f5cac430ef7","Type":"ContainerDied","Data":"9ac0de0f8cd3e208de5a97070e4f0a781356be67cc3626ae845a1a813caf696d"} Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.986551 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b8hxm" event={"ID":"7aceb259-65a5-45a6-acd1-8f5cac430ef7","Type":"ContainerDied","Data":"ff6b9e42c6206e87a652f64d1ca6f411d95b6f7e31694092eb96ae93c410d4d0"} Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.986616 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b8hxm" Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.996006 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vzt5m" event={"ID":"d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f","Type":"ContainerDied","Data":"d95b71dee3b196c08e1cbd678bcf27d6d8ff83c64bbbfed0b029bfe39b811381"} Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.996075 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vzt5m" Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.997130 4986 scope.go:117] "RemoveContainer" containerID="e4f8b2a4b2924c3654d9f4f01b9a9164e507313f9ade81c18fc0992782e3fae0" Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.998602 4986 generic.go:334] "Generic (PLEG): container finished" podID="b2dbfdc4-7122-4b7f-bfdd-189396fb1c77" containerID="a984f95e59e23e7eb127a6998aed6915ec511ae26702fb7a4c4e6b7f312754d2" exitCode=0 Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.998643 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l6bpp" event={"ID":"b2dbfdc4-7122-4b7f-bfdd-189396fb1c77","Type":"ContainerDied","Data":"a984f95e59e23e7eb127a6998aed6915ec511ae26702fb7a4c4e6b7f312754d2"} Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.998662 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l6bpp" event={"ID":"b2dbfdc4-7122-4b7f-bfdd-189396fb1c77","Type":"ContainerDied","Data":"b762f79cfd0d25c29b9ecd73711b93125579fe42872cb3ef32070ece702a7568"} Dec 03 13:02:25 crc kubenswrapper[4986]: I1203 13:02:25.998705 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l6bpp" Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.007090 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zzj7p" event={"ID":"89c50427-14ae-409d-89d5-a56be0ff97d1","Type":"ContainerDied","Data":"262b154a2a06696d6dfd69537f654aa2cc7785b261d0518d2549541125e0a72b"} Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.007156 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zzj7p" Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.007874 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7lnmw" event={"ID":"4083ec9d-ae1e-4b92-955d-7b2c3ee874c7","Type":"ContainerStarted","Data":"8c5ec06ad41039da224ef9a84911aa16933b5ac11171db094bc285db113d23d8"} Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.030099 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg2bf\" (UniqueName: \"kubernetes.io/projected/b2dbfdc4-7122-4b7f-bfdd-189396fb1c77-kube-api-access-qg2bf\") on node \"crc\" DevicePath \"\"" Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.030124 4986 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2dbfdc4-7122-4b7f-bfdd-189396fb1c77-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.030134 4986 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2dbfdc4-7122-4b7f-bfdd-189396fb1c77-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.053563 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dgwd8"] Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.057136 
4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dgwd8"] Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.057954 4986 scope.go:117] "RemoveContainer" containerID="16316f9bdaba0473467c5d11d4b3f1a15b4fe15563aaa01dabbfa0fb2ebc2fb3" Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.065112 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b8hxm"] Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.069948 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-b8hxm"] Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.082466 4986 scope.go:117] "RemoveContainer" containerID="9ac0de0f8cd3e208de5a97070e4f0a781356be67cc3626ae845a1a813caf696d" Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.082590 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vzt5m"] Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.087274 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vzt5m"] Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.099243 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l6bpp"] Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.108819 4986 scope.go:117] "RemoveContainer" containerID="1d721901368a693b8a429157c87785417e12ed63daa1ecb7ce1c7f2a2be8247e" Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.110458 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-l6bpp"] Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.114262 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zzj7p"] Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.117053 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/marketplace-operator-79b997595-zzj7p"] Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.125008 4986 scope.go:117] "RemoveContainer" containerID="edd9e55fae83b81e03dbd699e8b1347bb1a6471bfe3045f1fdca49370e38d7f4" Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.138113 4986 scope.go:117] "RemoveContainer" containerID="9ac0de0f8cd3e208de5a97070e4f0a781356be67cc3626ae845a1a813caf696d" Dec 03 13:02:26 crc kubenswrapper[4986]: E1203 13:02:26.138611 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ac0de0f8cd3e208de5a97070e4f0a781356be67cc3626ae845a1a813caf696d\": container with ID starting with 9ac0de0f8cd3e208de5a97070e4f0a781356be67cc3626ae845a1a813caf696d not found: ID does not exist" containerID="9ac0de0f8cd3e208de5a97070e4f0a781356be67cc3626ae845a1a813caf696d" Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.138649 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ac0de0f8cd3e208de5a97070e4f0a781356be67cc3626ae845a1a813caf696d"} err="failed to get container status \"9ac0de0f8cd3e208de5a97070e4f0a781356be67cc3626ae845a1a813caf696d\": rpc error: code = NotFound desc = could not find container \"9ac0de0f8cd3e208de5a97070e4f0a781356be67cc3626ae845a1a813caf696d\": container with ID starting with 9ac0de0f8cd3e208de5a97070e4f0a781356be67cc3626ae845a1a813caf696d not found: ID does not exist" Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.138694 4986 scope.go:117] "RemoveContainer" containerID="1d721901368a693b8a429157c87785417e12ed63daa1ecb7ce1c7f2a2be8247e" Dec 03 13:02:26 crc kubenswrapper[4986]: E1203 13:02:26.139250 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d721901368a693b8a429157c87785417e12ed63daa1ecb7ce1c7f2a2be8247e\": container with ID starting with 
1d721901368a693b8a429157c87785417e12ed63daa1ecb7ce1c7f2a2be8247e not found: ID does not exist" containerID="1d721901368a693b8a429157c87785417e12ed63daa1ecb7ce1c7f2a2be8247e" Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.139330 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d721901368a693b8a429157c87785417e12ed63daa1ecb7ce1c7f2a2be8247e"} err="failed to get container status \"1d721901368a693b8a429157c87785417e12ed63daa1ecb7ce1c7f2a2be8247e\": rpc error: code = NotFound desc = could not find container \"1d721901368a693b8a429157c87785417e12ed63daa1ecb7ce1c7f2a2be8247e\": container with ID starting with 1d721901368a693b8a429157c87785417e12ed63daa1ecb7ce1c7f2a2be8247e not found: ID does not exist" Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.139360 4986 scope.go:117] "RemoveContainer" containerID="edd9e55fae83b81e03dbd699e8b1347bb1a6471bfe3045f1fdca49370e38d7f4" Dec 03 13:02:26 crc kubenswrapper[4986]: E1203 13:02:26.139642 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edd9e55fae83b81e03dbd699e8b1347bb1a6471bfe3045f1fdca49370e38d7f4\": container with ID starting with edd9e55fae83b81e03dbd699e8b1347bb1a6471bfe3045f1fdca49370e38d7f4 not found: ID does not exist" containerID="edd9e55fae83b81e03dbd699e8b1347bb1a6471bfe3045f1fdca49370e38d7f4" Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.139677 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edd9e55fae83b81e03dbd699e8b1347bb1a6471bfe3045f1fdca49370e38d7f4"} err="failed to get container status \"edd9e55fae83b81e03dbd699e8b1347bb1a6471bfe3045f1fdca49370e38d7f4\": rpc error: code = NotFound desc = could not find container \"edd9e55fae83b81e03dbd699e8b1347bb1a6471bfe3045f1fdca49370e38d7f4\": container with ID starting with edd9e55fae83b81e03dbd699e8b1347bb1a6471bfe3045f1fdca49370e38d7f4 not found: ID does not 
exist" Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.139700 4986 scope.go:117] "RemoveContainer" containerID="eea71fc2fcb5242dbe6cf6f1a3dc20caaaa2f85d3ded122858372e2375318af1" Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.155480 4986 scope.go:117] "RemoveContainer" containerID="894818a368572ea3c57b48b8abcaef755f3b87b224801c8733c1833cf52cff4d" Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.174221 4986 scope.go:117] "RemoveContainer" containerID="616e99569a0850b08628cc199c893d0e93a50f69c87cbe3eac492c0774a4d3d9" Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.192472 4986 scope.go:117] "RemoveContainer" containerID="a984f95e59e23e7eb127a6998aed6915ec511ae26702fb7a4c4e6b7f312754d2" Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.214059 4986 scope.go:117] "RemoveContainer" containerID="a2935678628be63eca90e6f2a6e2ec7cdb2351cb3f8ae96840d4f99a5a927124" Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.229112 4986 scope.go:117] "RemoveContainer" containerID="a01285239b62fb2136a6a1f0e401decfca9c90870f63baff0728fd5b4d4e0ae0" Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.243940 4986 scope.go:117] "RemoveContainer" containerID="a984f95e59e23e7eb127a6998aed6915ec511ae26702fb7a4c4e6b7f312754d2" Dec 03 13:02:26 crc kubenswrapper[4986]: E1203 13:02:26.244453 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a984f95e59e23e7eb127a6998aed6915ec511ae26702fb7a4c4e6b7f312754d2\": container with ID starting with a984f95e59e23e7eb127a6998aed6915ec511ae26702fb7a4c4e6b7f312754d2 not found: ID does not exist" containerID="a984f95e59e23e7eb127a6998aed6915ec511ae26702fb7a4c4e6b7f312754d2" Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.244498 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a984f95e59e23e7eb127a6998aed6915ec511ae26702fb7a4c4e6b7f312754d2"} err="failed to get container status 
\"a984f95e59e23e7eb127a6998aed6915ec511ae26702fb7a4c4e6b7f312754d2\": rpc error: code = NotFound desc = could not find container \"a984f95e59e23e7eb127a6998aed6915ec511ae26702fb7a4c4e6b7f312754d2\": container with ID starting with a984f95e59e23e7eb127a6998aed6915ec511ae26702fb7a4c4e6b7f312754d2 not found: ID does not exist" Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.244530 4986 scope.go:117] "RemoveContainer" containerID="a2935678628be63eca90e6f2a6e2ec7cdb2351cb3f8ae96840d4f99a5a927124" Dec 03 13:02:26 crc kubenswrapper[4986]: E1203 13:02:26.244932 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2935678628be63eca90e6f2a6e2ec7cdb2351cb3f8ae96840d4f99a5a927124\": container with ID starting with a2935678628be63eca90e6f2a6e2ec7cdb2351cb3f8ae96840d4f99a5a927124 not found: ID does not exist" containerID="a2935678628be63eca90e6f2a6e2ec7cdb2351cb3f8ae96840d4f99a5a927124" Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.244993 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2935678628be63eca90e6f2a6e2ec7cdb2351cb3f8ae96840d4f99a5a927124"} err="failed to get container status \"a2935678628be63eca90e6f2a6e2ec7cdb2351cb3f8ae96840d4f99a5a927124\": rpc error: code = NotFound desc = could not find container \"a2935678628be63eca90e6f2a6e2ec7cdb2351cb3f8ae96840d4f99a5a927124\": container with ID starting with a2935678628be63eca90e6f2a6e2ec7cdb2351cb3f8ae96840d4f99a5a927124 not found: ID does not exist" Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.245046 4986 scope.go:117] "RemoveContainer" containerID="a01285239b62fb2136a6a1f0e401decfca9c90870f63baff0728fd5b4d4e0ae0" Dec 03 13:02:26 crc kubenswrapper[4986]: E1203 13:02:26.245366 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"a01285239b62fb2136a6a1f0e401decfca9c90870f63baff0728fd5b4d4e0ae0\": container with ID starting with a01285239b62fb2136a6a1f0e401decfca9c90870f63baff0728fd5b4d4e0ae0 not found: ID does not exist" containerID="a01285239b62fb2136a6a1f0e401decfca9c90870f63baff0728fd5b4d4e0ae0" Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.245398 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a01285239b62fb2136a6a1f0e401decfca9c90870f63baff0728fd5b4d4e0ae0"} err="failed to get container status \"a01285239b62fb2136a6a1f0e401decfca9c90870f63baff0728fd5b4d4e0ae0\": rpc error: code = NotFound desc = could not find container \"a01285239b62fb2136a6a1f0e401decfca9c90870f63baff0728fd5b4d4e0ae0\": container with ID starting with a01285239b62fb2136a6a1f0e401decfca9c90870f63baff0728fd5b4d4e0ae0 not found: ID does not exist" Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.245417 4986 scope.go:117] "RemoveContainer" containerID="cbe598634a3b4496336e84f38190312fc9b1e457486521a5b8162e4810fe4b5b" Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.949852 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49d6e247-25ce-45e1-b2fe-2e3ec70cf966" path="/var/lib/kubelet/pods/49d6e247-25ce-45e1-b2fe-2e3ec70cf966/volumes" Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.950550 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7aceb259-65a5-45a6-acd1-8f5cac430ef7" path="/var/lib/kubelet/pods/7aceb259-65a5-45a6-acd1-8f5cac430ef7/volumes" Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.951169 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89c50427-14ae-409d-89d5-a56be0ff97d1" path="/var/lib/kubelet/pods/89c50427-14ae-409d-89d5-a56be0ff97d1/volumes" Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.952000 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2dbfdc4-7122-4b7f-bfdd-189396fb1c77" 
path="/var/lib/kubelet/pods/b2dbfdc4-7122-4b7f-bfdd-189396fb1c77/volumes" Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.952539 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f" path="/var/lib/kubelet/pods/d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f/volumes" Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.994386 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cwp4r"] Dec 03 13:02:26 crc kubenswrapper[4986]: E1203 13:02:26.994582 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f" containerName="extract-content" Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.994593 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f" containerName="extract-content" Dec 03 13:02:26 crc kubenswrapper[4986]: E1203 13:02:26.994606 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7aceb259-65a5-45a6-acd1-8f5cac430ef7" containerName="extract-content" Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.994612 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aceb259-65a5-45a6-acd1-8f5cac430ef7" containerName="extract-content" Dec 03 13:02:26 crc kubenswrapper[4986]: E1203 13:02:26.994623 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2dbfdc4-7122-4b7f-bfdd-189396fb1c77" containerName="registry-server" Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.994630 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2dbfdc4-7122-4b7f-bfdd-189396fb1c77" containerName="registry-server" Dec 03 13:02:26 crc kubenswrapper[4986]: E1203 13:02:26.994638 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49d6e247-25ce-45e1-b2fe-2e3ec70cf966" containerName="extract-content" Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.994644 4986 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="49d6e247-25ce-45e1-b2fe-2e3ec70cf966" containerName="extract-content" Dec 03 13:02:26 crc kubenswrapper[4986]: E1203 13:02:26.994652 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89c50427-14ae-409d-89d5-a56be0ff97d1" containerName="marketplace-operator" Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.994657 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="89c50427-14ae-409d-89d5-a56be0ff97d1" containerName="marketplace-operator" Dec 03 13:02:26 crc kubenswrapper[4986]: E1203 13:02:26.994666 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49d6e247-25ce-45e1-b2fe-2e3ec70cf966" containerName="extract-utilities" Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.994672 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="49d6e247-25ce-45e1-b2fe-2e3ec70cf966" containerName="extract-utilities" Dec 03 13:02:26 crc kubenswrapper[4986]: E1203 13:02:26.994682 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7aceb259-65a5-45a6-acd1-8f5cac430ef7" containerName="extract-utilities" Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.994688 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aceb259-65a5-45a6-acd1-8f5cac430ef7" containerName="extract-utilities" Dec 03 13:02:26 crc kubenswrapper[4986]: E1203 13:02:26.994697 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f" containerName="registry-server" Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.994703 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f" containerName="registry-server" Dec 03 13:02:26 crc kubenswrapper[4986]: E1203 13:02:26.994713 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7aceb259-65a5-45a6-acd1-8f5cac430ef7" containerName="registry-server" Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.994718 4986 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="7aceb259-65a5-45a6-acd1-8f5cac430ef7" containerName="registry-server" Dec 03 13:02:26 crc kubenswrapper[4986]: E1203 13:02:26.994727 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f" containerName="extract-utilities" Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.994733 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f" containerName="extract-utilities" Dec 03 13:02:26 crc kubenswrapper[4986]: E1203 13:02:26.994743 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2dbfdc4-7122-4b7f-bfdd-189396fb1c77" containerName="extract-content" Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.994750 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2dbfdc4-7122-4b7f-bfdd-189396fb1c77" containerName="extract-content" Dec 03 13:02:26 crc kubenswrapper[4986]: E1203 13:02:26.994757 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2dbfdc4-7122-4b7f-bfdd-189396fb1c77" containerName="extract-utilities" Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.994763 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2dbfdc4-7122-4b7f-bfdd-189396fb1c77" containerName="extract-utilities" Dec 03 13:02:26 crc kubenswrapper[4986]: E1203 13:02:26.994771 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49d6e247-25ce-45e1-b2fe-2e3ec70cf966" containerName="registry-server" Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.994777 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="49d6e247-25ce-45e1-b2fe-2e3ec70cf966" containerName="registry-server" Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.994855 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2dbfdc4-7122-4b7f-bfdd-189396fb1c77" containerName="registry-server" Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.994863 4986 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="7aceb259-65a5-45a6-acd1-8f5cac430ef7" containerName="registry-server" Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.994873 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="d28745c8-a7a6-4a38-acb2-f7b6ff1ef54f" containerName="registry-server" Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.994884 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="89c50427-14ae-409d-89d5-a56be0ff97d1" containerName="marketplace-operator" Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.994893 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="49d6e247-25ce-45e1-b2fe-2e3ec70cf966" containerName="registry-server" Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.995569 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cwp4r" Dec 03 13:02:26 crc kubenswrapper[4986]: I1203 13:02:26.997826 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 03 13:02:27 crc kubenswrapper[4986]: I1203 13:02:27.003983 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cwp4r"] Dec 03 13:02:27 crc kubenswrapper[4986]: I1203 13:02:27.020331 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7lnmw" event={"ID":"4083ec9d-ae1e-4b92-955d-7b2c3ee874c7","Type":"ContainerStarted","Data":"8c9c8cac40105d9c39a5d6a4d8a2f5a609943369e01e7bf4a54f7894e5136c0d"} Dec 03 13:02:27 crc kubenswrapper[4986]: I1203 13:02:27.020800 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-7lnmw" Dec 03 13:02:27 crc kubenswrapper[4986]: I1203 13:02:27.024123 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/marketplace-operator-79b997595-7lnmw" Dec 03 13:02:27 crc kubenswrapper[4986]: I1203 13:02:27.041456 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-7lnmw" podStartSLOduration=3.041438943 podStartE2EDuration="3.041438943s" podCreationTimestamp="2025-12-03 13:02:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:02:27.037639853 +0000 UTC m=+406.504071044" watchObservedRunningTime="2025-12-03 13:02:27.041438943 +0000 UTC m=+406.507870134" Dec 03 13:02:27 crc kubenswrapper[4986]: I1203 13:02:27.147528 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5ee5a5d-5ed3-46fa-acee-0fa1d5f5048b-utilities\") pod \"community-operators-cwp4r\" (UID: \"d5ee5a5d-5ed3-46fa-acee-0fa1d5f5048b\") " pod="openshift-marketplace/community-operators-cwp4r" Dec 03 13:02:27 crc kubenswrapper[4986]: I1203 13:02:27.147627 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5ee5a5d-5ed3-46fa-acee-0fa1d5f5048b-catalog-content\") pod \"community-operators-cwp4r\" (UID: \"d5ee5a5d-5ed3-46fa-acee-0fa1d5f5048b\") " pod="openshift-marketplace/community-operators-cwp4r" Dec 03 13:02:27 crc kubenswrapper[4986]: I1203 13:02:27.147677 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fvtp\" (UniqueName: \"kubernetes.io/projected/d5ee5a5d-5ed3-46fa-acee-0fa1d5f5048b-kube-api-access-9fvtp\") pod \"community-operators-cwp4r\" (UID: \"d5ee5a5d-5ed3-46fa-acee-0fa1d5f5048b\") " pod="openshift-marketplace/community-operators-cwp4r" Dec 03 13:02:27 crc kubenswrapper[4986]: I1203 13:02:27.192829 4986 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-84w5m"] Dec 03 13:02:27 crc kubenswrapper[4986]: I1203 13:02:27.194098 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-84w5m" Dec 03 13:02:27 crc kubenswrapper[4986]: I1203 13:02:27.197313 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 03 13:02:27 crc kubenswrapper[4986]: I1203 13:02:27.201053 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-84w5m"] Dec 03 13:02:27 crc kubenswrapper[4986]: I1203 13:02:27.249195 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fvtp\" (UniqueName: \"kubernetes.io/projected/d5ee5a5d-5ed3-46fa-acee-0fa1d5f5048b-kube-api-access-9fvtp\") pod \"community-operators-cwp4r\" (UID: \"d5ee5a5d-5ed3-46fa-acee-0fa1d5f5048b\") " pod="openshift-marketplace/community-operators-cwp4r" Dec 03 13:02:27 crc kubenswrapper[4986]: I1203 13:02:27.249250 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5ee5a5d-5ed3-46fa-acee-0fa1d5f5048b-utilities\") pod \"community-operators-cwp4r\" (UID: \"d5ee5a5d-5ed3-46fa-acee-0fa1d5f5048b\") " pod="openshift-marketplace/community-operators-cwp4r" Dec 03 13:02:27 crc kubenswrapper[4986]: I1203 13:02:27.249350 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5ee5a5d-5ed3-46fa-acee-0fa1d5f5048b-catalog-content\") pod \"community-operators-cwp4r\" (UID: \"d5ee5a5d-5ed3-46fa-acee-0fa1d5f5048b\") " pod="openshift-marketplace/community-operators-cwp4r" Dec 03 13:02:27 crc kubenswrapper[4986]: I1203 13:02:27.249727 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/d5ee5a5d-5ed3-46fa-acee-0fa1d5f5048b-utilities\") pod \"community-operators-cwp4r\" (UID: \"d5ee5a5d-5ed3-46fa-acee-0fa1d5f5048b\") " pod="openshift-marketplace/community-operators-cwp4r" Dec 03 13:02:27 crc kubenswrapper[4986]: I1203 13:02:27.249757 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5ee5a5d-5ed3-46fa-acee-0fa1d5f5048b-catalog-content\") pod \"community-operators-cwp4r\" (UID: \"d5ee5a5d-5ed3-46fa-acee-0fa1d5f5048b\") " pod="openshift-marketplace/community-operators-cwp4r" Dec 03 13:02:27 crc kubenswrapper[4986]: I1203 13:02:27.272580 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fvtp\" (UniqueName: \"kubernetes.io/projected/d5ee5a5d-5ed3-46fa-acee-0fa1d5f5048b-kube-api-access-9fvtp\") pod \"community-operators-cwp4r\" (UID: \"d5ee5a5d-5ed3-46fa-acee-0fa1d5f5048b\") " pod="openshift-marketplace/community-operators-cwp4r" Dec 03 13:02:27 crc kubenswrapper[4986]: I1203 13:02:27.328562 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cwp4r" Dec 03 13:02:27 crc kubenswrapper[4986]: I1203 13:02:27.351125 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57e3297d-ec2e-4cc1-8939-4d2dd78c6a8e-catalog-content\") pod \"redhat-marketplace-84w5m\" (UID: \"57e3297d-ec2e-4cc1-8939-4d2dd78c6a8e\") " pod="openshift-marketplace/redhat-marketplace-84w5m" Dec 03 13:02:27 crc kubenswrapper[4986]: I1203 13:02:27.351304 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57e3297d-ec2e-4cc1-8939-4d2dd78c6a8e-utilities\") pod \"redhat-marketplace-84w5m\" (UID: \"57e3297d-ec2e-4cc1-8939-4d2dd78c6a8e\") " pod="openshift-marketplace/redhat-marketplace-84w5m" Dec 03 13:02:27 crc kubenswrapper[4986]: I1203 13:02:27.351486 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzvf5\" (UniqueName: \"kubernetes.io/projected/57e3297d-ec2e-4cc1-8939-4d2dd78c6a8e-kube-api-access-lzvf5\") pod \"redhat-marketplace-84w5m\" (UID: \"57e3297d-ec2e-4cc1-8939-4d2dd78c6a8e\") " pod="openshift-marketplace/redhat-marketplace-84w5m" Dec 03 13:02:27 crc kubenswrapper[4986]: I1203 13:02:27.453006 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzvf5\" (UniqueName: \"kubernetes.io/projected/57e3297d-ec2e-4cc1-8939-4d2dd78c6a8e-kube-api-access-lzvf5\") pod \"redhat-marketplace-84w5m\" (UID: \"57e3297d-ec2e-4cc1-8939-4d2dd78c6a8e\") " pod="openshift-marketplace/redhat-marketplace-84w5m" Dec 03 13:02:27 crc kubenswrapper[4986]: I1203 13:02:27.453089 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57e3297d-ec2e-4cc1-8939-4d2dd78c6a8e-catalog-content\") pod 
\"redhat-marketplace-84w5m\" (UID: \"57e3297d-ec2e-4cc1-8939-4d2dd78c6a8e\") " pod="openshift-marketplace/redhat-marketplace-84w5m" Dec 03 13:02:27 crc kubenswrapper[4986]: I1203 13:02:27.453121 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57e3297d-ec2e-4cc1-8939-4d2dd78c6a8e-utilities\") pod \"redhat-marketplace-84w5m\" (UID: \"57e3297d-ec2e-4cc1-8939-4d2dd78c6a8e\") " pod="openshift-marketplace/redhat-marketplace-84w5m" Dec 03 13:02:27 crc kubenswrapper[4986]: I1203 13:02:27.453778 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57e3297d-ec2e-4cc1-8939-4d2dd78c6a8e-utilities\") pod \"redhat-marketplace-84w5m\" (UID: \"57e3297d-ec2e-4cc1-8939-4d2dd78c6a8e\") " pod="openshift-marketplace/redhat-marketplace-84w5m" Dec 03 13:02:27 crc kubenswrapper[4986]: I1203 13:02:27.453831 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57e3297d-ec2e-4cc1-8939-4d2dd78c6a8e-catalog-content\") pod \"redhat-marketplace-84w5m\" (UID: \"57e3297d-ec2e-4cc1-8939-4d2dd78c6a8e\") " pod="openshift-marketplace/redhat-marketplace-84w5m" Dec 03 13:02:27 crc kubenswrapper[4986]: I1203 13:02:27.474144 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-t9p5n" Dec 03 13:02:27 crc kubenswrapper[4986]: I1203 13:02:27.496665 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzvf5\" (UniqueName: \"kubernetes.io/projected/57e3297d-ec2e-4cc1-8939-4d2dd78c6a8e-kube-api-access-lzvf5\") pod \"redhat-marketplace-84w5m\" (UID: \"57e3297d-ec2e-4cc1-8939-4d2dd78c6a8e\") " pod="openshift-marketplace/redhat-marketplace-84w5m" Dec 03 13:02:27 crc kubenswrapper[4986]: I1203 13:02:27.513065 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-84w5m" Dec 03 13:02:27 crc kubenswrapper[4986]: I1203 13:02:27.535541 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-zn5lj"] Dec 03 13:02:27 crc kubenswrapper[4986]: I1203 13:02:27.756971 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cwp4r"] Dec 03 13:02:27 crc kubenswrapper[4986]: W1203 13:02:27.765168 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5ee5a5d_5ed3_46fa_acee_0fa1d5f5048b.slice/crio-1a39c3b90db48224fb9d237a0bd2d0da1174fd6f1d4a45f87d60da4e17b3b411 WatchSource:0}: Error finding container 1a39c3b90db48224fb9d237a0bd2d0da1174fd6f1d4a45f87d60da4e17b3b411: Status 404 returned error can't find the container with id 1a39c3b90db48224fb9d237a0bd2d0da1174fd6f1d4a45f87d60da4e17b3b411 Dec 03 13:02:27 crc kubenswrapper[4986]: I1203 13:02:27.946747 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-84w5m"] Dec 03 13:02:28 crc kubenswrapper[4986]: W1203 13:02:28.014993 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57e3297d_ec2e_4cc1_8939_4d2dd78c6a8e.slice/crio-79d29576b4a242ec2e95589927ce4ee881818bccc8e50a5a433bc1c145297ce7 WatchSource:0}: Error finding container 79d29576b4a242ec2e95589927ce4ee881818bccc8e50a5a433bc1c145297ce7: Status 404 returned error can't find the container with id 79d29576b4a242ec2e95589927ce4ee881818bccc8e50a5a433bc1c145297ce7 Dec 03 13:02:28 crc kubenswrapper[4986]: I1203 13:02:28.030379 4986 generic.go:334] "Generic (PLEG): container finished" podID="d5ee5a5d-5ed3-46fa-acee-0fa1d5f5048b" containerID="23110cd818855c4c690adfdc264178cefaa71e6695bbc9e7792167577ade86c7" exitCode=0 Dec 03 13:02:28 crc kubenswrapper[4986]: I1203 
13:02:28.030479 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cwp4r" event={"ID":"d5ee5a5d-5ed3-46fa-acee-0fa1d5f5048b","Type":"ContainerDied","Data":"23110cd818855c4c690adfdc264178cefaa71e6695bbc9e7792167577ade86c7"} Dec 03 13:02:28 crc kubenswrapper[4986]: I1203 13:02:28.030506 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cwp4r" event={"ID":"d5ee5a5d-5ed3-46fa-acee-0fa1d5f5048b","Type":"ContainerStarted","Data":"1a39c3b90db48224fb9d237a0bd2d0da1174fd6f1d4a45f87d60da4e17b3b411"} Dec 03 13:02:28 crc kubenswrapper[4986]: I1203 13:02:28.031865 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-84w5m" event={"ID":"57e3297d-ec2e-4cc1-8939-4d2dd78c6a8e","Type":"ContainerStarted","Data":"79d29576b4a242ec2e95589927ce4ee881818bccc8e50a5a433bc1c145297ce7"} Dec 03 13:02:29 crc kubenswrapper[4986]: I1203 13:02:29.039390 4986 generic.go:334] "Generic (PLEG): container finished" podID="57e3297d-ec2e-4cc1-8939-4d2dd78c6a8e" containerID="91e525b8793213915ac6cc4ccd61cf1d4f2c12191ca8da7635f628896be4a972" exitCode=0 Dec 03 13:02:29 crc kubenswrapper[4986]: I1203 13:02:29.039458 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-84w5m" event={"ID":"57e3297d-ec2e-4cc1-8939-4d2dd78c6a8e","Type":"ContainerDied","Data":"91e525b8793213915ac6cc4ccd61cf1d4f2c12191ca8da7635f628896be4a972"} Dec 03 13:02:29 crc kubenswrapper[4986]: I1203 13:02:29.044320 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cwp4r" event={"ID":"d5ee5a5d-5ed3-46fa-acee-0fa1d5f5048b","Type":"ContainerStarted","Data":"78b1679207a4c6da28b52085839efad9ee2ef2ff74da605645544eb218b207a8"} Dec 03 13:02:29 crc kubenswrapper[4986]: I1203 13:02:29.389394 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5jd64"] Dec 03 
13:02:29 crc kubenswrapper[4986]: I1203 13:02:29.391852 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5jd64" Dec 03 13:02:29 crc kubenswrapper[4986]: I1203 13:02:29.393877 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 03 13:02:29 crc kubenswrapper[4986]: I1203 13:02:29.404861 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5jd64"] Dec 03 13:02:29 crc kubenswrapper[4986]: I1203 13:02:29.489202 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d90ad1cf-edbe-42b5-84c6-0c0568fafd43-catalog-content\") pod \"redhat-operators-5jd64\" (UID: \"d90ad1cf-edbe-42b5-84c6-0c0568fafd43\") " pod="openshift-marketplace/redhat-operators-5jd64" Dec 03 13:02:29 crc kubenswrapper[4986]: I1203 13:02:29.489323 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv897\" (UniqueName: \"kubernetes.io/projected/d90ad1cf-edbe-42b5-84c6-0c0568fafd43-kube-api-access-pv897\") pod \"redhat-operators-5jd64\" (UID: \"d90ad1cf-edbe-42b5-84c6-0c0568fafd43\") " pod="openshift-marketplace/redhat-operators-5jd64" Dec 03 13:02:29 crc kubenswrapper[4986]: I1203 13:02:29.489486 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d90ad1cf-edbe-42b5-84c6-0c0568fafd43-utilities\") pod \"redhat-operators-5jd64\" (UID: \"d90ad1cf-edbe-42b5-84c6-0c0568fafd43\") " pod="openshift-marketplace/redhat-operators-5jd64" Dec 03 13:02:29 crc kubenswrapper[4986]: I1203 13:02:29.592794 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/d90ad1cf-edbe-42b5-84c6-0c0568fafd43-utilities\") pod \"redhat-operators-5jd64\" (UID: \"d90ad1cf-edbe-42b5-84c6-0c0568fafd43\") " pod="openshift-marketplace/redhat-operators-5jd64" Dec 03 13:02:29 crc kubenswrapper[4986]: I1203 13:02:29.592853 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d90ad1cf-edbe-42b5-84c6-0c0568fafd43-catalog-content\") pod \"redhat-operators-5jd64\" (UID: \"d90ad1cf-edbe-42b5-84c6-0c0568fafd43\") " pod="openshift-marketplace/redhat-operators-5jd64" Dec 03 13:02:29 crc kubenswrapper[4986]: I1203 13:02:29.592890 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pv897\" (UniqueName: \"kubernetes.io/projected/d90ad1cf-edbe-42b5-84c6-0c0568fafd43-kube-api-access-pv897\") pod \"redhat-operators-5jd64\" (UID: \"d90ad1cf-edbe-42b5-84c6-0c0568fafd43\") " pod="openshift-marketplace/redhat-operators-5jd64" Dec 03 13:02:29 crc kubenswrapper[4986]: I1203 13:02:29.592941 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-x6bw7"] Dec 03 13:02:29 crc kubenswrapper[4986]: I1203 13:02:29.593745 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d90ad1cf-edbe-42b5-84c6-0c0568fafd43-utilities\") pod \"redhat-operators-5jd64\" (UID: \"d90ad1cf-edbe-42b5-84c6-0c0568fafd43\") " pod="openshift-marketplace/redhat-operators-5jd64" Dec 03 13:02:29 crc kubenswrapper[4986]: I1203 13:02:29.594443 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d90ad1cf-edbe-42b5-84c6-0c0568fafd43-catalog-content\") pod \"redhat-operators-5jd64\" (UID: \"d90ad1cf-edbe-42b5-84c6-0c0568fafd43\") " pod="openshift-marketplace/redhat-operators-5jd64" Dec 03 13:02:29 crc kubenswrapper[4986]: I1203 13:02:29.594688 4986 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x6bw7" Dec 03 13:02:29 crc kubenswrapper[4986]: I1203 13:02:29.597939 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 03 13:02:29 crc kubenswrapper[4986]: I1203 13:02:29.603031 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x6bw7"] Dec 03 13:02:29 crc kubenswrapper[4986]: I1203 13:02:29.614800 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pv897\" (UniqueName: \"kubernetes.io/projected/d90ad1cf-edbe-42b5-84c6-0c0568fafd43-kube-api-access-pv897\") pod \"redhat-operators-5jd64\" (UID: \"d90ad1cf-edbe-42b5-84c6-0c0568fafd43\") " pod="openshift-marketplace/redhat-operators-5jd64" Dec 03 13:02:29 crc kubenswrapper[4986]: I1203 13:02:29.695218 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c13ffdcc-9a64-45ba-8fec-96d700c3a387-catalog-content\") pod \"certified-operators-x6bw7\" (UID: \"c13ffdcc-9a64-45ba-8fec-96d700c3a387\") " pod="openshift-marketplace/certified-operators-x6bw7" Dec 03 13:02:29 crc kubenswrapper[4986]: I1203 13:02:29.695349 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfm9w\" (UniqueName: \"kubernetes.io/projected/c13ffdcc-9a64-45ba-8fec-96d700c3a387-kube-api-access-hfm9w\") pod \"certified-operators-x6bw7\" (UID: \"c13ffdcc-9a64-45ba-8fec-96d700c3a387\") " pod="openshift-marketplace/certified-operators-x6bw7" Dec 03 13:02:29 crc kubenswrapper[4986]: I1203 13:02:29.695378 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c13ffdcc-9a64-45ba-8fec-96d700c3a387-utilities\") pod 
\"certified-operators-x6bw7\" (UID: \"c13ffdcc-9a64-45ba-8fec-96d700c3a387\") " pod="openshift-marketplace/certified-operators-x6bw7" Dec 03 13:02:29 crc kubenswrapper[4986]: I1203 13:02:29.717160 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5jd64" Dec 03 13:02:29 crc kubenswrapper[4986]: I1203 13:02:29.796123 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c13ffdcc-9a64-45ba-8fec-96d700c3a387-catalog-content\") pod \"certified-operators-x6bw7\" (UID: \"c13ffdcc-9a64-45ba-8fec-96d700c3a387\") " pod="openshift-marketplace/certified-operators-x6bw7" Dec 03 13:02:29 crc kubenswrapper[4986]: I1203 13:02:29.796489 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfm9w\" (UniqueName: \"kubernetes.io/projected/c13ffdcc-9a64-45ba-8fec-96d700c3a387-kube-api-access-hfm9w\") pod \"certified-operators-x6bw7\" (UID: \"c13ffdcc-9a64-45ba-8fec-96d700c3a387\") " pod="openshift-marketplace/certified-operators-x6bw7" Dec 03 13:02:29 crc kubenswrapper[4986]: I1203 13:02:29.796519 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c13ffdcc-9a64-45ba-8fec-96d700c3a387-utilities\") pod \"certified-operators-x6bw7\" (UID: \"c13ffdcc-9a64-45ba-8fec-96d700c3a387\") " pod="openshift-marketplace/certified-operators-x6bw7" Dec 03 13:02:29 crc kubenswrapper[4986]: I1203 13:02:29.798691 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c13ffdcc-9a64-45ba-8fec-96d700c3a387-catalog-content\") pod \"certified-operators-x6bw7\" (UID: \"c13ffdcc-9a64-45ba-8fec-96d700c3a387\") " pod="openshift-marketplace/certified-operators-x6bw7" Dec 03 13:02:29 crc kubenswrapper[4986]: I1203 13:02:29.799151 4986 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c13ffdcc-9a64-45ba-8fec-96d700c3a387-utilities\") pod \"certified-operators-x6bw7\" (UID: \"c13ffdcc-9a64-45ba-8fec-96d700c3a387\") " pod="openshift-marketplace/certified-operators-x6bw7" Dec 03 13:02:29 crc kubenswrapper[4986]: I1203 13:02:29.819928 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfm9w\" (UniqueName: \"kubernetes.io/projected/c13ffdcc-9a64-45ba-8fec-96d700c3a387-kube-api-access-hfm9w\") pod \"certified-operators-x6bw7\" (UID: \"c13ffdcc-9a64-45ba-8fec-96d700c3a387\") " pod="openshift-marketplace/certified-operators-x6bw7" Dec 03 13:02:29 crc kubenswrapper[4986]: I1203 13:02:29.952092 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x6bw7" Dec 03 13:02:30 crc kubenswrapper[4986]: I1203 13:02:30.066377 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-84w5m" event={"ID":"57e3297d-ec2e-4cc1-8939-4d2dd78c6a8e","Type":"ContainerStarted","Data":"3642a2113d7a6ba073841813310abc09214bb65b435ad6da314075a31e5c4b5f"} Dec 03 13:02:30 crc kubenswrapper[4986]: I1203 13:02:30.068637 4986 generic.go:334] "Generic (PLEG): container finished" podID="d5ee5a5d-5ed3-46fa-acee-0fa1d5f5048b" containerID="78b1679207a4c6da28b52085839efad9ee2ef2ff74da605645544eb218b207a8" exitCode=0 Dec 03 13:02:30 crc kubenswrapper[4986]: I1203 13:02:30.068661 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cwp4r" event={"ID":"d5ee5a5d-5ed3-46fa-acee-0fa1d5f5048b","Type":"ContainerDied","Data":"78b1679207a4c6da28b52085839efad9ee2ef2ff74da605645544eb218b207a8"} Dec 03 13:02:30 crc kubenswrapper[4986]: I1203 13:02:30.068677 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cwp4r" 
event={"ID":"d5ee5a5d-5ed3-46fa-acee-0fa1d5f5048b","Type":"ContainerStarted","Data":"10b9790e1bb873281a3f4d09918e417203db74737733d09243604ec398f407b6"} Dec 03 13:02:30 crc kubenswrapper[4986]: I1203 13:02:30.117083 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cwp4r" podStartSLOduration=2.495102937 podStartE2EDuration="4.11706642s" podCreationTimestamp="2025-12-03 13:02:26 +0000 UTC" firstStartedPulling="2025-12-03 13:02:28.034751026 +0000 UTC m=+407.501182207" lastFinishedPulling="2025-12-03 13:02:29.656714489 +0000 UTC m=+409.123145690" observedRunningTime="2025-12-03 13:02:30.111792572 +0000 UTC m=+409.578223783" watchObservedRunningTime="2025-12-03 13:02:30.11706642 +0000 UTC m=+409.583497601" Dec 03 13:02:30 crc kubenswrapper[4986]: I1203 13:02:30.119082 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5jd64"] Dec 03 13:02:30 crc kubenswrapper[4986]: W1203 13:02:30.129963 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd90ad1cf_edbe_42b5_84c6_0c0568fafd43.slice/crio-dc060f1d87d08006aa81474b23351468353187384fde66459a3785f43950131e WatchSource:0}: Error finding container dc060f1d87d08006aa81474b23351468353187384fde66459a3785f43950131e: Status 404 returned error can't find the container with id dc060f1d87d08006aa81474b23351468353187384fde66459a3785f43950131e Dec 03 13:02:30 crc kubenswrapper[4986]: I1203 13:02:30.182314 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x6bw7"] Dec 03 13:02:30 crc kubenswrapper[4986]: W1203 13:02:30.235751 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc13ffdcc_9a64_45ba_8fec_96d700c3a387.slice/crio-0f55cf298515064fd3cc1b6872bbd719b50b2d347793340e428bb4847db70520 WatchSource:0}: Error 
finding container 0f55cf298515064fd3cc1b6872bbd719b50b2d347793340e428bb4847db70520: Status 404 returned error can't find the container with id 0f55cf298515064fd3cc1b6872bbd719b50b2d347793340e428bb4847db70520 Dec 03 13:02:31 crc kubenswrapper[4986]: I1203 13:02:31.076109 4986 generic.go:334] "Generic (PLEG): container finished" podID="c13ffdcc-9a64-45ba-8fec-96d700c3a387" containerID="a2849a0a4db3bc7cd08ba30e22b7443edaf16f11b18f46982811b74ee8aaa739" exitCode=0 Dec 03 13:02:31 crc kubenswrapper[4986]: I1203 13:02:31.076167 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x6bw7" event={"ID":"c13ffdcc-9a64-45ba-8fec-96d700c3a387","Type":"ContainerDied","Data":"a2849a0a4db3bc7cd08ba30e22b7443edaf16f11b18f46982811b74ee8aaa739"} Dec 03 13:02:31 crc kubenswrapper[4986]: I1203 13:02:31.076707 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x6bw7" event={"ID":"c13ffdcc-9a64-45ba-8fec-96d700c3a387","Type":"ContainerStarted","Data":"0f55cf298515064fd3cc1b6872bbd719b50b2d347793340e428bb4847db70520"} Dec 03 13:02:31 crc kubenswrapper[4986]: I1203 13:02:31.080536 4986 generic.go:334] "Generic (PLEG): container finished" podID="d90ad1cf-edbe-42b5-84c6-0c0568fafd43" containerID="139961b98f477bb2c313feeb15aad3268a87e15ad5a921739f5df28b087ad164" exitCode=0 Dec 03 13:02:31 crc kubenswrapper[4986]: I1203 13:02:31.080611 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5jd64" event={"ID":"d90ad1cf-edbe-42b5-84c6-0c0568fafd43","Type":"ContainerDied","Data":"139961b98f477bb2c313feeb15aad3268a87e15ad5a921739f5df28b087ad164"} Dec 03 13:02:31 crc kubenswrapper[4986]: I1203 13:02:31.080644 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5jd64" 
event={"ID":"d90ad1cf-edbe-42b5-84c6-0c0568fafd43","Type":"ContainerStarted","Data":"dc060f1d87d08006aa81474b23351468353187384fde66459a3785f43950131e"} Dec 03 13:02:31 crc kubenswrapper[4986]: I1203 13:02:31.085647 4986 generic.go:334] "Generic (PLEG): container finished" podID="57e3297d-ec2e-4cc1-8939-4d2dd78c6a8e" containerID="3642a2113d7a6ba073841813310abc09214bb65b435ad6da314075a31e5c4b5f" exitCode=0 Dec 03 13:02:31 crc kubenswrapper[4986]: I1203 13:02:31.085757 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-84w5m" event={"ID":"57e3297d-ec2e-4cc1-8939-4d2dd78c6a8e","Type":"ContainerDied","Data":"3642a2113d7a6ba073841813310abc09214bb65b435ad6da314075a31e5c4b5f"} Dec 03 13:02:33 crc kubenswrapper[4986]: I1203 13:02:33.491797 4986 patch_prober.go:28] interesting pod/machine-config-daemon-xggpj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 13:02:33 crc kubenswrapper[4986]: I1203 13:02:33.492166 4986 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 13:02:34 crc kubenswrapper[4986]: I1203 13:02:34.109093 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-84w5m" event={"ID":"57e3297d-ec2e-4cc1-8939-4d2dd78c6a8e","Type":"ContainerStarted","Data":"ec3209509b2eb09da5bf74c9b7c06ceba2dd2883f6b4ab7df3ca8c4d748f583f"} Dec 03 13:02:34 crc kubenswrapper[4986]: I1203 13:02:34.127813 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-84w5m" 
podStartSLOduration=2.229317394 podStartE2EDuration="7.127788299s" podCreationTimestamp="2025-12-03 13:02:27 +0000 UTC" firstStartedPulling="2025-12-03 13:02:29.041475722 +0000 UTC m=+408.507906913" lastFinishedPulling="2025-12-03 13:02:33.939946627 +0000 UTC m=+413.406377818" observedRunningTime="2025-12-03 13:02:34.127270476 +0000 UTC m=+413.593701677" watchObservedRunningTime="2025-12-03 13:02:34.127788299 +0000 UTC m=+413.594219490" Dec 03 13:02:35 crc kubenswrapper[4986]: I1203 13:02:35.117009 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5jd64" event={"ID":"d90ad1cf-edbe-42b5-84c6-0c0568fafd43","Type":"ContainerDied","Data":"d811021cce69a539a587cc23891bdb9b8f42e1c0d1e83d21da5b5de662c6a05c"} Dec 03 13:02:35 crc kubenswrapper[4986]: I1203 13:02:35.116938 4986 generic.go:334] "Generic (PLEG): container finished" podID="d90ad1cf-edbe-42b5-84c6-0c0568fafd43" containerID="d811021cce69a539a587cc23891bdb9b8f42e1c0d1e83d21da5b5de662c6a05c" exitCode=0 Dec 03 13:02:35 crc kubenswrapper[4986]: I1203 13:02:35.119545 4986 generic.go:334] "Generic (PLEG): container finished" podID="c13ffdcc-9a64-45ba-8fec-96d700c3a387" containerID="abdf68255ada14b64e9d7e270bd867d71ec0d09244d9c0ebbfc0cec2819d5d19" exitCode=0 Dec 03 13:02:35 crc kubenswrapper[4986]: I1203 13:02:35.119603 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x6bw7" event={"ID":"c13ffdcc-9a64-45ba-8fec-96d700c3a387","Type":"ContainerDied","Data":"abdf68255ada14b64e9d7e270bd867d71ec0d09244d9c0ebbfc0cec2819d5d19"} Dec 03 13:02:37 crc kubenswrapper[4986]: I1203 13:02:37.131766 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x6bw7" event={"ID":"c13ffdcc-9a64-45ba-8fec-96d700c3a387","Type":"ContainerStarted","Data":"ca54fc7d9cba88c1761f8802f42addee81c637d27f0bd7c02dfacd13f0dff18c"} Dec 03 13:02:37 crc kubenswrapper[4986]: I1203 13:02:37.135783 4986 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5jd64" event={"ID":"d90ad1cf-edbe-42b5-84c6-0c0568fafd43","Type":"ContainerStarted","Data":"9eb248043181efcc58596a772f3e072ccf0bcde0aefa28dfe5fb214de714b36c"} Dec 03 13:02:37 crc kubenswrapper[4986]: I1203 13:02:37.156966 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-x6bw7" podStartSLOduration=3.22926291 podStartE2EDuration="8.156945444s" podCreationTimestamp="2025-12-03 13:02:29 +0000 UTC" firstStartedPulling="2025-12-03 13:02:31.078634968 +0000 UTC m=+410.545066159" lastFinishedPulling="2025-12-03 13:02:36.006317492 +0000 UTC m=+415.472748693" observedRunningTime="2025-12-03 13:02:37.154306195 +0000 UTC m=+416.620737406" watchObservedRunningTime="2025-12-03 13:02:37.156945444 +0000 UTC m=+416.623376655" Dec 03 13:02:37 crc kubenswrapper[4986]: I1203 13:02:37.176803 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5jd64" podStartSLOduration=3.378725464 podStartE2EDuration="8.176779106s" podCreationTimestamp="2025-12-03 13:02:29 +0000 UTC" firstStartedPulling="2025-12-03 13:02:31.083438205 +0000 UTC m=+410.549869396" lastFinishedPulling="2025-12-03 13:02:35.881491847 +0000 UTC m=+415.347923038" observedRunningTime="2025-12-03 13:02:37.172558685 +0000 UTC m=+416.638989886" watchObservedRunningTime="2025-12-03 13:02:37.176779106 +0000 UTC m=+416.643210307" Dec 03 13:02:37 crc kubenswrapper[4986]: I1203 13:02:37.329105 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cwp4r" Dec 03 13:02:37 crc kubenswrapper[4986]: I1203 13:02:37.330269 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cwp4r" Dec 03 13:02:37 crc kubenswrapper[4986]: I1203 13:02:37.370050 4986 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/community-operators-cwp4r" Dec 03 13:02:37 crc kubenswrapper[4986]: I1203 13:02:37.513843 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-84w5m" Dec 03 13:02:37 crc kubenswrapper[4986]: I1203 13:02:37.513897 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-84w5m" Dec 03 13:02:37 crc kubenswrapper[4986]: I1203 13:02:37.567987 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-84w5m" Dec 03 13:02:38 crc kubenswrapper[4986]: I1203 13:02:38.186266 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cwp4r" Dec 03 13:02:39 crc kubenswrapper[4986]: I1203 13:02:39.718376 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5jd64" Dec 03 13:02:39 crc kubenswrapper[4986]: I1203 13:02:39.718745 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5jd64" Dec 03 13:02:39 crc kubenswrapper[4986]: I1203 13:02:39.952990 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-x6bw7" Dec 03 13:02:39 crc kubenswrapper[4986]: I1203 13:02:39.953378 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-x6bw7" Dec 03 13:02:40 crc kubenswrapper[4986]: I1203 13:02:40.005770 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-x6bw7" Dec 03 13:02:40 crc kubenswrapper[4986]: I1203 13:02:40.772470 4986 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5jd64" 
podUID="d90ad1cf-edbe-42b5-84c6-0c0568fafd43" containerName="registry-server" probeResult="failure" output=< Dec 03 13:02:40 crc kubenswrapper[4986]: timeout: failed to connect service ":50051" within 1s Dec 03 13:02:40 crc kubenswrapper[4986]: > Dec 03 13:02:47 crc kubenswrapper[4986]: I1203 13:02:47.558423 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-84w5m" Dec 03 13:02:49 crc kubenswrapper[4986]: I1203 13:02:49.768678 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5jd64" Dec 03 13:02:49 crc kubenswrapper[4986]: I1203 13:02:49.815340 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5jd64" Dec 03 13:02:49 crc kubenswrapper[4986]: I1203 13:02:49.993373 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-x6bw7" Dec 03 13:02:52 crc kubenswrapper[4986]: I1203 13:02:52.598347 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" podUID="d4e187c5-28e2-4881-8f59-214d93c767b1" containerName="registry" containerID="cri-o://2d1144c30060327bb2ef635a2a1dc4fb446fb85f0bd067cbdfc49d7d4b9a330b" gracePeriod=30 Dec 03 13:02:55 crc kubenswrapper[4986]: I1203 13:02:55.245948 4986 generic.go:334] "Generic (PLEG): container finished" podID="d4e187c5-28e2-4881-8f59-214d93c767b1" containerID="2d1144c30060327bb2ef635a2a1dc4fb446fb85f0bd067cbdfc49d7d4b9a330b" exitCode=0 Dec 03 13:02:55 crc kubenswrapper[4986]: I1203 13:02:55.246035 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" event={"ID":"d4e187c5-28e2-4881-8f59-214d93c767b1","Type":"ContainerDied","Data":"2d1144c30060327bb2ef635a2a1dc4fb446fb85f0bd067cbdfc49d7d4b9a330b"} Dec 03 13:02:55 crc 
kubenswrapper[4986]: I1203 13:02:55.691673 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 13:02:55 crc kubenswrapper[4986]: I1203 13:02:55.760675 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d4e187c5-28e2-4881-8f59-214d93c767b1-ca-trust-extracted\") pod \"d4e187c5-28e2-4881-8f59-214d93c767b1\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " Dec 03 13:02:55 crc kubenswrapper[4986]: I1203 13:02:55.760748 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d4e187c5-28e2-4881-8f59-214d93c767b1-bound-sa-token\") pod \"d4e187c5-28e2-4881-8f59-214d93c767b1\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " Dec 03 13:02:55 crc kubenswrapper[4986]: I1203 13:02:55.760775 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d4e187c5-28e2-4881-8f59-214d93c767b1-registry-tls\") pod \"d4e187c5-28e2-4881-8f59-214d93c767b1\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " Dec 03 13:02:55 crc kubenswrapper[4986]: I1203 13:02:55.760807 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d4e187c5-28e2-4881-8f59-214d93c767b1-registry-certificates\") pod \"d4e187c5-28e2-4881-8f59-214d93c767b1\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " Dec 03 13:02:55 crc kubenswrapper[4986]: I1203 13:02:55.760821 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d4e187c5-28e2-4881-8f59-214d93c767b1-trusted-ca\") pod \"d4e187c5-28e2-4881-8f59-214d93c767b1\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " Dec 03 13:02:55 crc 
kubenswrapper[4986]: I1203 13:02:55.762054 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d4e187c5-28e2-4881-8f59-214d93c767b1-installation-pull-secrets\") pod \"d4e187c5-28e2-4881-8f59-214d93c767b1\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " Dec 03 13:02:55 crc kubenswrapper[4986]: I1203 13:02:55.762090 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mp42m\" (UniqueName: \"kubernetes.io/projected/d4e187c5-28e2-4881-8f59-214d93c767b1-kube-api-access-mp42m\") pod \"d4e187c5-28e2-4881-8f59-214d93c767b1\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " Dec 03 13:02:55 crc kubenswrapper[4986]: I1203 13:02:55.762251 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"d4e187c5-28e2-4881-8f59-214d93c767b1\" (UID: \"d4e187c5-28e2-4881-8f59-214d93c767b1\") " Dec 03 13:02:55 crc kubenswrapper[4986]: I1203 13:02:55.762431 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4e187c5-28e2-4881-8f59-214d93c767b1-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "d4e187c5-28e2-4881-8f59-214d93c767b1" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:02:55 crc kubenswrapper[4986]: I1203 13:02:55.762694 4986 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d4e187c5-28e2-4881-8f59-214d93c767b1-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 03 13:02:55 crc kubenswrapper[4986]: I1203 13:02:55.762712 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4e187c5-28e2-4881-8f59-214d93c767b1-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "d4e187c5-28e2-4881-8f59-214d93c767b1" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:02:55 crc kubenswrapper[4986]: I1203 13:02:55.768497 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4e187c5-28e2-4881-8f59-214d93c767b1-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "d4e187c5-28e2-4881-8f59-214d93c767b1" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:02:55 crc kubenswrapper[4986]: I1203 13:02:55.768948 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4e187c5-28e2-4881-8f59-214d93c767b1-kube-api-access-mp42m" (OuterVolumeSpecName: "kube-api-access-mp42m") pod "d4e187c5-28e2-4881-8f59-214d93c767b1" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1"). InnerVolumeSpecName "kube-api-access-mp42m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:02:55 crc kubenswrapper[4986]: I1203 13:02:55.769233 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4e187c5-28e2-4881-8f59-214d93c767b1-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "d4e187c5-28e2-4881-8f59-214d93c767b1" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:02:55 crc kubenswrapper[4986]: I1203 13:02:55.769583 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4e187c5-28e2-4881-8f59-214d93c767b1-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "d4e187c5-28e2-4881-8f59-214d93c767b1" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:02:55 crc kubenswrapper[4986]: I1203 13:02:55.776375 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "d4e187c5-28e2-4881-8f59-214d93c767b1" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 03 13:02:55 crc kubenswrapper[4986]: I1203 13:02:55.780457 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4e187c5-28e2-4881-8f59-214d93c767b1-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "d4e187c5-28e2-4881-8f59-214d93c767b1" (UID: "d4e187c5-28e2-4881-8f59-214d93c767b1"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:02:55 crc kubenswrapper[4986]: I1203 13:02:55.864403 4986 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d4e187c5-28e2-4881-8f59-214d93c767b1-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 03 13:02:55 crc kubenswrapper[4986]: I1203 13:02:55.864467 4986 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d4e187c5-28e2-4881-8f59-214d93c767b1-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 03 13:02:55 crc kubenswrapper[4986]: I1203 13:02:55.864480 4986 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d4e187c5-28e2-4881-8f59-214d93c767b1-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 03 13:02:55 crc kubenswrapper[4986]: I1203 13:02:55.864495 4986 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d4e187c5-28e2-4881-8f59-214d93c767b1-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 03 13:02:55 crc kubenswrapper[4986]: I1203 13:02:55.864510 4986 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d4e187c5-28e2-4881-8f59-214d93c767b1-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 03 13:02:55 crc kubenswrapper[4986]: I1203 13:02:55.864525 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mp42m\" (UniqueName: \"kubernetes.io/projected/d4e187c5-28e2-4881-8f59-214d93c767b1-kube-api-access-mp42m\") on node \"crc\" DevicePath \"\"" Dec 03 13:02:56 crc kubenswrapper[4986]: I1203 13:02:56.255790 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" 
event={"ID":"d4e187c5-28e2-4881-8f59-214d93c767b1","Type":"ContainerDied","Data":"3082f142ce8019029d193ba82fa33015f6598b2d78158c40f499646613e8ebcc"} Dec 03 13:02:56 crc kubenswrapper[4986]: I1203 13:02:56.255850 4986 scope.go:117] "RemoveContainer" containerID="2d1144c30060327bb2ef635a2a1dc4fb446fb85f0bd067cbdfc49d7d4b9a330b" Dec 03 13:02:56 crc kubenswrapper[4986]: I1203 13:02:56.255854 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-zn5lj" Dec 03 13:02:56 crc kubenswrapper[4986]: I1203 13:02:56.297205 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-zn5lj"] Dec 03 13:02:56 crc kubenswrapper[4986]: I1203 13:02:56.300829 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-zn5lj"] Dec 03 13:02:56 crc kubenswrapper[4986]: I1203 13:02:56.954034 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4e187c5-28e2-4881-8f59-214d93c767b1" path="/var/lib/kubelet/pods/d4e187c5-28e2-4881-8f59-214d93c767b1/volumes" Dec 03 13:03:03 crc kubenswrapper[4986]: I1203 13:03:03.490971 4986 patch_prober.go:28] interesting pod/machine-config-daemon-xggpj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 13:03:03 crc kubenswrapper[4986]: I1203 13:03:03.491500 4986 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 13:03:03 crc kubenswrapper[4986]: I1203 13:03:03.491566 4986 kubelet.go:2542] "SyncLoop 
(probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" Dec 03 13:03:03 crc kubenswrapper[4986]: I1203 13:03:03.492982 4986 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b5b4a46f5d4e257eeb833a2af2a9b0432001e7c973833c320044c5109d1acbda"} pod="openshift-machine-config-operator/machine-config-daemon-xggpj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 13:03:03 crc kubenswrapper[4986]: I1203 13:03:03.493270 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" containerID="cri-o://b5b4a46f5d4e257eeb833a2af2a9b0432001e7c973833c320044c5109d1acbda" gracePeriod=600 Dec 03 13:03:04 crc kubenswrapper[4986]: I1203 13:03:04.311137 4986 generic.go:334] "Generic (PLEG): container finished" podID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerID="b5b4a46f5d4e257eeb833a2af2a9b0432001e7c973833c320044c5109d1acbda" exitCode=0 Dec 03 13:03:04 crc kubenswrapper[4986]: I1203 13:03:04.311223 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" event={"ID":"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c","Type":"ContainerDied","Data":"b5b4a46f5d4e257eeb833a2af2a9b0432001e7c973833c320044c5109d1acbda"} Dec 03 13:03:04 crc kubenswrapper[4986]: I1203 13:03:04.312229 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" event={"ID":"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c","Type":"ContainerStarted","Data":"4f3ec67bf8f28f554a2921325941f4b24fa918b706c1177ff1b3172fba622a6c"} Dec 03 13:03:04 crc kubenswrapper[4986]: I1203 13:03:04.312368 4986 scope.go:117] "RemoveContainer" 
containerID="d0ab87b22153ba31fa79d2d210e695b274c2ddc878ad679c605aa8fb716534d1" Dec 03 13:04:41 crc kubenswrapper[4986]: I1203 13:04:41.136315 4986 scope.go:117] "RemoveContainer" containerID="ca2a128d106abd2bc59f6265ea95f5cc7b48018428284ee87f64b3458b68ffde" Dec 03 13:05:03 crc kubenswrapper[4986]: I1203 13:05:03.491415 4986 patch_prober.go:28] interesting pod/machine-config-daemon-xggpj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 13:05:03 crc kubenswrapper[4986]: I1203 13:05:03.492131 4986 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 13:05:33 crc kubenswrapper[4986]: I1203 13:05:33.491662 4986 patch_prober.go:28] interesting pod/machine-config-daemon-xggpj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 13:05:33 crc kubenswrapper[4986]: I1203 13:05:33.492385 4986 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 13:06:03 crc kubenswrapper[4986]: I1203 13:06:03.491241 4986 patch_prober.go:28] interesting pod/machine-config-daemon-xggpj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 13:06:03 crc kubenswrapper[4986]: I1203 13:06:03.492101 4986 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 13:06:03 crc kubenswrapper[4986]: I1203 13:06:03.492177 4986 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" Dec 03 13:06:03 crc kubenswrapper[4986]: I1203 13:06:03.493124 4986 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4f3ec67bf8f28f554a2921325941f4b24fa918b706c1177ff1b3172fba622a6c"} pod="openshift-machine-config-operator/machine-config-daemon-xggpj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 13:06:03 crc kubenswrapper[4986]: I1203 13:06:03.493226 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" containerID="cri-o://4f3ec67bf8f28f554a2921325941f4b24fa918b706c1177ff1b3172fba622a6c" gracePeriod=600 Dec 03 13:06:04 crc kubenswrapper[4986]: I1203 13:06:04.408486 4986 generic.go:334] "Generic (PLEG): container finished" podID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerID="4f3ec67bf8f28f554a2921325941f4b24fa918b706c1177ff1b3172fba622a6c" exitCode=0 Dec 03 13:06:04 crc kubenswrapper[4986]: I1203 13:06:04.408600 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" 
event={"ID":"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c","Type":"ContainerDied","Data":"4f3ec67bf8f28f554a2921325941f4b24fa918b706c1177ff1b3172fba622a6c"} Dec 03 13:06:04 crc kubenswrapper[4986]: I1203 13:06:04.409180 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" event={"ID":"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c","Type":"ContainerStarted","Data":"7d6eb68371a8474a49cff3df53b688047bada4287bac42533432af78bbe4483e"} Dec 03 13:06:04 crc kubenswrapper[4986]: I1203 13:06:04.409215 4986 scope.go:117] "RemoveContainer" containerID="b5b4a46f5d4e257eeb833a2af2a9b0432001e7c973833c320044c5109d1acbda" Dec 03 13:06:41 crc kubenswrapper[4986]: I1203 13:06:41.188564 4986 scope.go:117] "RemoveContainer" containerID="506a6f1fb8cbe50cc27dc03ed23d24f30ffd375aff821ed095160022031fcf5a" Dec 03 13:06:41 crc kubenswrapper[4986]: I1203 13:06:41.213530 4986 scope.go:117] "RemoveContainer" containerID="d2af21c038905513a752bed20b95b4d5e9d30cab21f74aa4cd9212130f15534f" Dec 03 13:08:03 crc kubenswrapper[4986]: I1203 13:08:03.491978 4986 patch_prober.go:28] interesting pod/machine-config-daemon-xggpj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 13:08:03 crc kubenswrapper[4986]: I1203 13:08:03.493544 4986 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 13:08:33 crc kubenswrapper[4986]: I1203 13:08:33.491406 4986 patch_prober.go:28] interesting pod/machine-config-daemon-xggpj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 13:08:33 crc kubenswrapper[4986]: I1203 13:08:33.492025 4986 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 13:08:34 crc kubenswrapper[4986]: I1203 13:08:34.644974 4986 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 03 13:09:03 crc kubenswrapper[4986]: I1203 13:09:03.491380 4986 patch_prober.go:28] interesting pod/machine-config-daemon-xggpj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 13:09:03 crc kubenswrapper[4986]: I1203 13:09:03.492068 4986 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 13:09:03 crc kubenswrapper[4986]: I1203 13:09:03.492130 4986 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" Dec 03 13:09:03 crc kubenswrapper[4986]: I1203 13:09:03.492933 4986 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7d6eb68371a8474a49cff3df53b688047bada4287bac42533432af78bbe4483e"} 
pod="openshift-machine-config-operator/machine-config-daemon-xggpj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 13:09:03 crc kubenswrapper[4986]: I1203 13:09:03.493020 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" containerID="cri-o://7d6eb68371a8474a49cff3df53b688047bada4287bac42533432af78bbe4483e" gracePeriod=600 Dec 03 13:09:04 crc kubenswrapper[4986]: I1203 13:09:04.562403 4986 generic.go:334] "Generic (PLEG): container finished" podID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerID="7d6eb68371a8474a49cff3df53b688047bada4287bac42533432af78bbe4483e" exitCode=0 Dec 03 13:09:04 crc kubenswrapper[4986]: I1203 13:09:04.562449 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" event={"ID":"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c","Type":"ContainerDied","Data":"7d6eb68371a8474a49cff3df53b688047bada4287bac42533432af78bbe4483e"} Dec 03 13:09:04 crc kubenswrapper[4986]: I1203 13:09:04.562784 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" event={"ID":"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c","Type":"ContainerStarted","Data":"8955c8a516ca8c4bf5f23613b3c8c76be6f843b93d44621c8aa9c9ffd7fe8443"} Dec 03 13:09:04 crc kubenswrapper[4986]: I1203 13:09:04.562807 4986 scope.go:117] "RemoveContainer" containerID="4f3ec67bf8f28f554a2921325941f4b24fa918b706c1177ff1b3172fba622a6c" Dec 03 13:09:18 crc kubenswrapper[4986]: I1203 13:09:18.607995 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pm8hm"] Dec 03 13:09:18 crc kubenswrapper[4986]: E1203 13:09:18.610190 4986 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d4e187c5-28e2-4881-8f59-214d93c767b1" containerName="registry" Dec 03 13:09:18 crc kubenswrapper[4986]: I1203 13:09:18.610404 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4e187c5-28e2-4881-8f59-214d93c767b1" containerName="registry" Dec 03 13:09:18 crc kubenswrapper[4986]: I1203 13:09:18.610686 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4e187c5-28e2-4881-8f59-214d93c767b1" containerName="registry" Dec 03 13:09:18 crc kubenswrapper[4986]: I1203 13:09:18.612026 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pm8hm" Dec 03 13:09:18 crc kubenswrapper[4986]: I1203 13:09:18.626265 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pm8hm"] Dec 03 13:09:18 crc kubenswrapper[4986]: I1203 13:09:18.648703 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1000a4c1-4987-45e1-a7ee-be54b5d4023a-utilities\") pod \"certified-operators-pm8hm\" (UID: \"1000a4c1-4987-45e1-a7ee-be54b5d4023a\") " pod="openshift-marketplace/certified-operators-pm8hm" Dec 03 13:09:18 crc kubenswrapper[4986]: I1203 13:09:18.648895 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwl9j\" (UniqueName: \"kubernetes.io/projected/1000a4c1-4987-45e1-a7ee-be54b5d4023a-kube-api-access-cwl9j\") pod \"certified-operators-pm8hm\" (UID: \"1000a4c1-4987-45e1-a7ee-be54b5d4023a\") " pod="openshift-marketplace/certified-operators-pm8hm" Dec 03 13:09:18 crc kubenswrapper[4986]: I1203 13:09:18.648997 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1000a4c1-4987-45e1-a7ee-be54b5d4023a-catalog-content\") pod \"certified-operators-pm8hm\" (UID: 
\"1000a4c1-4987-45e1-a7ee-be54b5d4023a\") " pod="openshift-marketplace/certified-operators-pm8hm" Dec 03 13:09:18 crc kubenswrapper[4986]: I1203 13:09:18.749539 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1000a4c1-4987-45e1-a7ee-be54b5d4023a-catalog-content\") pod \"certified-operators-pm8hm\" (UID: \"1000a4c1-4987-45e1-a7ee-be54b5d4023a\") " pod="openshift-marketplace/certified-operators-pm8hm" Dec 03 13:09:18 crc kubenswrapper[4986]: I1203 13:09:18.749873 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1000a4c1-4987-45e1-a7ee-be54b5d4023a-utilities\") pod \"certified-operators-pm8hm\" (UID: \"1000a4c1-4987-45e1-a7ee-be54b5d4023a\") " pod="openshift-marketplace/certified-operators-pm8hm" Dec 03 13:09:18 crc kubenswrapper[4986]: I1203 13:09:18.750018 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwl9j\" (UniqueName: \"kubernetes.io/projected/1000a4c1-4987-45e1-a7ee-be54b5d4023a-kube-api-access-cwl9j\") pod \"certified-operators-pm8hm\" (UID: \"1000a4c1-4987-45e1-a7ee-be54b5d4023a\") " pod="openshift-marketplace/certified-operators-pm8hm" Dec 03 13:09:18 crc kubenswrapper[4986]: I1203 13:09:18.750056 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1000a4c1-4987-45e1-a7ee-be54b5d4023a-catalog-content\") pod \"certified-operators-pm8hm\" (UID: \"1000a4c1-4987-45e1-a7ee-be54b5d4023a\") " pod="openshift-marketplace/certified-operators-pm8hm" Dec 03 13:09:18 crc kubenswrapper[4986]: I1203 13:09:18.750303 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1000a4c1-4987-45e1-a7ee-be54b5d4023a-utilities\") pod \"certified-operators-pm8hm\" (UID: \"1000a4c1-4987-45e1-a7ee-be54b5d4023a\") 
" pod="openshift-marketplace/certified-operators-pm8hm" Dec 03 13:09:18 crc kubenswrapper[4986]: I1203 13:09:18.791685 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwl9j\" (UniqueName: \"kubernetes.io/projected/1000a4c1-4987-45e1-a7ee-be54b5d4023a-kube-api-access-cwl9j\") pod \"certified-operators-pm8hm\" (UID: \"1000a4c1-4987-45e1-a7ee-be54b5d4023a\") " pod="openshift-marketplace/certified-operators-pm8hm" Dec 03 13:09:18 crc kubenswrapper[4986]: I1203 13:09:18.934637 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pm8hm" Dec 03 13:09:19 crc kubenswrapper[4986]: I1203 13:09:19.155331 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pm8hm"] Dec 03 13:09:19 crc kubenswrapper[4986]: I1203 13:09:19.662382 4986 generic.go:334] "Generic (PLEG): container finished" podID="1000a4c1-4987-45e1-a7ee-be54b5d4023a" containerID="72f7c1bbb5f8e49352dd7006d76a22e025b1ee6cf01222bff0a2d645605828d1" exitCode=0 Dec 03 13:09:19 crc kubenswrapper[4986]: I1203 13:09:19.662457 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pm8hm" event={"ID":"1000a4c1-4987-45e1-a7ee-be54b5d4023a","Type":"ContainerDied","Data":"72f7c1bbb5f8e49352dd7006d76a22e025b1ee6cf01222bff0a2d645605828d1"} Dec 03 13:09:19 crc kubenswrapper[4986]: I1203 13:09:19.662525 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pm8hm" event={"ID":"1000a4c1-4987-45e1-a7ee-be54b5d4023a","Type":"ContainerStarted","Data":"48dae0debf00154b51bfc69ef5e80d130f4f77214761915b8e897a50b0f4657a"} Dec 03 13:09:19 crc kubenswrapper[4986]: I1203 13:09:19.664592 4986 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 13:09:20 crc kubenswrapper[4986]: I1203 13:09:20.669652 4986 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-pm8hm" event={"ID":"1000a4c1-4987-45e1-a7ee-be54b5d4023a","Type":"ContainerStarted","Data":"2661110cd5ce413194fb7a1c7abab5e3e5d1c2a82715fe47cb662ff4b30030e2"} Dec 03 13:09:21 crc kubenswrapper[4986]: I1203 13:09:21.677508 4986 generic.go:334] "Generic (PLEG): container finished" podID="1000a4c1-4987-45e1-a7ee-be54b5d4023a" containerID="2661110cd5ce413194fb7a1c7abab5e3e5d1c2a82715fe47cb662ff4b30030e2" exitCode=0 Dec 03 13:09:21 crc kubenswrapper[4986]: I1203 13:09:21.677566 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pm8hm" event={"ID":"1000a4c1-4987-45e1-a7ee-be54b5d4023a","Type":"ContainerDied","Data":"2661110cd5ce413194fb7a1c7abab5e3e5d1c2a82715fe47cb662ff4b30030e2"} Dec 03 13:09:22 crc kubenswrapper[4986]: I1203 13:09:22.687790 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pm8hm" event={"ID":"1000a4c1-4987-45e1-a7ee-be54b5d4023a","Type":"ContainerStarted","Data":"fdc3775ff201008f71b66158d27ff6a4156d6049401760790dfe5d75a1f319a9"} Dec 03 13:09:22 crc kubenswrapper[4986]: I1203 13:09:22.715100 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pm8hm" podStartSLOduration=2.135240124 podStartE2EDuration="4.715069767s" podCreationTimestamp="2025-12-03 13:09:18 +0000 UTC" firstStartedPulling="2025-12-03 13:09:19.664003165 +0000 UTC m=+819.130434396" lastFinishedPulling="2025-12-03 13:09:22.243832848 +0000 UTC m=+821.710264039" observedRunningTime="2025-12-03 13:09:22.711232658 +0000 UTC m=+822.177663899" watchObservedRunningTime="2025-12-03 13:09:22.715069767 +0000 UTC m=+822.181500998" Dec 03 13:09:28 crc kubenswrapper[4986]: I1203 13:09:28.935429 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pm8hm" Dec 03 13:09:28 crc kubenswrapper[4986]: I1203 
13:09:28.935857 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pm8hm" Dec 03 13:09:28 crc kubenswrapper[4986]: I1203 13:09:28.994062 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pm8hm" Dec 03 13:09:29 crc kubenswrapper[4986]: I1203 13:09:29.800534 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pm8hm" Dec 03 13:09:29 crc kubenswrapper[4986]: I1203 13:09:29.843856 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pm8hm"] Dec 03 13:09:31 crc kubenswrapper[4986]: I1203 13:09:31.759455 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pm8hm" podUID="1000a4c1-4987-45e1-a7ee-be54b5d4023a" containerName="registry-server" containerID="cri-o://fdc3775ff201008f71b66158d27ff6a4156d6049401760790dfe5d75a1f319a9" gracePeriod=2 Dec 03 13:09:32 crc kubenswrapper[4986]: I1203 13:09:32.757342 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pm8hm" Dec 03 13:09:32 crc kubenswrapper[4986]: I1203 13:09:32.767183 4986 generic.go:334] "Generic (PLEG): container finished" podID="1000a4c1-4987-45e1-a7ee-be54b5d4023a" containerID="fdc3775ff201008f71b66158d27ff6a4156d6049401760790dfe5d75a1f319a9" exitCode=0 Dec 03 13:09:32 crc kubenswrapper[4986]: I1203 13:09:32.767238 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pm8hm" event={"ID":"1000a4c1-4987-45e1-a7ee-be54b5d4023a","Type":"ContainerDied","Data":"fdc3775ff201008f71b66158d27ff6a4156d6049401760790dfe5d75a1f319a9"} Dec 03 13:09:32 crc kubenswrapper[4986]: I1203 13:09:32.767272 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pm8hm" event={"ID":"1000a4c1-4987-45e1-a7ee-be54b5d4023a","Type":"ContainerDied","Data":"48dae0debf00154b51bfc69ef5e80d130f4f77214761915b8e897a50b0f4657a"} Dec 03 13:09:32 crc kubenswrapper[4986]: I1203 13:09:32.767332 4986 scope.go:117] "RemoveContainer" containerID="fdc3775ff201008f71b66158d27ff6a4156d6049401760790dfe5d75a1f319a9" Dec 03 13:09:32 crc kubenswrapper[4986]: I1203 13:09:32.767538 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pm8hm" Dec 03 13:09:32 crc kubenswrapper[4986]: I1203 13:09:32.790239 4986 scope.go:117] "RemoveContainer" containerID="2661110cd5ce413194fb7a1c7abab5e3e5d1c2a82715fe47cb662ff4b30030e2" Dec 03 13:09:32 crc kubenswrapper[4986]: I1203 13:09:32.804716 4986 scope.go:117] "RemoveContainer" containerID="72f7c1bbb5f8e49352dd7006d76a22e025b1ee6cf01222bff0a2d645605828d1" Dec 03 13:09:32 crc kubenswrapper[4986]: I1203 13:09:32.820514 4986 scope.go:117] "RemoveContainer" containerID="fdc3775ff201008f71b66158d27ff6a4156d6049401760790dfe5d75a1f319a9" Dec 03 13:09:32 crc kubenswrapper[4986]: E1203 13:09:32.822661 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdc3775ff201008f71b66158d27ff6a4156d6049401760790dfe5d75a1f319a9\": container with ID starting with fdc3775ff201008f71b66158d27ff6a4156d6049401760790dfe5d75a1f319a9 not found: ID does not exist" containerID="fdc3775ff201008f71b66158d27ff6a4156d6049401760790dfe5d75a1f319a9" Dec 03 13:09:32 crc kubenswrapper[4986]: I1203 13:09:32.822714 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdc3775ff201008f71b66158d27ff6a4156d6049401760790dfe5d75a1f319a9"} err="failed to get container status \"fdc3775ff201008f71b66158d27ff6a4156d6049401760790dfe5d75a1f319a9\": rpc error: code = NotFound desc = could not find container \"fdc3775ff201008f71b66158d27ff6a4156d6049401760790dfe5d75a1f319a9\": container with ID starting with fdc3775ff201008f71b66158d27ff6a4156d6049401760790dfe5d75a1f319a9 not found: ID does not exist" Dec 03 13:09:32 crc kubenswrapper[4986]: I1203 13:09:32.822745 4986 scope.go:117] "RemoveContainer" containerID="2661110cd5ce413194fb7a1c7abab5e3e5d1c2a82715fe47cb662ff4b30030e2" Dec 03 13:09:32 crc kubenswrapper[4986]: E1203 13:09:32.823148 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"2661110cd5ce413194fb7a1c7abab5e3e5d1c2a82715fe47cb662ff4b30030e2\": container with ID starting with 2661110cd5ce413194fb7a1c7abab5e3e5d1c2a82715fe47cb662ff4b30030e2 not found: ID does not exist" containerID="2661110cd5ce413194fb7a1c7abab5e3e5d1c2a82715fe47cb662ff4b30030e2" Dec 03 13:09:32 crc kubenswrapper[4986]: I1203 13:09:32.823185 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2661110cd5ce413194fb7a1c7abab5e3e5d1c2a82715fe47cb662ff4b30030e2"} err="failed to get container status \"2661110cd5ce413194fb7a1c7abab5e3e5d1c2a82715fe47cb662ff4b30030e2\": rpc error: code = NotFound desc = could not find container \"2661110cd5ce413194fb7a1c7abab5e3e5d1c2a82715fe47cb662ff4b30030e2\": container with ID starting with 2661110cd5ce413194fb7a1c7abab5e3e5d1c2a82715fe47cb662ff4b30030e2 not found: ID does not exist" Dec 03 13:09:32 crc kubenswrapper[4986]: I1203 13:09:32.823215 4986 scope.go:117] "RemoveContainer" containerID="72f7c1bbb5f8e49352dd7006d76a22e025b1ee6cf01222bff0a2d645605828d1" Dec 03 13:09:32 crc kubenswrapper[4986]: E1203 13:09:32.823549 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72f7c1bbb5f8e49352dd7006d76a22e025b1ee6cf01222bff0a2d645605828d1\": container with ID starting with 72f7c1bbb5f8e49352dd7006d76a22e025b1ee6cf01222bff0a2d645605828d1 not found: ID does not exist" containerID="72f7c1bbb5f8e49352dd7006d76a22e025b1ee6cf01222bff0a2d645605828d1" Dec 03 13:09:32 crc kubenswrapper[4986]: I1203 13:09:32.823578 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72f7c1bbb5f8e49352dd7006d76a22e025b1ee6cf01222bff0a2d645605828d1"} err="failed to get container status \"72f7c1bbb5f8e49352dd7006d76a22e025b1ee6cf01222bff0a2d645605828d1\": rpc error: code = NotFound desc = could not find container 
\"72f7c1bbb5f8e49352dd7006d76a22e025b1ee6cf01222bff0a2d645605828d1\": container with ID starting with 72f7c1bbb5f8e49352dd7006d76a22e025b1ee6cf01222bff0a2d645605828d1 not found: ID does not exist" Dec 03 13:09:32 crc kubenswrapper[4986]: I1203 13:09:32.919392 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1000a4c1-4987-45e1-a7ee-be54b5d4023a-utilities\") pod \"1000a4c1-4987-45e1-a7ee-be54b5d4023a\" (UID: \"1000a4c1-4987-45e1-a7ee-be54b5d4023a\") " Dec 03 13:09:32 crc kubenswrapper[4986]: I1203 13:09:32.919771 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwl9j\" (UniqueName: \"kubernetes.io/projected/1000a4c1-4987-45e1-a7ee-be54b5d4023a-kube-api-access-cwl9j\") pod \"1000a4c1-4987-45e1-a7ee-be54b5d4023a\" (UID: \"1000a4c1-4987-45e1-a7ee-be54b5d4023a\") " Dec 03 13:09:32 crc kubenswrapper[4986]: I1203 13:09:32.919850 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1000a4c1-4987-45e1-a7ee-be54b5d4023a-catalog-content\") pod \"1000a4c1-4987-45e1-a7ee-be54b5d4023a\" (UID: \"1000a4c1-4987-45e1-a7ee-be54b5d4023a\") " Dec 03 13:09:32 crc kubenswrapper[4986]: I1203 13:09:32.920443 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1000a4c1-4987-45e1-a7ee-be54b5d4023a-utilities" (OuterVolumeSpecName: "utilities") pod "1000a4c1-4987-45e1-a7ee-be54b5d4023a" (UID: "1000a4c1-4987-45e1-a7ee-be54b5d4023a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:09:32 crc kubenswrapper[4986]: I1203 13:09:32.924641 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1000a4c1-4987-45e1-a7ee-be54b5d4023a-kube-api-access-cwl9j" (OuterVolumeSpecName: "kube-api-access-cwl9j") pod "1000a4c1-4987-45e1-a7ee-be54b5d4023a" (UID: "1000a4c1-4987-45e1-a7ee-be54b5d4023a"). InnerVolumeSpecName "kube-api-access-cwl9j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:09:32 crc kubenswrapper[4986]: I1203 13:09:32.963149 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1000a4c1-4987-45e1-a7ee-be54b5d4023a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1000a4c1-4987-45e1-a7ee-be54b5d4023a" (UID: "1000a4c1-4987-45e1-a7ee-be54b5d4023a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:09:33 crc kubenswrapper[4986]: I1203 13:09:33.021492 4986 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1000a4c1-4987-45e1-a7ee-be54b5d4023a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 13:09:33 crc kubenswrapper[4986]: I1203 13:09:33.021528 4986 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1000a4c1-4987-45e1-a7ee-be54b5d4023a-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 13:09:33 crc kubenswrapper[4986]: I1203 13:09:33.021537 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwl9j\" (UniqueName: \"kubernetes.io/projected/1000a4c1-4987-45e1-a7ee-be54b5d4023a-kube-api-access-cwl9j\") on node \"crc\" DevicePath \"\"" Dec 03 13:09:33 crc kubenswrapper[4986]: I1203 13:09:33.104657 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pm8hm"] Dec 03 13:09:33 crc kubenswrapper[4986]: I1203 
13:09:33.108558 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pm8hm"] Dec 03 13:09:34 crc kubenswrapper[4986]: I1203 13:09:34.951746 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1000a4c1-4987-45e1-a7ee-be54b5d4023a" path="/var/lib/kubelet/pods/1000a4c1-4987-45e1-a7ee-be54b5d4023a/volumes" Dec 03 13:09:52 crc kubenswrapper[4986]: I1203 13:09:52.223163 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cl9kr"] Dec 03 13:09:52 crc kubenswrapper[4986]: E1203 13:09:52.223911 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1000a4c1-4987-45e1-a7ee-be54b5d4023a" containerName="extract-content" Dec 03 13:09:52 crc kubenswrapper[4986]: I1203 13:09:52.223925 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="1000a4c1-4987-45e1-a7ee-be54b5d4023a" containerName="extract-content" Dec 03 13:09:52 crc kubenswrapper[4986]: E1203 13:09:52.223941 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1000a4c1-4987-45e1-a7ee-be54b5d4023a" containerName="extract-utilities" Dec 03 13:09:52 crc kubenswrapper[4986]: I1203 13:09:52.223950 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="1000a4c1-4987-45e1-a7ee-be54b5d4023a" containerName="extract-utilities" Dec 03 13:09:52 crc kubenswrapper[4986]: E1203 13:09:52.223961 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1000a4c1-4987-45e1-a7ee-be54b5d4023a" containerName="registry-server" Dec 03 13:09:52 crc kubenswrapper[4986]: I1203 13:09:52.223968 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="1000a4c1-4987-45e1-a7ee-be54b5d4023a" containerName="registry-server" Dec 03 13:09:52 crc kubenswrapper[4986]: I1203 13:09:52.224085 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="1000a4c1-4987-45e1-a7ee-be54b5d4023a" containerName="registry-server" Dec 03 13:09:52 crc kubenswrapper[4986]: I1203 
13:09:52.224879 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cl9kr" Dec 03 13:09:52 crc kubenswrapper[4986]: I1203 13:09:52.249198 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cl9kr"] Dec 03 13:09:52 crc kubenswrapper[4986]: I1203 13:09:52.402219 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc69d339-6b83-42e6-bb52-0726bffadd0f-catalog-content\") pod \"redhat-marketplace-cl9kr\" (UID: \"fc69d339-6b83-42e6-bb52-0726bffadd0f\") " pod="openshift-marketplace/redhat-marketplace-cl9kr" Dec 03 13:09:52 crc kubenswrapper[4986]: I1203 13:09:52.402272 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw8vg\" (UniqueName: \"kubernetes.io/projected/fc69d339-6b83-42e6-bb52-0726bffadd0f-kube-api-access-tw8vg\") pod \"redhat-marketplace-cl9kr\" (UID: \"fc69d339-6b83-42e6-bb52-0726bffadd0f\") " pod="openshift-marketplace/redhat-marketplace-cl9kr" Dec 03 13:09:52 crc kubenswrapper[4986]: I1203 13:09:52.402455 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc69d339-6b83-42e6-bb52-0726bffadd0f-utilities\") pod \"redhat-marketplace-cl9kr\" (UID: \"fc69d339-6b83-42e6-bb52-0726bffadd0f\") " pod="openshift-marketplace/redhat-marketplace-cl9kr" Dec 03 13:09:52 crc kubenswrapper[4986]: I1203 13:09:52.504103 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc69d339-6b83-42e6-bb52-0726bffadd0f-catalog-content\") pod \"redhat-marketplace-cl9kr\" (UID: \"fc69d339-6b83-42e6-bb52-0726bffadd0f\") " pod="openshift-marketplace/redhat-marketplace-cl9kr" Dec 03 13:09:52 crc kubenswrapper[4986]: 
I1203 13:09:52.504182 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tw8vg\" (UniqueName: \"kubernetes.io/projected/fc69d339-6b83-42e6-bb52-0726bffadd0f-kube-api-access-tw8vg\") pod \"redhat-marketplace-cl9kr\" (UID: \"fc69d339-6b83-42e6-bb52-0726bffadd0f\") " pod="openshift-marketplace/redhat-marketplace-cl9kr" Dec 03 13:09:52 crc kubenswrapper[4986]: I1203 13:09:52.504244 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc69d339-6b83-42e6-bb52-0726bffadd0f-utilities\") pod \"redhat-marketplace-cl9kr\" (UID: \"fc69d339-6b83-42e6-bb52-0726bffadd0f\") " pod="openshift-marketplace/redhat-marketplace-cl9kr" Dec 03 13:09:52 crc kubenswrapper[4986]: I1203 13:09:52.504826 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc69d339-6b83-42e6-bb52-0726bffadd0f-catalog-content\") pod \"redhat-marketplace-cl9kr\" (UID: \"fc69d339-6b83-42e6-bb52-0726bffadd0f\") " pod="openshift-marketplace/redhat-marketplace-cl9kr" Dec 03 13:09:52 crc kubenswrapper[4986]: I1203 13:09:52.504858 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc69d339-6b83-42e6-bb52-0726bffadd0f-utilities\") pod \"redhat-marketplace-cl9kr\" (UID: \"fc69d339-6b83-42e6-bb52-0726bffadd0f\") " pod="openshift-marketplace/redhat-marketplace-cl9kr" Dec 03 13:09:52 crc kubenswrapper[4986]: I1203 13:09:52.523564 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tw8vg\" (UniqueName: \"kubernetes.io/projected/fc69d339-6b83-42e6-bb52-0726bffadd0f-kube-api-access-tw8vg\") pod \"redhat-marketplace-cl9kr\" (UID: \"fc69d339-6b83-42e6-bb52-0726bffadd0f\") " pod="openshift-marketplace/redhat-marketplace-cl9kr" Dec 03 13:09:52 crc kubenswrapper[4986]: I1203 13:09:52.550710 4986 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cl9kr" Dec 03 13:09:52 crc kubenswrapper[4986]: I1203 13:09:52.735737 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cl9kr"] Dec 03 13:09:52 crc kubenswrapper[4986]: I1203 13:09:52.894975 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cl9kr" event={"ID":"fc69d339-6b83-42e6-bb52-0726bffadd0f","Type":"ContainerStarted","Data":"a6a81b035e768d612b1e6492fe7dbb3bb2b1f0bc56fbc9bcfcd34e80d5393d3f"} Dec 03 13:09:52 crc kubenswrapper[4986]: I1203 13:09:52.895339 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cl9kr" event={"ID":"fc69d339-6b83-42e6-bb52-0726bffadd0f","Type":"ContainerStarted","Data":"87f298c7e6777c59d78f9f79f5971793602a4e4681a9898a239e9b0d5c349dfa"} Dec 03 13:09:53 crc kubenswrapper[4986]: I1203 13:09:53.901929 4986 generic.go:334] "Generic (PLEG): container finished" podID="fc69d339-6b83-42e6-bb52-0726bffadd0f" containerID="a6a81b035e768d612b1e6492fe7dbb3bb2b1f0bc56fbc9bcfcd34e80d5393d3f" exitCode=0 Dec 03 13:09:53 crc kubenswrapper[4986]: I1203 13:09:53.901968 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cl9kr" event={"ID":"fc69d339-6b83-42e6-bb52-0726bffadd0f","Type":"ContainerDied","Data":"a6a81b035e768d612b1e6492fe7dbb3bb2b1f0bc56fbc9bcfcd34e80d5393d3f"} Dec 03 13:09:54 crc kubenswrapper[4986]: I1203 13:09:54.909936 4986 generic.go:334] "Generic (PLEG): container finished" podID="fc69d339-6b83-42e6-bb52-0726bffadd0f" containerID="0ffb9d280b4542570912f4b5e49a82fb8cc5598a1319fa20a72bf93a1cfbe619" exitCode=0 Dec 03 13:09:54 crc kubenswrapper[4986]: I1203 13:09:54.910021 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cl9kr" 
event={"ID":"fc69d339-6b83-42e6-bb52-0726bffadd0f","Type":"ContainerDied","Data":"0ffb9d280b4542570912f4b5e49a82fb8cc5598a1319fa20a72bf93a1cfbe619"} Dec 03 13:09:55 crc kubenswrapper[4986]: I1203 13:09:55.917566 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cl9kr" event={"ID":"fc69d339-6b83-42e6-bb52-0726bffadd0f","Type":"ContainerStarted","Data":"65400d8f960e092955ca845043eee6ce7d44bdb964fe666cd008608167495fa5"} Dec 03 13:10:02 crc kubenswrapper[4986]: I1203 13:10:02.551713 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cl9kr" Dec 03 13:10:02 crc kubenswrapper[4986]: I1203 13:10:02.552570 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cl9kr" Dec 03 13:10:02 crc kubenswrapper[4986]: I1203 13:10:02.594165 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cl9kr" Dec 03 13:10:02 crc kubenswrapper[4986]: I1203 13:10:02.611506 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cl9kr" podStartSLOduration=9.202134571 podStartE2EDuration="10.611448665s" podCreationTimestamp="2025-12-03 13:09:52 +0000 UTC" firstStartedPulling="2025-12-03 13:09:53.903678399 +0000 UTC m=+853.370109590" lastFinishedPulling="2025-12-03 13:09:55.312992493 +0000 UTC m=+854.779423684" observedRunningTime="2025-12-03 13:09:55.941247586 +0000 UTC m=+855.407678777" watchObservedRunningTime="2025-12-03 13:10:02.611448665 +0000 UTC m=+862.077879876" Dec 03 13:10:02 crc kubenswrapper[4986]: I1203 13:10:02.999113 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cl9kr" Dec 03 13:10:03 crc kubenswrapper[4986]: I1203 13:10:03.052595 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-cl9kr"] Dec 03 13:10:04 crc kubenswrapper[4986]: I1203 13:10:04.962672 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cl9kr" podUID="fc69d339-6b83-42e6-bb52-0726bffadd0f" containerName="registry-server" containerID="cri-o://65400d8f960e092955ca845043eee6ce7d44bdb964fe666cd008608167495fa5" gracePeriod=2 Dec 03 13:10:05 crc kubenswrapper[4986]: I1203 13:10:05.334526 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cl9kr" Dec 03 13:10:05 crc kubenswrapper[4986]: I1203 13:10:05.456139 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc69d339-6b83-42e6-bb52-0726bffadd0f-utilities\") pod \"fc69d339-6b83-42e6-bb52-0726bffadd0f\" (UID: \"fc69d339-6b83-42e6-bb52-0726bffadd0f\") " Dec 03 13:10:05 crc kubenswrapper[4986]: I1203 13:10:05.456185 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc69d339-6b83-42e6-bb52-0726bffadd0f-catalog-content\") pod \"fc69d339-6b83-42e6-bb52-0726bffadd0f\" (UID: \"fc69d339-6b83-42e6-bb52-0726bffadd0f\") " Dec 03 13:10:05 crc kubenswrapper[4986]: I1203 13:10:05.456275 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tw8vg\" (UniqueName: \"kubernetes.io/projected/fc69d339-6b83-42e6-bb52-0726bffadd0f-kube-api-access-tw8vg\") pod \"fc69d339-6b83-42e6-bb52-0726bffadd0f\" (UID: \"fc69d339-6b83-42e6-bb52-0726bffadd0f\") " Dec 03 13:10:05 crc kubenswrapper[4986]: I1203 13:10:05.457079 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc69d339-6b83-42e6-bb52-0726bffadd0f-utilities" (OuterVolumeSpecName: "utilities") pod "fc69d339-6b83-42e6-bb52-0726bffadd0f" (UID: 
"fc69d339-6b83-42e6-bb52-0726bffadd0f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:10:05 crc kubenswrapper[4986]: I1203 13:10:05.462496 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc69d339-6b83-42e6-bb52-0726bffadd0f-kube-api-access-tw8vg" (OuterVolumeSpecName: "kube-api-access-tw8vg") pod "fc69d339-6b83-42e6-bb52-0726bffadd0f" (UID: "fc69d339-6b83-42e6-bb52-0726bffadd0f"). InnerVolumeSpecName "kube-api-access-tw8vg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:10:05 crc kubenswrapper[4986]: I1203 13:10:05.473182 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc69d339-6b83-42e6-bb52-0726bffadd0f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fc69d339-6b83-42e6-bb52-0726bffadd0f" (UID: "fc69d339-6b83-42e6-bb52-0726bffadd0f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:10:05 crc kubenswrapper[4986]: I1203 13:10:05.557764 4986 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc69d339-6b83-42e6-bb52-0726bffadd0f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 13:10:05 crc kubenswrapper[4986]: I1203 13:10:05.557803 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tw8vg\" (UniqueName: \"kubernetes.io/projected/fc69d339-6b83-42e6-bb52-0726bffadd0f-kube-api-access-tw8vg\") on node \"crc\" DevicePath \"\"" Dec 03 13:10:05 crc kubenswrapper[4986]: I1203 13:10:05.557815 4986 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc69d339-6b83-42e6-bb52-0726bffadd0f-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 13:10:05 crc kubenswrapper[4986]: I1203 13:10:05.969522 4986 generic.go:334] "Generic (PLEG): container finished" 
podID="fc69d339-6b83-42e6-bb52-0726bffadd0f" containerID="65400d8f960e092955ca845043eee6ce7d44bdb964fe666cd008608167495fa5" exitCode=0 Dec 03 13:10:05 crc kubenswrapper[4986]: I1203 13:10:05.969561 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cl9kr" event={"ID":"fc69d339-6b83-42e6-bb52-0726bffadd0f","Type":"ContainerDied","Data":"65400d8f960e092955ca845043eee6ce7d44bdb964fe666cd008608167495fa5"} Dec 03 13:10:05 crc kubenswrapper[4986]: I1203 13:10:05.969604 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cl9kr" event={"ID":"fc69d339-6b83-42e6-bb52-0726bffadd0f","Type":"ContainerDied","Data":"87f298c7e6777c59d78f9f79f5971793602a4e4681a9898a239e9b0d5c349dfa"} Dec 03 13:10:05 crc kubenswrapper[4986]: I1203 13:10:05.969627 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cl9kr" Dec 03 13:10:05 crc kubenswrapper[4986]: I1203 13:10:05.969630 4986 scope.go:117] "RemoveContainer" containerID="65400d8f960e092955ca845043eee6ce7d44bdb964fe666cd008608167495fa5" Dec 03 13:10:05 crc kubenswrapper[4986]: I1203 13:10:05.985121 4986 scope.go:117] "RemoveContainer" containerID="0ffb9d280b4542570912f4b5e49a82fb8cc5598a1319fa20a72bf93a1cfbe619" Dec 03 13:10:05 crc kubenswrapper[4986]: I1203 13:10:05.998470 4986 scope.go:117] "RemoveContainer" containerID="a6a81b035e768d612b1e6492fe7dbb3bb2b1f0bc56fbc9bcfcd34e80d5393d3f" Dec 03 13:10:06 crc kubenswrapper[4986]: I1203 13:10:06.009235 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cl9kr"] Dec 03 13:10:06 crc kubenswrapper[4986]: I1203 13:10:06.013762 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cl9kr"] Dec 03 13:10:06 crc kubenswrapper[4986]: I1203 13:10:06.025905 4986 scope.go:117] "RemoveContainer" 
containerID="65400d8f960e092955ca845043eee6ce7d44bdb964fe666cd008608167495fa5" Dec 03 13:10:06 crc kubenswrapper[4986]: E1203 13:10:06.026772 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65400d8f960e092955ca845043eee6ce7d44bdb964fe666cd008608167495fa5\": container with ID starting with 65400d8f960e092955ca845043eee6ce7d44bdb964fe666cd008608167495fa5 not found: ID does not exist" containerID="65400d8f960e092955ca845043eee6ce7d44bdb964fe666cd008608167495fa5" Dec 03 13:10:06 crc kubenswrapper[4986]: I1203 13:10:06.026804 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65400d8f960e092955ca845043eee6ce7d44bdb964fe666cd008608167495fa5"} err="failed to get container status \"65400d8f960e092955ca845043eee6ce7d44bdb964fe666cd008608167495fa5\": rpc error: code = NotFound desc = could not find container \"65400d8f960e092955ca845043eee6ce7d44bdb964fe666cd008608167495fa5\": container with ID starting with 65400d8f960e092955ca845043eee6ce7d44bdb964fe666cd008608167495fa5 not found: ID does not exist" Dec 03 13:10:06 crc kubenswrapper[4986]: I1203 13:10:06.026823 4986 scope.go:117] "RemoveContainer" containerID="0ffb9d280b4542570912f4b5e49a82fb8cc5598a1319fa20a72bf93a1cfbe619" Dec 03 13:10:06 crc kubenswrapper[4986]: E1203 13:10:06.027250 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ffb9d280b4542570912f4b5e49a82fb8cc5598a1319fa20a72bf93a1cfbe619\": container with ID starting with 0ffb9d280b4542570912f4b5e49a82fb8cc5598a1319fa20a72bf93a1cfbe619 not found: ID does not exist" containerID="0ffb9d280b4542570912f4b5e49a82fb8cc5598a1319fa20a72bf93a1cfbe619" Dec 03 13:10:06 crc kubenswrapper[4986]: I1203 13:10:06.027272 4986 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0ffb9d280b4542570912f4b5e49a82fb8cc5598a1319fa20a72bf93a1cfbe619"} err="failed to get container status \"0ffb9d280b4542570912f4b5e49a82fb8cc5598a1319fa20a72bf93a1cfbe619\": rpc error: code = NotFound desc = could not find container \"0ffb9d280b4542570912f4b5e49a82fb8cc5598a1319fa20a72bf93a1cfbe619\": container with ID starting with 0ffb9d280b4542570912f4b5e49a82fb8cc5598a1319fa20a72bf93a1cfbe619 not found: ID does not exist" Dec 03 13:10:06 crc kubenswrapper[4986]: I1203 13:10:06.027295 4986 scope.go:117] "RemoveContainer" containerID="a6a81b035e768d612b1e6492fe7dbb3bb2b1f0bc56fbc9bcfcd34e80d5393d3f" Dec 03 13:10:06 crc kubenswrapper[4986]: E1203 13:10:06.027634 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6a81b035e768d612b1e6492fe7dbb3bb2b1f0bc56fbc9bcfcd34e80d5393d3f\": container with ID starting with a6a81b035e768d612b1e6492fe7dbb3bb2b1f0bc56fbc9bcfcd34e80d5393d3f not found: ID does not exist" containerID="a6a81b035e768d612b1e6492fe7dbb3bb2b1f0bc56fbc9bcfcd34e80d5393d3f" Dec 03 13:10:06 crc kubenswrapper[4986]: I1203 13:10:06.027663 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6a81b035e768d612b1e6492fe7dbb3bb2b1f0bc56fbc9bcfcd34e80d5393d3f"} err="failed to get container status \"a6a81b035e768d612b1e6492fe7dbb3bb2b1f0bc56fbc9bcfcd34e80d5393d3f\": rpc error: code = NotFound desc = could not find container \"a6a81b035e768d612b1e6492fe7dbb3bb2b1f0bc56fbc9bcfcd34e80d5393d3f\": container with ID starting with a6a81b035e768d612b1e6492fe7dbb3bb2b1f0bc56fbc9bcfcd34e80d5393d3f not found: ID does not exist" Dec 03 13:10:06 crc kubenswrapper[4986]: I1203 13:10:06.952678 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc69d339-6b83-42e6-bb52-0726bffadd0f" path="/var/lib/kubelet/pods/fc69d339-6b83-42e6-bb52-0726bffadd0f/volumes" Dec 03 13:10:36 crc kubenswrapper[4986]: I1203 
13:10:36.668308 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-dzxc8"] Dec 03 13:10:36 crc kubenswrapper[4986]: E1203 13:10:36.668980 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc69d339-6b83-42e6-bb52-0726bffadd0f" containerName="extract-content" Dec 03 13:10:36 crc kubenswrapper[4986]: I1203 13:10:36.668993 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc69d339-6b83-42e6-bb52-0726bffadd0f" containerName="extract-content" Dec 03 13:10:36 crc kubenswrapper[4986]: E1203 13:10:36.669005 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc69d339-6b83-42e6-bb52-0726bffadd0f" containerName="extract-utilities" Dec 03 13:10:36 crc kubenswrapper[4986]: I1203 13:10:36.669011 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc69d339-6b83-42e6-bb52-0726bffadd0f" containerName="extract-utilities" Dec 03 13:10:36 crc kubenswrapper[4986]: E1203 13:10:36.669029 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc69d339-6b83-42e6-bb52-0726bffadd0f" containerName="registry-server" Dec 03 13:10:36 crc kubenswrapper[4986]: I1203 13:10:36.669036 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc69d339-6b83-42e6-bb52-0726bffadd0f" containerName="registry-server" Dec 03 13:10:36 crc kubenswrapper[4986]: I1203 13:10:36.669119 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc69d339-6b83-42e6-bb52-0726bffadd0f" containerName="registry-server" Dec 03 13:10:36 crc kubenswrapper[4986]: I1203 13:10:36.669487 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-dzxc8" Dec 03 13:10:36 crc kubenswrapper[4986]: I1203 13:10:36.673652 4986 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-gj9lx" Dec 03 13:10:36 crc kubenswrapper[4986]: I1203 13:10:36.673651 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Dec 03 13:10:36 crc kubenswrapper[4986]: I1203 13:10:36.673954 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Dec 03 13:10:36 crc kubenswrapper[4986]: I1203 13:10:36.677762 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-dzxc8"] Dec 03 13:10:36 crc kubenswrapper[4986]: I1203 13:10:36.681881 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-ftncc"] Dec 03 13:10:36 crc kubenswrapper[4986]: I1203 13:10:36.682687 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-ftncc" Dec 03 13:10:36 crc kubenswrapper[4986]: I1203 13:10:36.684866 4986 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-94ltt" Dec 03 13:10:36 crc kubenswrapper[4986]: I1203 13:10:36.685900 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-ftncc"] Dec 03 13:10:36 crc kubenswrapper[4986]: I1203 13:10:36.697187 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-2wgk2"] Dec 03 13:10:36 crc kubenswrapper[4986]: I1203 13:10:36.697941 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-2wgk2" Dec 03 13:10:36 crc kubenswrapper[4986]: I1203 13:10:36.704577 4986 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-lpwmj" Dec 03 13:10:36 crc kubenswrapper[4986]: I1203 13:10:36.716303 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-2wgk2"] Dec 03 13:10:36 crc kubenswrapper[4986]: I1203 13:10:36.774484 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hqmf\" (UniqueName: \"kubernetes.io/projected/f4e3f3b7-bc75-4d36-9278-773bdf1109df-kube-api-access-7hqmf\") pod \"cert-manager-cainjector-7f985d654d-dzxc8\" (UID: \"f4e3f3b7-bc75-4d36-9278-773bdf1109df\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-dzxc8" Dec 03 13:10:36 crc kubenswrapper[4986]: I1203 13:10:36.774527 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml9wj\" (UniqueName: \"kubernetes.io/projected/c0dbd0c1-fbde-463c-917f-d7d101f6c6e8-kube-api-access-ml9wj\") pod \"cert-manager-5b446d88c5-ftncc\" (UID: \"c0dbd0c1-fbde-463c-917f-d7d101f6c6e8\") " pod="cert-manager/cert-manager-5b446d88c5-ftncc" Dec 03 13:10:36 crc kubenswrapper[4986]: I1203 13:10:36.774571 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7976p\" (UniqueName: \"kubernetes.io/projected/e54cc3fa-08d8-433f-9db5-bcff8c8e43fe-kube-api-access-7976p\") pod \"cert-manager-webhook-5655c58dd6-2wgk2\" (UID: \"e54cc3fa-08d8-433f-9db5-bcff8c8e43fe\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-2wgk2" Dec 03 13:10:36 crc kubenswrapper[4986]: I1203 13:10:36.876088 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hqmf\" (UniqueName: 
\"kubernetes.io/projected/f4e3f3b7-bc75-4d36-9278-773bdf1109df-kube-api-access-7hqmf\") pod \"cert-manager-cainjector-7f985d654d-dzxc8\" (UID: \"f4e3f3b7-bc75-4d36-9278-773bdf1109df\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-dzxc8" Dec 03 13:10:36 crc kubenswrapper[4986]: I1203 13:10:36.876133 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ml9wj\" (UniqueName: \"kubernetes.io/projected/c0dbd0c1-fbde-463c-917f-d7d101f6c6e8-kube-api-access-ml9wj\") pod \"cert-manager-5b446d88c5-ftncc\" (UID: \"c0dbd0c1-fbde-463c-917f-d7d101f6c6e8\") " pod="cert-manager/cert-manager-5b446d88c5-ftncc" Dec 03 13:10:36 crc kubenswrapper[4986]: I1203 13:10:36.876179 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7976p\" (UniqueName: \"kubernetes.io/projected/e54cc3fa-08d8-433f-9db5-bcff8c8e43fe-kube-api-access-7976p\") pod \"cert-manager-webhook-5655c58dd6-2wgk2\" (UID: \"e54cc3fa-08d8-433f-9db5-bcff8c8e43fe\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-2wgk2" Dec 03 13:10:36 crc kubenswrapper[4986]: I1203 13:10:36.895621 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ml9wj\" (UniqueName: \"kubernetes.io/projected/c0dbd0c1-fbde-463c-917f-d7d101f6c6e8-kube-api-access-ml9wj\") pod \"cert-manager-5b446d88c5-ftncc\" (UID: \"c0dbd0c1-fbde-463c-917f-d7d101f6c6e8\") " pod="cert-manager/cert-manager-5b446d88c5-ftncc" Dec 03 13:10:36 crc kubenswrapper[4986]: I1203 13:10:36.895644 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7976p\" (UniqueName: \"kubernetes.io/projected/e54cc3fa-08d8-433f-9db5-bcff8c8e43fe-kube-api-access-7976p\") pod \"cert-manager-webhook-5655c58dd6-2wgk2\" (UID: \"e54cc3fa-08d8-433f-9db5-bcff8c8e43fe\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-2wgk2" Dec 03 13:10:36 crc kubenswrapper[4986]: I1203 13:10:36.905101 4986 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7hqmf\" (UniqueName: \"kubernetes.io/projected/f4e3f3b7-bc75-4d36-9278-773bdf1109df-kube-api-access-7hqmf\") pod \"cert-manager-cainjector-7f985d654d-dzxc8\" (UID: \"f4e3f3b7-bc75-4d36-9278-773bdf1109df\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-dzxc8" Dec 03 13:10:36 crc kubenswrapper[4986]: I1203 13:10:36.988467 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-dzxc8" Dec 03 13:10:36 crc kubenswrapper[4986]: I1203 13:10:36.996630 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-ftncc" Dec 03 13:10:37 crc kubenswrapper[4986]: I1203 13:10:37.017108 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-2wgk2" Dec 03 13:10:37 crc kubenswrapper[4986]: I1203 13:10:37.282680 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-2wgk2"] Dec 03 13:10:37 crc kubenswrapper[4986]: I1203 13:10:37.422099 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-ftncc"] Dec 03 13:10:37 crc kubenswrapper[4986]: W1203 13:10:37.423082 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0dbd0c1_fbde_463c_917f_d7d101f6c6e8.slice/crio-1300e966f4231600efa63d6e8b1b9d7039c2983d9ff65ed0787483b6437c9b37 WatchSource:0}: Error finding container 1300e966f4231600efa63d6e8b1b9d7039c2983d9ff65ed0787483b6437c9b37: Status 404 returned error can't find the container with id 1300e966f4231600efa63d6e8b1b9d7039c2983d9ff65ed0787483b6437c9b37 Dec 03 13:10:37 crc kubenswrapper[4986]: I1203 13:10:37.425194 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-dzxc8"] Dec 03 13:10:37 crc 
kubenswrapper[4986]: W1203 13:10:37.425312 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4e3f3b7_bc75_4d36_9278_773bdf1109df.slice/crio-64d5ff4d6466e33033c6528f114e7d7b2f6c01499af9436efa8c2f974daf5723 WatchSource:0}: Error finding container 64d5ff4d6466e33033c6528f114e7d7b2f6c01499af9436efa8c2f974daf5723: Status 404 returned error can't find the container with id 64d5ff4d6466e33033c6528f114e7d7b2f6c01499af9436efa8c2f974daf5723 Dec 03 13:10:38 crc kubenswrapper[4986]: I1203 13:10:38.159617 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-ftncc" event={"ID":"c0dbd0c1-fbde-463c-917f-d7d101f6c6e8","Type":"ContainerStarted","Data":"1300e966f4231600efa63d6e8b1b9d7039c2983d9ff65ed0787483b6437c9b37"} Dec 03 13:10:38 crc kubenswrapper[4986]: I1203 13:10:38.161235 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-2wgk2" event={"ID":"e54cc3fa-08d8-433f-9db5-bcff8c8e43fe","Type":"ContainerStarted","Data":"59ed0269053334fbf21582f8f13985a77ead5ad3e3bdafef60718536dc69fefa"} Dec 03 13:10:38 crc kubenswrapper[4986]: I1203 13:10:38.162697 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-dzxc8" event={"ID":"f4e3f3b7-bc75-4d36-9278-773bdf1109df","Type":"ContainerStarted","Data":"64d5ff4d6466e33033c6528f114e7d7b2f6c01499af9436efa8c2f974daf5723"} Dec 03 13:10:41 crc kubenswrapper[4986]: I1203 13:10:41.179778 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-dzxc8" event={"ID":"f4e3f3b7-bc75-4d36-9278-773bdf1109df","Type":"ContainerStarted","Data":"d2a337215de84e571c54878e25b083973f7f9f79f291a30738a571d8d2849b7d"} Dec 03 13:10:41 crc kubenswrapper[4986]: I1203 13:10:41.183517 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-ftncc" 
event={"ID":"c0dbd0c1-fbde-463c-917f-d7d101f6c6e8","Type":"ContainerStarted","Data":"14262fe240ede533b76e0e16310071ff88da40aa0bf2f32bc2fa54d3bed7549e"} Dec 03 13:10:41 crc kubenswrapper[4986]: I1203 13:10:41.185927 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-2wgk2" event={"ID":"e54cc3fa-08d8-433f-9db5-bcff8c8e43fe","Type":"ContainerStarted","Data":"90dcdc9730e900b53b0dfd1dd62f18d03e9acf118532e3df8feb1c0150f6ca07"} Dec 03 13:10:41 crc kubenswrapper[4986]: I1203 13:10:41.186116 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-2wgk2" Dec 03 13:10:41 crc kubenswrapper[4986]: I1203 13:10:41.206518 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-dzxc8" podStartSLOduration=1.738857139 podStartE2EDuration="5.206493754s" podCreationTimestamp="2025-12-03 13:10:36 +0000 UTC" firstStartedPulling="2025-12-03 13:10:37.427727183 +0000 UTC m=+896.894158374" lastFinishedPulling="2025-12-03 13:10:40.895363798 +0000 UTC m=+900.361794989" observedRunningTime="2025-12-03 13:10:41.200746451 +0000 UTC m=+900.667177692" watchObservedRunningTime="2025-12-03 13:10:41.206493754 +0000 UTC m=+900.672924985" Dec 03 13:10:41 crc kubenswrapper[4986]: I1203 13:10:41.217689 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-ftncc" podStartSLOduration=2.578826583 podStartE2EDuration="5.217663613s" podCreationTimestamp="2025-12-03 13:10:36 +0000 UTC" firstStartedPulling="2025-12-03 13:10:37.425307108 +0000 UTC m=+896.891738309" lastFinishedPulling="2025-12-03 13:10:40.064144148 +0000 UTC m=+899.530575339" observedRunningTime="2025-12-03 13:10:41.215144916 +0000 UTC m=+900.681576127" watchObservedRunningTime="2025-12-03 13:10:41.217663613 +0000 UTC m=+900.684094824" Dec 03 13:10:41 crc kubenswrapper[4986]: I1203 13:10:41.234630 4986 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-2wgk2" podStartSLOduration=2.454893281 podStartE2EDuration="5.234610417s" podCreationTimestamp="2025-12-03 13:10:36 +0000 UTC" firstStartedPulling="2025-12-03 13:10:37.285446579 +0000 UTC m=+896.751877770" lastFinishedPulling="2025-12-03 13:10:40.065163715 +0000 UTC m=+899.531594906" observedRunningTime="2025-12-03 13:10:41.229173281 +0000 UTC m=+900.695604482" watchObservedRunningTime="2025-12-03 13:10:41.234610417 +0000 UTC m=+900.701041618" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.020585 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-2wgk2" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.391480 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9nf52"] Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.391958 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" podUID="d3a45156-295b-4093-80e7-2059f81ddbd7" containerName="ovn-controller" containerID="cri-o://2ff8c1f1602e57602cae3f46dd3d2bf8d7abad6bd0952f4347152678e8606d92" gracePeriod=30 Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.392144 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" podUID="d3a45156-295b-4093-80e7-2059f81ddbd7" containerName="northd" containerID="cri-o://2843fb3fcbadb926b104a9f7c4084468b36293aebed6c692c29419a0457b62a2" gracePeriod=30 Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.392221 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" podUID="d3a45156-295b-4093-80e7-2059f81ddbd7" containerName="kube-rbac-proxy-node" 
containerID="cri-o://5c79f84fc019d109bac446dd058b4bafb0b516a13fae47b1924e4f7690658ba8" gracePeriod=30 Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.392174 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" podUID="d3a45156-295b-4093-80e7-2059f81ddbd7" containerName="sbdb" containerID="cri-o://9dfc7520e7cb1e09eb10bf9615dbae11d7e73474f1b73eed481d18269b65aef9" gracePeriod=30 Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.392210 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" podUID="d3a45156-295b-4093-80e7-2059f81ddbd7" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://6be6405facfa7726cbe56c48184cba0123774e8e34d3ceb9e18daed6f35560c6" gracePeriod=30 Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.392197 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" podUID="d3a45156-295b-4093-80e7-2059f81ddbd7" containerName="ovn-acl-logging" containerID="cri-o://7d94eca13821749d7e15a350c411a831d2887f4211ed092a4fb57e26e49fcd9d" gracePeriod=30 Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.392436 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" podUID="d3a45156-295b-4093-80e7-2059f81ddbd7" containerName="nbdb" containerID="cri-o://d77d078572dbe7f11e91ece0d25e76bd17004167dabce267a2b568e798aae158" gracePeriod=30 Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.460872 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" podUID="d3a45156-295b-4093-80e7-2059f81ddbd7" containerName="ovnkube-controller" containerID="cri-o://0284afd2c8d9ae3edc84d051a83f44827c82b51ce989aff5f0442eb2f8fc0af7" gracePeriod=30 Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.692046 
4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9nf52_d3a45156-295b-4093-80e7-2059f81ddbd7/ovnkube-controller/3.log" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.694613 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9nf52_d3a45156-295b-4093-80e7-2059f81ddbd7/ovn-acl-logging/0.log" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.695068 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9nf52_d3a45156-295b-4093-80e7-2059f81ddbd7/ovn-controller/0.log" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.695497 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.707948 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-etc-openvswitch\") pod \"d3a45156-295b-4093-80e7-2059f81ddbd7\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.707999 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d3a45156-295b-4093-80e7-2059f81ddbd7-ovnkube-config\") pod \"d3a45156-295b-4093-80e7-2059f81ddbd7\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.708036 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d3a45156-295b-4093-80e7-2059f81ddbd7-env-overrides\") pod \"d3a45156-295b-4093-80e7-2059f81ddbd7\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.708055 4986 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-node-log\") pod \"d3a45156-295b-4093-80e7-2059f81ddbd7\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.708063 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "d3a45156-295b-4093-80e7-2059f81ddbd7" (UID: "d3a45156-295b-4093-80e7-2059f81ddbd7"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.708088 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5j752\" (UniqueName: \"kubernetes.io/projected/d3a45156-295b-4093-80e7-2059f81ddbd7-kube-api-access-5j752\") pod \"d3a45156-295b-4093-80e7-2059f81ddbd7\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.708111 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-log-socket\") pod \"d3a45156-295b-4093-80e7-2059f81ddbd7\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.708131 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-run-ovn\") pod \"d3a45156-295b-4093-80e7-2059f81ddbd7\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.708153 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-systemd-units\") pod \"d3a45156-295b-4093-80e7-2059f81ddbd7\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.708181 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-host-cni-netd\") pod \"d3a45156-295b-4093-80e7-2059f81ddbd7\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.708213 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-var-lib-openvswitch\") pod \"d3a45156-295b-4093-80e7-2059f81ddbd7\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.708235 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-host-cni-bin\") pod \"d3a45156-295b-4093-80e7-2059f81ddbd7\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.708262 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-run-openvswitch\") pod \"d3a45156-295b-4093-80e7-2059f81ddbd7\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.708300 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-host-kubelet\") pod \"d3a45156-295b-4093-80e7-2059f81ddbd7\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 
13:10:47.708325 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d3a45156-295b-4093-80e7-2059f81ddbd7-ovn-node-metrics-cert\") pod \"d3a45156-295b-4093-80e7-2059f81ddbd7\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.708344 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-host-slash\") pod \"d3a45156-295b-4093-80e7-2059f81ddbd7\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.708362 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-host-run-ovn-kubernetes\") pod \"d3a45156-295b-4093-80e7-2059f81ddbd7\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.708381 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-host-run-netns\") pod \"d3a45156-295b-4093-80e7-2059f81ddbd7\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.708408 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"d3a45156-295b-4093-80e7-2059f81ddbd7\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.708437 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/d3a45156-295b-4093-80e7-2059f81ddbd7-ovnkube-script-lib\") pod \"d3a45156-295b-4093-80e7-2059f81ddbd7\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.708459 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-run-systemd\") pod \"d3a45156-295b-4093-80e7-2059f81ddbd7\" (UID: \"d3a45156-295b-4093-80e7-2059f81ddbd7\") " Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.708106 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-node-log" (OuterVolumeSpecName: "node-log") pod "d3a45156-295b-4093-80e7-2059f81ddbd7" (UID: "d3a45156-295b-4093-80e7-2059f81ddbd7"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.708200 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-log-socket" (OuterVolumeSpecName: "log-socket") pod "d3a45156-295b-4093-80e7-2059f81ddbd7" (UID: "d3a45156-295b-4093-80e7-2059f81ddbd7"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.708207 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "d3a45156-295b-4093-80e7-2059f81ddbd7" (UID: "d3a45156-295b-4093-80e7-2059f81ddbd7"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.708236 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "d3a45156-295b-4093-80e7-2059f81ddbd7" (UID: "d3a45156-295b-4093-80e7-2059f81ddbd7"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.708250 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "d3a45156-295b-4093-80e7-2059f81ddbd7" (UID: "d3a45156-295b-4093-80e7-2059f81ddbd7"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.708308 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "d3a45156-295b-4093-80e7-2059f81ddbd7" (UID: "d3a45156-295b-4093-80e7-2059f81ddbd7"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.708336 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "d3a45156-295b-4093-80e7-2059f81ddbd7" (UID: "d3a45156-295b-4093-80e7-2059f81ddbd7"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.708344 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "d3a45156-295b-4093-80e7-2059f81ddbd7" (UID: "d3a45156-295b-4093-80e7-2059f81ddbd7"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.708390 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-host-slash" (OuterVolumeSpecName: "host-slash") pod "d3a45156-295b-4093-80e7-2059f81ddbd7" (UID: "d3a45156-295b-4093-80e7-2059f81ddbd7"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.708417 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "d3a45156-295b-4093-80e7-2059f81ddbd7" (UID: "d3a45156-295b-4093-80e7-2059f81ddbd7"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.708437 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "d3a45156-295b-4093-80e7-2059f81ddbd7" (UID: "d3a45156-295b-4093-80e7-2059f81ddbd7"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.708441 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "d3a45156-295b-4093-80e7-2059f81ddbd7" (UID: "d3a45156-295b-4093-80e7-2059f81ddbd7"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.708464 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "d3a45156-295b-4093-80e7-2059f81ddbd7" (UID: "d3a45156-295b-4093-80e7-2059f81ddbd7"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.708467 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3a45156-295b-4093-80e7-2059f81ddbd7-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "d3a45156-295b-4093-80e7-2059f81ddbd7" (UID: "d3a45156-295b-4093-80e7-2059f81ddbd7"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.708918 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3a45156-295b-4093-80e7-2059f81ddbd7-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "d3a45156-295b-4093-80e7-2059f81ddbd7" (UID: "d3a45156-295b-4093-80e7-2059f81ddbd7"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.709509 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3a45156-295b-4093-80e7-2059f81ddbd7-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "d3a45156-295b-4093-80e7-2059f81ddbd7" (UID: "d3a45156-295b-4093-80e7-2059f81ddbd7"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.719817 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3a45156-295b-4093-80e7-2059f81ddbd7-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "d3a45156-295b-4093-80e7-2059f81ddbd7" (UID: "d3a45156-295b-4093-80e7-2059f81ddbd7"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.728446 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3a45156-295b-4093-80e7-2059f81ddbd7-kube-api-access-5j752" (OuterVolumeSpecName: "kube-api-access-5j752") pod "d3a45156-295b-4093-80e7-2059f81ddbd7" (UID: "d3a45156-295b-4093-80e7-2059f81ddbd7"). InnerVolumeSpecName "kube-api-access-5j752". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.746424 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "d3a45156-295b-4093-80e7-2059f81ddbd7" (UID: "d3a45156-295b-4093-80e7-2059f81ddbd7"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.758548 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-gndvm"] Dec 03 13:10:47 crc kubenswrapper[4986]: E1203 13:10:47.758827 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3a45156-295b-4093-80e7-2059f81ddbd7" containerName="ovnkube-controller" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.758852 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3a45156-295b-4093-80e7-2059f81ddbd7" containerName="ovnkube-controller" Dec 03 13:10:47 crc kubenswrapper[4986]: E1203 13:10:47.758865 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3a45156-295b-4093-80e7-2059f81ddbd7" containerName="sbdb" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.758878 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3a45156-295b-4093-80e7-2059f81ddbd7" containerName="sbdb" Dec 03 13:10:47 crc kubenswrapper[4986]: E1203 13:10:47.758889 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3a45156-295b-4093-80e7-2059f81ddbd7" containerName="ovnkube-controller" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.758900 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3a45156-295b-4093-80e7-2059f81ddbd7" containerName="ovnkube-controller" Dec 03 13:10:47 crc kubenswrapper[4986]: E1203 13:10:47.758912 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3a45156-295b-4093-80e7-2059f81ddbd7" containerName="nbdb" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.758921 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3a45156-295b-4093-80e7-2059f81ddbd7" containerName="nbdb" Dec 03 13:10:47 crc kubenswrapper[4986]: E1203 13:10:47.758937 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3a45156-295b-4093-80e7-2059f81ddbd7" containerName="kube-rbac-proxy-node" Dec 03 
13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.758948 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3a45156-295b-4093-80e7-2059f81ddbd7" containerName="kube-rbac-proxy-node" Dec 03 13:10:47 crc kubenswrapper[4986]: E1203 13:10:47.758967 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3a45156-295b-4093-80e7-2059f81ddbd7" containerName="ovnkube-controller" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.758977 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3a45156-295b-4093-80e7-2059f81ddbd7" containerName="ovnkube-controller" Dec 03 13:10:47 crc kubenswrapper[4986]: E1203 13:10:47.758990 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3a45156-295b-4093-80e7-2059f81ddbd7" containerName="northd" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.759000 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3a45156-295b-4093-80e7-2059f81ddbd7" containerName="northd" Dec 03 13:10:47 crc kubenswrapper[4986]: E1203 13:10:47.759014 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3a45156-295b-4093-80e7-2059f81ddbd7" containerName="ovnkube-controller" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.759023 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3a45156-295b-4093-80e7-2059f81ddbd7" containerName="ovnkube-controller" Dec 03 13:10:47 crc kubenswrapper[4986]: E1203 13:10:47.759033 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3a45156-295b-4093-80e7-2059f81ddbd7" containerName="kube-rbac-proxy-ovn-metrics" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.759043 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3a45156-295b-4093-80e7-2059f81ddbd7" containerName="kube-rbac-proxy-ovn-metrics" Dec 03 13:10:47 crc kubenswrapper[4986]: E1203 13:10:47.759063 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3a45156-295b-4093-80e7-2059f81ddbd7" containerName="kubecfg-setup" 
Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.759073 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3a45156-295b-4093-80e7-2059f81ddbd7" containerName="kubecfg-setup" Dec 03 13:10:47 crc kubenswrapper[4986]: E1203 13:10:47.759090 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3a45156-295b-4093-80e7-2059f81ddbd7" containerName="ovn-acl-logging" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.759100 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3a45156-295b-4093-80e7-2059f81ddbd7" containerName="ovn-acl-logging" Dec 03 13:10:47 crc kubenswrapper[4986]: E1203 13:10:47.759115 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3a45156-295b-4093-80e7-2059f81ddbd7" containerName="ovn-controller" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.759125 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3a45156-295b-4093-80e7-2059f81ddbd7" containerName="ovn-controller" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.759260 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3a45156-295b-4093-80e7-2059f81ddbd7" containerName="nbdb" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.759275 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3a45156-295b-4093-80e7-2059f81ddbd7" containerName="kube-rbac-proxy-ovn-metrics" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.759347 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3a45156-295b-4093-80e7-2059f81ddbd7" containerName="ovn-controller" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.759360 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3a45156-295b-4093-80e7-2059f81ddbd7" containerName="northd" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.759374 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3a45156-295b-4093-80e7-2059f81ddbd7" containerName="ovnkube-controller" Dec 03 
13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.759385 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3a45156-295b-4093-80e7-2059f81ddbd7" containerName="kube-rbac-proxy-node" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.759399 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3a45156-295b-4093-80e7-2059f81ddbd7" containerName="ovnkube-controller" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.759410 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3a45156-295b-4093-80e7-2059f81ddbd7" containerName="ovn-acl-logging" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.759423 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3a45156-295b-4093-80e7-2059f81ddbd7" containerName="ovnkube-controller" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.759435 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3a45156-295b-4093-80e7-2059f81ddbd7" containerName="sbdb" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.759446 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3a45156-295b-4093-80e7-2059f81ddbd7" containerName="ovnkube-controller" Dec 03 13:10:47 crc kubenswrapper[4986]: E1203 13:10:47.759601 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3a45156-295b-4093-80e7-2059f81ddbd7" containerName="ovnkube-controller" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.759615 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3a45156-295b-4093-80e7-2059f81ddbd7" containerName="ovnkube-controller" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.759861 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3a45156-295b-4093-80e7-2059f81ddbd7" containerName="ovnkube-controller" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.762385 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.809618 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/afd316b0-03e5-4240-8ce3-9403986ac6ff-systemd-units\") pod \"ovnkube-node-gndvm\" (UID: \"afd316b0-03e5-4240-8ce3-9403986ac6ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.809682 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/afd316b0-03e5-4240-8ce3-9403986ac6ff-node-log\") pod \"ovnkube-node-gndvm\" (UID: \"afd316b0-03e5-4240-8ce3-9403986ac6ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.809705 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/afd316b0-03e5-4240-8ce3-9403986ac6ff-env-overrides\") pod \"ovnkube-node-gndvm\" (UID: \"afd316b0-03e5-4240-8ce3-9403986ac6ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.809727 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzsrg\" (UniqueName: \"kubernetes.io/projected/afd316b0-03e5-4240-8ce3-9403986ac6ff-kube-api-access-qzsrg\") pod \"ovnkube-node-gndvm\" (UID: \"afd316b0-03e5-4240-8ce3-9403986ac6ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.809749 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/afd316b0-03e5-4240-8ce3-9403986ac6ff-host-slash\") pod \"ovnkube-node-gndvm\" (UID: 
\"afd316b0-03e5-4240-8ce3-9403986ac6ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.809773 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afd316b0-03e5-4240-8ce3-9403986ac6ff-var-lib-openvswitch\") pod \"ovnkube-node-gndvm\" (UID: \"afd316b0-03e5-4240-8ce3-9403986ac6ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.809802 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/afd316b0-03e5-4240-8ce3-9403986ac6ff-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gndvm\" (UID: \"afd316b0-03e5-4240-8ce3-9403986ac6ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.809827 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/afd316b0-03e5-4240-8ce3-9403986ac6ff-host-cni-bin\") pod \"ovnkube-node-gndvm\" (UID: \"afd316b0-03e5-4240-8ce3-9403986ac6ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.809847 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/afd316b0-03e5-4240-8ce3-9403986ac6ff-ovnkube-config\") pod \"ovnkube-node-gndvm\" (UID: \"afd316b0-03e5-4240-8ce3-9403986ac6ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.809869 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/afd316b0-03e5-4240-8ce3-9403986ac6ff-etc-openvswitch\") pod \"ovnkube-node-gndvm\" (UID: \"afd316b0-03e5-4240-8ce3-9403986ac6ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.809886 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/afd316b0-03e5-4240-8ce3-9403986ac6ff-host-cni-netd\") pod \"ovnkube-node-gndvm\" (UID: \"afd316b0-03e5-4240-8ce3-9403986ac6ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.809919 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/afd316b0-03e5-4240-8ce3-9403986ac6ff-log-socket\") pod \"ovnkube-node-gndvm\" (UID: \"afd316b0-03e5-4240-8ce3-9403986ac6ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.809948 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/afd316b0-03e5-4240-8ce3-9403986ac6ff-run-systemd\") pod \"ovnkube-node-gndvm\" (UID: \"afd316b0-03e5-4240-8ce3-9403986ac6ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.809974 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/afd316b0-03e5-4240-8ce3-9403986ac6ff-host-kubelet\") pod \"ovnkube-node-gndvm\" (UID: \"afd316b0-03e5-4240-8ce3-9403986ac6ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.809994 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" 
(UniqueName: \"kubernetes.io/secret/afd316b0-03e5-4240-8ce3-9403986ac6ff-ovn-node-metrics-cert\") pod \"ovnkube-node-gndvm\" (UID: \"afd316b0-03e5-4240-8ce3-9403986ac6ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.810019 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afd316b0-03e5-4240-8ce3-9403986ac6ff-run-openvswitch\") pod \"ovnkube-node-gndvm\" (UID: \"afd316b0-03e5-4240-8ce3-9403986ac6ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.810039 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/afd316b0-03e5-4240-8ce3-9403986ac6ff-run-ovn\") pod \"ovnkube-node-gndvm\" (UID: \"afd316b0-03e5-4240-8ce3-9403986ac6ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.810063 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/afd316b0-03e5-4240-8ce3-9403986ac6ff-host-run-netns\") pod \"ovnkube-node-gndvm\" (UID: \"afd316b0-03e5-4240-8ce3-9403986ac6ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.810084 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/afd316b0-03e5-4240-8ce3-9403986ac6ff-ovnkube-script-lib\") pod \"ovnkube-node-gndvm\" (UID: \"afd316b0-03e5-4240-8ce3-9403986ac6ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.810115 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/afd316b0-03e5-4240-8ce3-9403986ac6ff-host-run-ovn-kubernetes\") pod \"ovnkube-node-gndvm\" (UID: \"afd316b0-03e5-4240-8ce3-9403986ac6ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.810161 4986 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.810173 4986 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d3a45156-295b-4093-80e7-2059f81ddbd7-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.810185 4986 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d3a45156-295b-4093-80e7-2059f81ddbd7-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.810196 4986 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-node-log\") on node \"crc\" DevicePath \"\"" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.810207 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5j752\" (UniqueName: \"kubernetes.io/projected/d3a45156-295b-4093-80e7-2059f81ddbd7-kube-api-access-5j752\") on node \"crc\" DevicePath \"\"" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.810219 4986 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-log-socket\") on node \"crc\" DevicePath \"\"" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.810230 4986 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" 
(UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.810241 4986 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.810253 4986 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.810264 4986 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.810275 4986 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.810303 4986 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.810315 4986 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.810326 4986 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d3a45156-295b-4093-80e7-2059f81ddbd7-ovn-node-metrics-cert\") on node \"crc\" 
DevicePath \"\"" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.810336 4986 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-host-slash\") on node \"crc\" DevicePath \"\"" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.810348 4986 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.810360 4986 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.810371 4986 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.810382 4986 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d3a45156-295b-4093-80e7-2059f81ddbd7-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.810393 4986 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d3a45156-295b-4093-80e7-2059f81ddbd7-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.911429 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/afd316b0-03e5-4240-8ce3-9403986ac6ff-log-socket\") pod \"ovnkube-node-gndvm\" (UID: 
\"afd316b0-03e5-4240-8ce3-9403986ac6ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.911535 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/afd316b0-03e5-4240-8ce3-9403986ac6ff-run-systemd\") pod \"ovnkube-node-gndvm\" (UID: \"afd316b0-03e5-4240-8ce3-9403986ac6ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.911578 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/afd316b0-03e5-4240-8ce3-9403986ac6ff-run-systemd\") pod \"ovnkube-node-gndvm\" (UID: \"afd316b0-03e5-4240-8ce3-9403986ac6ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.911540 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/afd316b0-03e5-4240-8ce3-9403986ac6ff-log-socket\") pod \"ovnkube-node-gndvm\" (UID: \"afd316b0-03e5-4240-8ce3-9403986ac6ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.911616 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/afd316b0-03e5-4240-8ce3-9403986ac6ff-host-kubelet\") pod \"ovnkube-node-gndvm\" (UID: \"afd316b0-03e5-4240-8ce3-9403986ac6ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.911684 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/afd316b0-03e5-4240-8ce3-9403986ac6ff-ovn-node-metrics-cert\") pod \"ovnkube-node-gndvm\" (UID: \"afd316b0-03e5-4240-8ce3-9403986ac6ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 
13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.911733 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/afd316b0-03e5-4240-8ce3-9403986ac6ff-host-kubelet\") pod \"ovnkube-node-gndvm\" (UID: \"afd316b0-03e5-4240-8ce3-9403986ac6ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.911765 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afd316b0-03e5-4240-8ce3-9403986ac6ff-run-openvswitch\") pod \"ovnkube-node-gndvm\" (UID: \"afd316b0-03e5-4240-8ce3-9403986ac6ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.911805 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/afd316b0-03e5-4240-8ce3-9403986ac6ff-run-ovn\") pod \"ovnkube-node-gndvm\" (UID: \"afd316b0-03e5-4240-8ce3-9403986ac6ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.911874 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/afd316b0-03e5-4240-8ce3-9403986ac6ff-host-run-netns\") pod \"ovnkube-node-gndvm\" (UID: \"afd316b0-03e5-4240-8ce3-9403986ac6ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.911939 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/afd316b0-03e5-4240-8ce3-9403986ac6ff-ovnkube-script-lib\") pod \"ovnkube-node-gndvm\" (UID: \"afd316b0-03e5-4240-8ce3-9403986ac6ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.911985 4986 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/afd316b0-03e5-4240-8ce3-9403986ac6ff-run-ovn\") pod \"ovnkube-node-gndvm\" (UID: \"afd316b0-03e5-4240-8ce3-9403986ac6ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.911991 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/afd316b0-03e5-4240-8ce3-9403986ac6ff-host-run-ovn-kubernetes\") pod \"ovnkube-node-gndvm\" (UID: \"afd316b0-03e5-4240-8ce3-9403986ac6ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.912104 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/afd316b0-03e5-4240-8ce3-9403986ac6ff-systemd-units\") pod \"ovnkube-node-gndvm\" (UID: \"afd316b0-03e5-4240-8ce3-9403986ac6ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.912151 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/afd316b0-03e5-4240-8ce3-9403986ac6ff-node-log\") pod \"ovnkube-node-gndvm\" (UID: \"afd316b0-03e5-4240-8ce3-9403986ac6ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.912194 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/afd316b0-03e5-4240-8ce3-9403986ac6ff-env-overrides\") pod \"ovnkube-node-gndvm\" (UID: \"afd316b0-03e5-4240-8ce3-9403986ac6ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.912232 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzsrg\" (UniqueName: 
\"kubernetes.io/projected/afd316b0-03e5-4240-8ce3-9403986ac6ff-kube-api-access-qzsrg\") pod \"ovnkube-node-gndvm\" (UID: \"afd316b0-03e5-4240-8ce3-9403986ac6ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.912271 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/afd316b0-03e5-4240-8ce3-9403986ac6ff-host-slash\") pod \"ovnkube-node-gndvm\" (UID: \"afd316b0-03e5-4240-8ce3-9403986ac6ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.912349 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afd316b0-03e5-4240-8ce3-9403986ac6ff-var-lib-openvswitch\") pod \"ovnkube-node-gndvm\" (UID: \"afd316b0-03e5-4240-8ce3-9403986ac6ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.912399 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/afd316b0-03e5-4240-8ce3-9403986ac6ff-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gndvm\" (UID: \"afd316b0-03e5-4240-8ce3-9403986ac6ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.912452 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/afd316b0-03e5-4240-8ce3-9403986ac6ff-host-cni-bin\") pod \"ovnkube-node-gndvm\" (UID: \"afd316b0-03e5-4240-8ce3-9403986ac6ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.912499 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/afd316b0-03e5-4240-8ce3-9403986ac6ff-ovnkube-config\") pod \"ovnkube-node-gndvm\" (UID: \"afd316b0-03e5-4240-8ce3-9403986ac6ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.912505 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/afd316b0-03e5-4240-8ce3-9403986ac6ff-node-log\") pod \"ovnkube-node-gndvm\" (UID: \"afd316b0-03e5-4240-8ce3-9403986ac6ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.912546 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afd316b0-03e5-4240-8ce3-9403986ac6ff-etc-openvswitch\") pod \"ovnkube-node-gndvm\" (UID: \"afd316b0-03e5-4240-8ce3-9403986ac6ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.912589 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/afd316b0-03e5-4240-8ce3-9403986ac6ff-host-cni-netd\") pod \"ovnkube-node-gndvm\" (UID: \"afd316b0-03e5-4240-8ce3-9403986ac6ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.912591 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/afd316b0-03e5-4240-8ce3-9403986ac6ff-host-run-ovn-kubernetes\") pod \"ovnkube-node-gndvm\" (UID: \"afd316b0-03e5-4240-8ce3-9403986ac6ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.912620 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/afd316b0-03e5-4240-8ce3-9403986ac6ff-host-run-netns\") pod 
\"ovnkube-node-gndvm\" (UID: \"afd316b0-03e5-4240-8ce3-9403986ac6ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.911938 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afd316b0-03e5-4240-8ce3-9403986ac6ff-run-openvswitch\") pod \"ovnkube-node-gndvm\" (UID: \"afd316b0-03e5-4240-8ce3-9403986ac6ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.912603 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/afd316b0-03e5-4240-8ce3-9403986ac6ff-systemd-units\") pod \"ovnkube-node-gndvm\" (UID: \"afd316b0-03e5-4240-8ce3-9403986ac6ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.912665 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afd316b0-03e5-4240-8ce3-9403986ac6ff-var-lib-openvswitch\") pod \"ovnkube-node-gndvm\" (UID: \"afd316b0-03e5-4240-8ce3-9403986ac6ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.912713 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/afd316b0-03e5-4240-8ce3-9403986ac6ff-host-cni-netd\") pod \"ovnkube-node-gndvm\" (UID: \"afd316b0-03e5-4240-8ce3-9403986ac6ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.912729 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afd316b0-03e5-4240-8ce3-9403986ac6ff-etc-openvswitch\") pod \"ovnkube-node-gndvm\" (UID: \"afd316b0-03e5-4240-8ce3-9403986ac6ff\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.912746 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/afd316b0-03e5-4240-8ce3-9403986ac6ff-host-slash\") pod \"ovnkube-node-gndvm\" (UID: \"afd316b0-03e5-4240-8ce3-9403986ac6ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.912742 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/afd316b0-03e5-4240-8ce3-9403986ac6ff-host-cni-bin\") pod \"ovnkube-node-gndvm\" (UID: \"afd316b0-03e5-4240-8ce3-9403986ac6ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.912782 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/afd316b0-03e5-4240-8ce3-9403986ac6ff-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gndvm\" (UID: \"afd316b0-03e5-4240-8ce3-9403986ac6ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.913337 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/afd316b0-03e5-4240-8ce3-9403986ac6ff-ovnkube-script-lib\") pod \"ovnkube-node-gndvm\" (UID: \"afd316b0-03e5-4240-8ce3-9403986ac6ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.913349 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/afd316b0-03e5-4240-8ce3-9403986ac6ff-env-overrides\") pod \"ovnkube-node-gndvm\" (UID: \"afd316b0-03e5-4240-8ce3-9403986ac6ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:47 crc 
kubenswrapper[4986]: I1203 13:10:47.914010 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/afd316b0-03e5-4240-8ce3-9403986ac6ff-ovnkube-config\") pod \"ovnkube-node-gndvm\" (UID: \"afd316b0-03e5-4240-8ce3-9403986ac6ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.916548 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/afd316b0-03e5-4240-8ce3-9403986ac6ff-ovn-node-metrics-cert\") pod \"ovnkube-node-gndvm\" (UID: \"afd316b0-03e5-4240-8ce3-9403986ac6ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:47 crc kubenswrapper[4986]: I1203 13:10:47.939196 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzsrg\" (UniqueName: \"kubernetes.io/projected/afd316b0-03e5-4240-8ce3-9403986ac6ff-kube-api-access-qzsrg\") pod \"ovnkube-node-gndvm\" (UID: \"afd316b0-03e5-4240-8ce3-9403986ac6ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.083028 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:48 crc kubenswrapper[4986]: W1203 13:10:48.114200 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podafd316b0_03e5_4240_8ce3_9403986ac6ff.slice/crio-755e420107edd17c01b72680b3c30ae272aad252ef80b880f478fed1c7932c76 WatchSource:0}: Error finding container 755e420107edd17c01b72680b3c30ae272aad252ef80b880f478fed1c7932c76: Status 404 returned error can't find the container with id 755e420107edd17c01b72680b3c30ae272aad252ef80b880f478fed1c7932c76 Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.229371 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-px97g_97196b6d-75cc-4de4-8805-f9ce3fbd4230/kube-multus/2.log" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.229979 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-px97g_97196b6d-75cc-4de4-8805-f9ce3fbd4230/kube-multus/1.log" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.230021 4986 generic.go:334] "Generic (PLEG): container finished" podID="97196b6d-75cc-4de4-8805-f9ce3fbd4230" containerID="f13dd00d8e806f3c535437f4f032413b30df5ad03e14969e338d1bd53faab5be" exitCode=2 Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.230073 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-px97g" event={"ID":"97196b6d-75cc-4de4-8805-f9ce3fbd4230","Type":"ContainerDied","Data":"f13dd00d8e806f3c535437f4f032413b30df5ad03e14969e338d1bd53faab5be"} Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.230104 4986 scope.go:117] "RemoveContainer" containerID="1dd2dabaec03ca7bc575d3b31582c870f9094270ef72a40fd7425e2cef5b54e0" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.231232 4986 scope.go:117] "RemoveContainer" containerID="f13dd00d8e806f3c535437f4f032413b30df5ad03e14969e338d1bd53faab5be" Dec 03 13:10:48 crc kubenswrapper[4986]: 
I1203 13:10:48.236335 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9nf52_d3a45156-295b-4093-80e7-2059f81ddbd7/ovnkube-controller/3.log" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.239361 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9nf52_d3a45156-295b-4093-80e7-2059f81ddbd7/ovn-acl-logging/0.log" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.240470 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9nf52_d3a45156-295b-4093-80e7-2059f81ddbd7/ovn-controller/0.log" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.241118 4986 generic.go:334] "Generic (PLEG): container finished" podID="d3a45156-295b-4093-80e7-2059f81ddbd7" containerID="0284afd2c8d9ae3edc84d051a83f44827c82b51ce989aff5f0442eb2f8fc0af7" exitCode=0 Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.241144 4986 generic.go:334] "Generic (PLEG): container finished" podID="d3a45156-295b-4093-80e7-2059f81ddbd7" containerID="9dfc7520e7cb1e09eb10bf9615dbae11d7e73474f1b73eed481d18269b65aef9" exitCode=0 Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.241151 4986 generic.go:334] "Generic (PLEG): container finished" podID="d3a45156-295b-4093-80e7-2059f81ddbd7" containerID="d77d078572dbe7f11e91ece0d25e76bd17004167dabce267a2b568e798aae158" exitCode=0 Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.241158 4986 generic.go:334] "Generic (PLEG): container finished" podID="d3a45156-295b-4093-80e7-2059f81ddbd7" containerID="2843fb3fcbadb926b104a9f7c4084468b36293aebed6c692c29419a0457b62a2" exitCode=0 Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.241165 4986 generic.go:334] "Generic (PLEG): container finished" podID="d3a45156-295b-4093-80e7-2059f81ddbd7" containerID="6be6405facfa7726cbe56c48184cba0123774e8e34d3ceb9e18daed6f35560c6" exitCode=0 Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.241171 
4986 generic.go:334] "Generic (PLEG): container finished" podID="d3a45156-295b-4093-80e7-2059f81ddbd7" containerID="5c79f84fc019d109bac446dd058b4bafb0b516a13fae47b1924e4f7690658ba8" exitCode=0 Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.241176 4986 generic.go:334] "Generic (PLEG): container finished" podID="d3a45156-295b-4093-80e7-2059f81ddbd7" containerID="7d94eca13821749d7e15a350c411a831d2887f4211ed092a4fb57e26e49fcd9d" exitCode=143 Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.241183 4986 generic.go:334] "Generic (PLEG): container finished" podID="d3a45156-295b-4093-80e7-2059f81ddbd7" containerID="2ff8c1f1602e57602cae3f46dd3d2bf8d7abad6bd0952f4347152678e8606d92" exitCode=143 Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.241219 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" event={"ID":"d3a45156-295b-4093-80e7-2059f81ddbd7","Type":"ContainerDied","Data":"0284afd2c8d9ae3edc84d051a83f44827c82b51ce989aff5f0442eb2f8fc0af7"} Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.241242 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" event={"ID":"d3a45156-295b-4093-80e7-2059f81ddbd7","Type":"ContainerDied","Data":"9dfc7520e7cb1e09eb10bf9615dbae11d7e73474f1b73eed481d18269b65aef9"} Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.241253 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" event={"ID":"d3a45156-295b-4093-80e7-2059f81ddbd7","Type":"ContainerDied","Data":"d77d078572dbe7f11e91ece0d25e76bd17004167dabce267a2b568e798aae158"} Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.241263 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" event={"ID":"d3a45156-295b-4093-80e7-2059f81ddbd7","Type":"ContainerDied","Data":"2843fb3fcbadb926b104a9f7c4084468b36293aebed6c692c29419a0457b62a2"} Dec 03 13:10:48 crc 
kubenswrapper[4986]: I1203 13:10:48.241272 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" event={"ID":"d3a45156-295b-4093-80e7-2059f81ddbd7","Type":"ContainerDied","Data":"6be6405facfa7726cbe56c48184cba0123774e8e34d3ceb9e18daed6f35560c6"} Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.241298 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" event={"ID":"d3a45156-295b-4093-80e7-2059f81ddbd7","Type":"ContainerDied","Data":"5c79f84fc019d109bac446dd058b4bafb0b516a13fae47b1924e4f7690658ba8"} Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.241312 4986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0284afd2c8d9ae3edc84d051a83f44827c82b51ce989aff5f0442eb2f8fc0af7"} Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.241320 4986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"70229ee3e0b6f1c7ad17628a8b9b3b456957d5cee34dc20aecb98f45140d1e88"} Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.241327 4986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9dfc7520e7cb1e09eb10bf9615dbae11d7e73474f1b73eed481d18269b65aef9"} Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.241331 4986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d77d078572dbe7f11e91ece0d25e76bd17004167dabce267a2b568e798aae158"} Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.241336 4986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2843fb3fcbadb926b104a9f7c4084468b36293aebed6c692c29419a0457b62a2"} Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.241341 4986 pod_container_deletor.go:114] "Failed to issue the request to remove 
container" containerID={"Type":"cri-o","ID":"6be6405facfa7726cbe56c48184cba0123774e8e34d3ceb9e18daed6f35560c6"} Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.241346 4986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5c79f84fc019d109bac446dd058b4bafb0b516a13fae47b1924e4f7690658ba8"} Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.241351 4986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7d94eca13821749d7e15a350c411a831d2887f4211ed092a4fb57e26e49fcd9d"} Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.241356 4986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2ff8c1f1602e57602cae3f46dd3d2bf8d7abad6bd0952f4347152678e8606d92"} Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.241361 4986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e"} Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.241367 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" event={"ID":"d3a45156-295b-4093-80e7-2059f81ddbd7","Type":"ContainerDied","Data":"7d94eca13821749d7e15a350c411a831d2887f4211ed092a4fb57e26e49fcd9d"} Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.241375 4986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0284afd2c8d9ae3edc84d051a83f44827c82b51ce989aff5f0442eb2f8fc0af7"} Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.241381 4986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"70229ee3e0b6f1c7ad17628a8b9b3b456957d5cee34dc20aecb98f45140d1e88"} Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.241387 4986 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9dfc7520e7cb1e09eb10bf9615dbae11d7e73474f1b73eed481d18269b65aef9"} Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.241396 4986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d77d078572dbe7f11e91ece0d25e76bd17004167dabce267a2b568e798aae158"} Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.241403 4986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2843fb3fcbadb926b104a9f7c4084468b36293aebed6c692c29419a0457b62a2"} Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.241410 4986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6be6405facfa7726cbe56c48184cba0123774e8e34d3ceb9e18daed6f35560c6"} Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.241417 4986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5c79f84fc019d109bac446dd058b4bafb0b516a13fae47b1924e4f7690658ba8"} Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.241422 4986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7d94eca13821749d7e15a350c411a831d2887f4211ed092a4fb57e26e49fcd9d"} Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.241428 4986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2ff8c1f1602e57602cae3f46dd3d2bf8d7abad6bd0952f4347152678e8606d92"} Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.241435 4986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e"} Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.241443 4986 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" event={"ID":"d3a45156-295b-4093-80e7-2059f81ddbd7","Type":"ContainerDied","Data":"2ff8c1f1602e57602cae3f46dd3d2bf8d7abad6bd0952f4347152678e8606d92"} Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.241451 4986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0284afd2c8d9ae3edc84d051a83f44827c82b51ce989aff5f0442eb2f8fc0af7"} Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.241457 4986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"70229ee3e0b6f1c7ad17628a8b9b3b456957d5cee34dc20aecb98f45140d1e88"} Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.241462 4986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9dfc7520e7cb1e09eb10bf9615dbae11d7e73474f1b73eed481d18269b65aef9"} Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.241467 4986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d77d078572dbe7f11e91ece0d25e76bd17004167dabce267a2b568e798aae158"} Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.241472 4986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2843fb3fcbadb926b104a9f7c4084468b36293aebed6c692c29419a0457b62a2"} Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.241477 4986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6be6405facfa7726cbe56c48184cba0123774e8e34d3ceb9e18daed6f35560c6"} Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.241482 4986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5c79f84fc019d109bac446dd058b4bafb0b516a13fae47b1924e4f7690658ba8"} Dec 03 
13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.241486 4986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7d94eca13821749d7e15a350c411a831d2887f4211ed092a4fb57e26e49fcd9d"} Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.241491 4986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2ff8c1f1602e57602cae3f46dd3d2bf8d7abad6bd0952f4347152678e8606d92"} Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.241496 4986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e"} Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.241503 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" event={"ID":"d3a45156-295b-4093-80e7-2059f81ddbd7","Type":"ContainerDied","Data":"842e8329ae9a18171900a2867fc679837cb8e1260a3bbce3d1c6edb300d657ae"} Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.241511 4986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0284afd2c8d9ae3edc84d051a83f44827c82b51ce989aff5f0442eb2f8fc0af7"} Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.241517 4986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"70229ee3e0b6f1c7ad17628a8b9b3b456957d5cee34dc20aecb98f45140d1e88"} Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.241521 4986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9dfc7520e7cb1e09eb10bf9615dbae11d7e73474f1b73eed481d18269b65aef9"} Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.241526 4986 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"d77d078572dbe7f11e91ece0d25e76bd17004167dabce267a2b568e798aae158"} Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.241531 4986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2843fb3fcbadb926b104a9f7c4084468b36293aebed6c692c29419a0457b62a2"} Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.241536 4986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6be6405facfa7726cbe56c48184cba0123774e8e34d3ceb9e18daed6f35560c6"} Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.241540 4986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5c79f84fc019d109bac446dd058b4bafb0b516a13fae47b1924e4f7690658ba8"} Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.241545 4986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7d94eca13821749d7e15a350c411a831d2887f4211ed092a4fb57e26e49fcd9d"} Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.241550 4986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2ff8c1f1602e57602cae3f46dd3d2bf8d7abad6bd0952f4347152678e8606d92"} Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.241555 4986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e"} Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.241633 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9nf52" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.246071 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" event={"ID":"afd316b0-03e5-4240-8ce3-9403986ac6ff","Type":"ContainerStarted","Data":"755e420107edd17c01b72680b3c30ae272aad252ef80b880f478fed1c7932c76"} Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.274518 4986 scope.go:117] "RemoveContainer" containerID="0284afd2c8d9ae3edc84d051a83f44827c82b51ce989aff5f0442eb2f8fc0af7" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.330515 4986 scope.go:117] "RemoveContainer" containerID="70229ee3e0b6f1c7ad17628a8b9b3b456957d5cee34dc20aecb98f45140d1e88" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.344519 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9nf52"] Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.349042 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9nf52"] Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.350335 4986 scope.go:117] "RemoveContainer" containerID="9dfc7520e7cb1e09eb10bf9615dbae11d7e73474f1b73eed481d18269b65aef9" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.385719 4986 scope.go:117] "RemoveContainer" containerID="d77d078572dbe7f11e91ece0d25e76bd17004167dabce267a2b568e798aae158" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.401162 4986 scope.go:117] "RemoveContainer" containerID="2843fb3fcbadb926b104a9f7c4084468b36293aebed6c692c29419a0457b62a2" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.413365 4986 scope.go:117] "RemoveContainer" containerID="6be6405facfa7726cbe56c48184cba0123774e8e34d3ceb9e18daed6f35560c6" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.427932 4986 scope.go:117] "RemoveContainer" containerID="5c79f84fc019d109bac446dd058b4bafb0b516a13fae47b1924e4f7690658ba8" Dec 03 
13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.440303 4986 scope.go:117] "RemoveContainer" containerID="7d94eca13821749d7e15a350c411a831d2887f4211ed092a4fb57e26e49fcd9d" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.455116 4986 scope.go:117] "RemoveContainer" containerID="2ff8c1f1602e57602cae3f46dd3d2bf8d7abad6bd0952f4347152678e8606d92" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.469165 4986 scope.go:117] "RemoveContainer" containerID="9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.480274 4986 scope.go:117] "RemoveContainer" containerID="0284afd2c8d9ae3edc84d051a83f44827c82b51ce989aff5f0442eb2f8fc0af7" Dec 03 13:10:48 crc kubenswrapper[4986]: E1203 13:10:48.480739 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0284afd2c8d9ae3edc84d051a83f44827c82b51ce989aff5f0442eb2f8fc0af7\": container with ID starting with 0284afd2c8d9ae3edc84d051a83f44827c82b51ce989aff5f0442eb2f8fc0af7 not found: ID does not exist" containerID="0284afd2c8d9ae3edc84d051a83f44827c82b51ce989aff5f0442eb2f8fc0af7" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.480774 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0284afd2c8d9ae3edc84d051a83f44827c82b51ce989aff5f0442eb2f8fc0af7"} err="failed to get container status \"0284afd2c8d9ae3edc84d051a83f44827c82b51ce989aff5f0442eb2f8fc0af7\": rpc error: code = NotFound desc = could not find container \"0284afd2c8d9ae3edc84d051a83f44827c82b51ce989aff5f0442eb2f8fc0af7\": container with ID starting with 0284afd2c8d9ae3edc84d051a83f44827c82b51ce989aff5f0442eb2f8fc0af7 not found: ID does not exist" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.480800 4986 scope.go:117] "RemoveContainer" containerID="70229ee3e0b6f1c7ad17628a8b9b3b456957d5cee34dc20aecb98f45140d1e88" Dec 03 13:10:48 crc kubenswrapper[4986]: E1203 
13:10:48.481050 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70229ee3e0b6f1c7ad17628a8b9b3b456957d5cee34dc20aecb98f45140d1e88\": container with ID starting with 70229ee3e0b6f1c7ad17628a8b9b3b456957d5cee34dc20aecb98f45140d1e88 not found: ID does not exist" containerID="70229ee3e0b6f1c7ad17628a8b9b3b456957d5cee34dc20aecb98f45140d1e88" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.481080 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70229ee3e0b6f1c7ad17628a8b9b3b456957d5cee34dc20aecb98f45140d1e88"} err="failed to get container status \"70229ee3e0b6f1c7ad17628a8b9b3b456957d5cee34dc20aecb98f45140d1e88\": rpc error: code = NotFound desc = could not find container \"70229ee3e0b6f1c7ad17628a8b9b3b456957d5cee34dc20aecb98f45140d1e88\": container with ID starting with 70229ee3e0b6f1c7ad17628a8b9b3b456957d5cee34dc20aecb98f45140d1e88 not found: ID does not exist" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.481097 4986 scope.go:117] "RemoveContainer" containerID="9dfc7520e7cb1e09eb10bf9615dbae11d7e73474f1b73eed481d18269b65aef9" Dec 03 13:10:48 crc kubenswrapper[4986]: E1203 13:10:48.481615 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9dfc7520e7cb1e09eb10bf9615dbae11d7e73474f1b73eed481d18269b65aef9\": container with ID starting with 9dfc7520e7cb1e09eb10bf9615dbae11d7e73474f1b73eed481d18269b65aef9 not found: ID does not exist" containerID="9dfc7520e7cb1e09eb10bf9615dbae11d7e73474f1b73eed481d18269b65aef9" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.481667 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dfc7520e7cb1e09eb10bf9615dbae11d7e73474f1b73eed481d18269b65aef9"} err="failed to get container status \"9dfc7520e7cb1e09eb10bf9615dbae11d7e73474f1b73eed481d18269b65aef9\": rpc 
error: code = NotFound desc = could not find container \"9dfc7520e7cb1e09eb10bf9615dbae11d7e73474f1b73eed481d18269b65aef9\": container with ID starting with 9dfc7520e7cb1e09eb10bf9615dbae11d7e73474f1b73eed481d18269b65aef9 not found: ID does not exist" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.481702 4986 scope.go:117] "RemoveContainer" containerID="d77d078572dbe7f11e91ece0d25e76bd17004167dabce267a2b568e798aae158" Dec 03 13:10:48 crc kubenswrapper[4986]: E1203 13:10:48.482396 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d77d078572dbe7f11e91ece0d25e76bd17004167dabce267a2b568e798aae158\": container with ID starting with d77d078572dbe7f11e91ece0d25e76bd17004167dabce267a2b568e798aae158 not found: ID does not exist" containerID="d77d078572dbe7f11e91ece0d25e76bd17004167dabce267a2b568e798aae158" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.482425 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d77d078572dbe7f11e91ece0d25e76bd17004167dabce267a2b568e798aae158"} err="failed to get container status \"d77d078572dbe7f11e91ece0d25e76bd17004167dabce267a2b568e798aae158\": rpc error: code = NotFound desc = could not find container \"d77d078572dbe7f11e91ece0d25e76bd17004167dabce267a2b568e798aae158\": container with ID starting with d77d078572dbe7f11e91ece0d25e76bd17004167dabce267a2b568e798aae158 not found: ID does not exist" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.482441 4986 scope.go:117] "RemoveContainer" containerID="2843fb3fcbadb926b104a9f7c4084468b36293aebed6c692c29419a0457b62a2" Dec 03 13:10:48 crc kubenswrapper[4986]: E1203 13:10:48.482806 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2843fb3fcbadb926b104a9f7c4084468b36293aebed6c692c29419a0457b62a2\": container with ID starting with 
2843fb3fcbadb926b104a9f7c4084468b36293aebed6c692c29419a0457b62a2 not found: ID does not exist" containerID="2843fb3fcbadb926b104a9f7c4084468b36293aebed6c692c29419a0457b62a2" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.482827 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2843fb3fcbadb926b104a9f7c4084468b36293aebed6c692c29419a0457b62a2"} err="failed to get container status \"2843fb3fcbadb926b104a9f7c4084468b36293aebed6c692c29419a0457b62a2\": rpc error: code = NotFound desc = could not find container \"2843fb3fcbadb926b104a9f7c4084468b36293aebed6c692c29419a0457b62a2\": container with ID starting with 2843fb3fcbadb926b104a9f7c4084468b36293aebed6c692c29419a0457b62a2 not found: ID does not exist" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.482843 4986 scope.go:117] "RemoveContainer" containerID="6be6405facfa7726cbe56c48184cba0123774e8e34d3ceb9e18daed6f35560c6" Dec 03 13:10:48 crc kubenswrapper[4986]: E1203 13:10:48.483070 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6be6405facfa7726cbe56c48184cba0123774e8e34d3ceb9e18daed6f35560c6\": container with ID starting with 6be6405facfa7726cbe56c48184cba0123774e8e34d3ceb9e18daed6f35560c6 not found: ID does not exist" containerID="6be6405facfa7726cbe56c48184cba0123774e8e34d3ceb9e18daed6f35560c6" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.483092 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6be6405facfa7726cbe56c48184cba0123774e8e34d3ceb9e18daed6f35560c6"} err="failed to get container status \"6be6405facfa7726cbe56c48184cba0123774e8e34d3ceb9e18daed6f35560c6\": rpc error: code = NotFound desc = could not find container \"6be6405facfa7726cbe56c48184cba0123774e8e34d3ceb9e18daed6f35560c6\": container with ID starting with 6be6405facfa7726cbe56c48184cba0123774e8e34d3ceb9e18daed6f35560c6 not found: ID does not 
exist" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.483105 4986 scope.go:117] "RemoveContainer" containerID="5c79f84fc019d109bac446dd058b4bafb0b516a13fae47b1924e4f7690658ba8" Dec 03 13:10:48 crc kubenswrapper[4986]: E1203 13:10:48.483551 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c79f84fc019d109bac446dd058b4bafb0b516a13fae47b1924e4f7690658ba8\": container with ID starting with 5c79f84fc019d109bac446dd058b4bafb0b516a13fae47b1924e4f7690658ba8 not found: ID does not exist" containerID="5c79f84fc019d109bac446dd058b4bafb0b516a13fae47b1924e4f7690658ba8" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.483571 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c79f84fc019d109bac446dd058b4bafb0b516a13fae47b1924e4f7690658ba8"} err="failed to get container status \"5c79f84fc019d109bac446dd058b4bafb0b516a13fae47b1924e4f7690658ba8\": rpc error: code = NotFound desc = could not find container \"5c79f84fc019d109bac446dd058b4bafb0b516a13fae47b1924e4f7690658ba8\": container with ID starting with 5c79f84fc019d109bac446dd058b4bafb0b516a13fae47b1924e4f7690658ba8 not found: ID does not exist" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.483583 4986 scope.go:117] "RemoveContainer" containerID="7d94eca13821749d7e15a350c411a831d2887f4211ed092a4fb57e26e49fcd9d" Dec 03 13:10:48 crc kubenswrapper[4986]: E1203 13:10:48.483794 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d94eca13821749d7e15a350c411a831d2887f4211ed092a4fb57e26e49fcd9d\": container with ID starting with 7d94eca13821749d7e15a350c411a831d2887f4211ed092a4fb57e26e49fcd9d not found: ID does not exist" containerID="7d94eca13821749d7e15a350c411a831d2887f4211ed092a4fb57e26e49fcd9d" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.483817 4986 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d94eca13821749d7e15a350c411a831d2887f4211ed092a4fb57e26e49fcd9d"} err="failed to get container status \"7d94eca13821749d7e15a350c411a831d2887f4211ed092a4fb57e26e49fcd9d\": rpc error: code = NotFound desc = could not find container \"7d94eca13821749d7e15a350c411a831d2887f4211ed092a4fb57e26e49fcd9d\": container with ID starting with 7d94eca13821749d7e15a350c411a831d2887f4211ed092a4fb57e26e49fcd9d not found: ID does not exist" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.483837 4986 scope.go:117] "RemoveContainer" containerID="2ff8c1f1602e57602cae3f46dd3d2bf8d7abad6bd0952f4347152678e8606d92" Dec 03 13:10:48 crc kubenswrapper[4986]: E1203 13:10:48.484199 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ff8c1f1602e57602cae3f46dd3d2bf8d7abad6bd0952f4347152678e8606d92\": container with ID starting with 2ff8c1f1602e57602cae3f46dd3d2bf8d7abad6bd0952f4347152678e8606d92 not found: ID does not exist" containerID="2ff8c1f1602e57602cae3f46dd3d2bf8d7abad6bd0952f4347152678e8606d92" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.484218 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ff8c1f1602e57602cae3f46dd3d2bf8d7abad6bd0952f4347152678e8606d92"} err="failed to get container status \"2ff8c1f1602e57602cae3f46dd3d2bf8d7abad6bd0952f4347152678e8606d92\": rpc error: code = NotFound desc = could not find container \"2ff8c1f1602e57602cae3f46dd3d2bf8d7abad6bd0952f4347152678e8606d92\": container with ID starting with 2ff8c1f1602e57602cae3f46dd3d2bf8d7abad6bd0952f4347152678e8606d92 not found: ID does not exist" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.484231 4986 scope.go:117] "RemoveContainer" containerID="9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e" Dec 03 13:10:48 crc kubenswrapper[4986]: E1203 13:10:48.484511 4986 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\": container with ID starting with 9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e not found: ID does not exist" containerID="9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.484539 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e"} err="failed to get container status \"9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\": rpc error: code = NotFound desc = could not find container \"9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\": container with ID starting with 9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e not found: ID does not exist" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.484556 4986 scope.go:117] "RemoveContainer" containerID="0284afd2c8d9ae3edc84d051a83f44827c82b51ce989aff5f0442eb2f8fc0af7" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.484828 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0284afd2c8d9ae3edc84d051a83f44827c82b51ce989aff5f0442eb2f8fc0af7"} err="failed to get container status \"0284afd2c8d9ae3edc84d051a83f44827c82b51ce989aff5f0442eb2f8fc0af7\": rpc error: code = NotFound desc = could not find container \"0284afd2c8d9ae3edc84d051a83f44827c82b51ce989aff5f0442eb2f8fc0af7\": container with ID starting with 0284afd2c8d9ae3edc84d051a83f44827c82b51ce989aff5f0442eb2f8fc0af7 not found: ID does not exist" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.484846 4986 scope.go:117] "RemoveContainer" containerID="70229ee3e0b6f1c7ad17628a8b9b3b456957d5cee34dc20aecb98f45140d1e88" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.485122 4986 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70229ee3e0b6f1c7ad17628a8b9b3b456957d5cee34dc20aecb98f45140d1e88"} err="failed to get container status \"70229ee3e0b6f1c7ad17628a8b9b3b456957d5cee34dc20aecb98f45140d1e88\": rpc error: code = NotFound desc = could not find container \"70229ee3e0b6f1c7ad17628a8b9b3b456957d5cee34dc20aecb98f45140d1e88\": container with ID starting with 70229ee3e0b6f1c7ad17628a8b9b3b456957d5cee34dc20aecb98f45140d1e88 not found: ID does not exist" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.485147 4986 scope.go:117] "RemoveContainer" containerID="9dfc7520e7cb1e09eb10bf9615dbae11d7e73474f1b73eed481d18269b65aef9" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.485398 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dfc7520e7cb1e09eb10bf9615dbae11d7e73474f1b73eed481d18269b65aef9"} err="failed to get container status \"9dfc7520e7cb1e09eb10bf9615dbae11d7e73474f1b73eed481d18269b65aef9\": rpc error: code = NotFound desc = could not find container \"9dfc7520e7cb1e09eb10bf9615dbae11d7e73474f1b73eed481d18269b65aef9\": container with ID starting with 9dfc7520e7cb1e09eb10bf9615dbae11d7e73474f1b73eed481d18269b65aef9 not found: ID does not exist" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.485423 4986 scope.go:117] "RemoveContainer" containerID="d77d078572dbe7f11e91ece0d25e76bd17004167dabce267a2b568e798aae158" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.485616 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d77d078572dbe7f11e91ece0d25e76bd17004167dabce267a2b568e798aae158"} err="failed to get container status \"d77d078572dbe7f11e91ece0d25e76bd17004167dabce267a2b568e798aae158\": rpc error: code = NotFound desc = could not find container \"d77d078572dbe7f11e91ece0d25e76bd17004167dabce267a2b568e798aae158\": container with ID starting with 
d77d078572dbe7f11e91ece0d25e76bd17004167dabce267a2b568e798aae158 not found: ID does not exist" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.485633 4986 scope.go:117] "RemoveContainer" containerID="2843fb3fcbadb926b104a9f7c4084468b36293aebed6c692c29419a0457b62a2" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.485862 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2843fb3fcbadb926b104a9f7c4084468b36293aebed6c692c29419a0457b62a2"} err="failed to get container status \"2843fb3fcbadb926b104a9f7c4084468b36293aebed6c692c29419a0457b62a2\": rpc error: code = NotFound desc = could not find container \"2843fb3fcbadb926b104a9f7c4084468b36293aebed6c692c29419a0457b62a2\": container with ID starting with 2843fb3fcbadb926b104a9f7c4084468b36293aebed6c692c29419a0457b62a2 not found: ID does not exist" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.485885 4986 scope.go:117] "RemoveContainer" containerID="6be6405facfa7726cbe56c48184cba0123774e8e34d3ceb9e18daed6f35560c6" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.486126 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6be6405facfa7726cbe56c48184cba0123774e8e34d3ceb9e18daed6f35560c6"} err="failed to get container status \"6be6405facfa7726cbe56c48184cba0123774e8e34d3ceb9e18daed6f35560c6\": rpc error: code = NotFound desc = could not find container \"6be6405facfa7726cbe56c48184cba0123774e8e34d3ceb9e18daed6f35560c6\": container with ID starting with 6be6405facfa7726cbe56c48184cba0123774e8e34d3ceb9e18daed6f35560c6 not found: ID does not exist" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.486144 4986 scope.go:117] "RemoveContainer" containerID="5c79f84fc019d109bac446dd058b4bafb0b516a13fae47b1924e4f7690658ba8" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.486364 4986 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5c79f84fc019d109bac446dd058b4bafb0b516a13fae47b1924e4f7690658ba8"} err="failed to get container status \"5c79f84fc019d109bac446dd058b4bafb0b516a13fae47b1924e4f7690658ba8\": rpc error: code = NotFound desc = could not find container \"5c79f84fc019d109bac446dd058b4bafb0b516a13fae47b1924e4f7690658ba8\": container with ID starting with 5c79f84fc019d109bac446dd058b4bafb0b516a13fae47b1924e4f7690658ba8 not found: ID does not exist" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.486390 4986 scope.go:117] "RemoveContainer" containerID="7d94eca13821749d7e15a350c411a831d2887f4211ed092a4fb57e26e49fcd9d" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.486603 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d94eca13821749d7e15a350c411a831d2887f4211ed092a4fb57e26e49fcd9d"} err="failed to get container status \"7d94eca13821749d7e15a350c411a831d2887f4211ed092a4fb57e26e49fcd9d\": rpc error: code = NotFound desc = could not find container \"7d94eca13821749d7e15a350c411a831d2887f4211ed092a4fb57e26e49fcd9d\": container with ID starting with 7d94eca13821749d7e15a350c411a831d2887f4211ed092a4fb57e26e49fcd9d not found: ID does not exist" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.486620 4986 scope.go:117] "RemoveContainer" containerID="2ff8c1f1602e57602cae3f46dd3d2bf8d7abad6bd0952f4347152678e8606d92" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.486799 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ff8c1f1602e57602cae3f46dd3d2bf8d7abad6bd0952f4347152678e8606d92"} err="failed to get container status \"2ff8c1f1602e57602cae3f46dd3d2bf8d7abad6bd0952f4347152678e8606d92\": rpc error: code = NotFound desc = could not find container \"2ff8c1f1602e57602cae3f46dd3d2bf8d7abad6bd0952f4347152678e8606d92\": container with ID starting with 2ff8c1f1602e57602cae3f46dd3d2bf8d7abad6bd0952f4347152678e8606d92 not found: ID does not 
exist" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.486820 4986 scope.go:117] "RemoveContainer" containerID="9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.487151 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e"} err="failed to get container status \"9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\": rpc error: code = NotFound desc = could not find container \"9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\": container with ID starting with 9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e not found: ID does not exist" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.487168 4986 scope.go:117] "RemoveContainer" containerID="0284afd2c8d9ae3edc84d051a83f44827c82b51ce989aff5f0442eb2f8fc0af7" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.487558 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0284afd2c8d9ae3edc84d051a83f44827c82b51ce989aff5f0442eb2f8fc0af7"} err="failed to get container status \"0284afd2c8d9ae3edc84d051a83f44827c82b51ce989aff5f0442eb2f8fc0af7\": rpc error: code = NotFound desc = could not find container \"0284afd2c8d9ae3edc84d051a83f44827c82b51ce989aff5f0442eb2f8fc0af7\": container with ID starting with 0284afd2c8d9ae3edc84d051a83f44827c82b51ce989aff5f0442eb2f8fc0af7 not found: ID does not exist" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.487575 4986 scope.go:117] "RemoveContainer" containerID="70229ee3e0b6f1c7ad17628a8b9b3b456957d5cee34dc20aecb98f45140d1e88" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.487897 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70229ee3e0b6f1c7ad17628a8b9b3b456957d5cee34dc20aecb98f45140d1e88"} err="failed to get container status 
\"70229ee3e0b6f1c7ad17628a8b9b3b456957d5cee34dc20aecb98f45140d1e88\": rpc error: code = NotFound desc = could not find container \"70229ee3e0b6f1c7ad17628a8b9b3b456957d5cee34dc20aecb98f45140d1e88\": container with ID starting with 70229ee3e0b6f1c7ad17628a8b9b3b456957d5cee34dc20aecb98f45140d1e88 not found: ID does not exist" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.487918 4986 scope.go:117] "RemoveContainer" containerID="9dfc7520e7cb1e09eb10bf9615dbae11d7e73474f1b73eed481d18269b65aef9" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.488161 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dfc7520e7cb1e09eb10bf9615dbae11d7e73474f1b73eed481d18269b65aef9"} err="failed to get container status \"9dfc7520e7cb1e09eb10bf9615dbae11d7e73474f1b73eed481d18269b65aef9\": rpc error: code = NotFound desc = could not find container \"9dfc7520e7cb1e09eb10bf9615dbae11d7e73474f1b73eed481d18269b65aef9\": container with ID starting with 9dfc7520e7cb1e09eb10bf9615dbae11d7e73474f1b73eed481d18269b65aef9 not found: ID does not exist" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.488179 4986 scope.go:117] "RemoveContainer" containerID="d77d078572dbe7f11e91ece0d25e76bd17004167dabce267a2b568e798aae158" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.488380 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d77d078572dbe7f11e91ece0d25e76bd17004167dabce267a2b568e798aae158"} err="failed to get container status \"d77d078572dbe7f11e91ece0d25e76bd17004167dabce267a2b568e798aae158\": rpc error: code = NotFound desc = could not find container \"d77d078572dbe7f11e91ece0d25e76bd17004167dabce267a2b568e798aae158\": container with ID starting with d77d078572dbe7f11e91ece0d25e76bd17004167dabce267a2b568e798aae158 not found: ID does not exist" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.488398 4986 scope.go:117] "RemoveContainer" 
containerID="2843fb3fcbadb926b104a9f7c4084468b36293aebed6c692c29419a0457b62a2" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.488594 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2843fb3fcbadb926b104a9f7c4084468b36293aebed6c692c29419a0457b62a2"} err="failed to get container status \"2843fb3fcbadb926b104a9f7c4084468b36293aebed6c692c29419a0457b62a2\": rpc error: code = NotFound desc = could not find container \"2843fb3fcbadb926b104a9f7c4084468b36293aebed6c692c29419a0457b62a2\": container with ID starting with 2843fb3fcbadb926b104a9f7c4084468b36293aebed6c692c29419a0457b62a2 not found: ID does not exist" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.488611 4986 scope.go:117] "RemoveContainer" containerID="6be6405facfa7726cbe56c48184cba0123774e8e34d3ceb9e18daed6f35560c6" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.488800 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6be6405facfa7726cbe56c48184cba0123774e8e34d3ceb9e18daed6f35560c6"} err="failed to get container status \"6be6405facfa7726cbe56c48184cba0123774e8e34d3ceb9e18daed6f35560c6\": rpc error: code = NotFound desc = could not find container \"6be6405facfa7726cbe56c48184cba0123774e8e34d3ceb9e18daed6f35560c6\": container with ID starting with 6be6405facfa7726cbe56c48184cba0123774e8e34d3ceb9e18daed6f35560c6 not found: ID does not exist" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.488816 4986 scope.go:117] "RemoveContainer" containerID="5c79f84fc019d109bac446dd058b4bafb0b516a13fae47b1924e4f7690658ba8" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.489010 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c79f84fc019d109bac446dd058b4bafb0b516a13fae47b1924e4f7690658ba8"} err="failed to get container status \"5c79f84fc019d109bac446dd058b4bafb0b516a13fae47b1924e4f7690658ba8\": rpc error: code = NotFound desc = could 
not find container \"5c79f84fc019d109bac446dd058b4bafb0b516a13fae47b1924e4f7690658ba8\": container with ID starting with 5c79f84fc019d109bac446dd058b4bafb0b516a13fae47b1924e4f7690658ba8 not found: ID does not exist" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.489032 4986 scope.go:117] "RemoveContainer" containerID="7d94eca13821749d7e15a350c411a831d2887f4211ed092a4fb57e26e49fcd9d" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.489232 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d94eca13821749d7e15a350c411a831d2887f4211ed092a4fb57e26e49fcd9d"} err="failed to get container status \"7d94eca13821749d7e15a350c411a831d2887f4211ed092a4fb57e26e49fcd9d\": rpc error: code = NotFound desc = could not find container \"7d94eca13821749d7e15a350c411a831d2887f4211ed092a4fb57e26e49fcd9d\": container with ID starting with 7d94eca13821749d7e15a350c411a831d2887f4211ed092a4fb57e26e49fcd9d not found: ID does not exist" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.489248 4986 scope.go:117] "RemoveContainer" containerID="2ff8c1f1602e57602cae3f46dd3d2bf8d7abad6bd0952f4347152678e8606d92" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.489550 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ff8c1f1602e57602cae3f46dd3d2bf8d7abad6bd0952f4347152678e8606d92"} err="failed to get container status \"2ff8c1f1602e57602cae3f46dd3d2bf8d7abad6bd0952f4347152678e8606d92\": rpc error: code = NotFound desc = could not find container \"2ff8c1f1602e57602cae3f46dd3d2bf8d7abad6bd0952f4347152678e8606d92\": container with ID starting with 2ff8c1f1602e57602cae3f46dd3d2bf8d7abad6bd0952f4347152678e8606d92 not found: ID does not exist" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.489595 4986 scope.go:117] "RemoveContainer" containerID="9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 
13:10:48.489778 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e"} err="failed to get container status \"9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\": rpc error: code = NotFound desc = could not find container \"9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\": container with ID starting with 9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e not found: ID does not exist" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.489793 4986 scope.go:117] "RemoveContainer" containerID="0284afd2c8d9ae3edc84d051a83f44827c82b51ce989aff5f0442eb2f8fc0af7" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.490011 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0284afd2c8d9ae3edc84d051a83f44827c82b51ce989aff5f0442eb2f8fc0af7"} err="failed to get container status \"0284afd2c8d9ae3edc84d051a83f44827c82b51ce989aff5f0442eb2f8fc0af7\": rpc error: code = NotFound desc = could not find container \"0284afd2c8d9ae3edc84d051a83f44827c82b51ce989aff5f0442eb2f8fc0af7\": container with ID starting with 0284afd2c8d9ae3edc84d051a83f44827c82b51ce989aff5f0442eb2f8fc0af7 not found: ID does not exist" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.490049 4986 scope.go:117] "RemoveContainer" containerID="70229ee3e0b6f1c7ad17628a8b9b3b456957d5cee34dc20aecb98f45140d1e88" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.490349 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70229ee3e0b6f1c7ad17628a8b9b3b456957d5cee34dc20aecb98f45140d1e88"} err="failed to get container status \"70229ee3e0b6f1c7ad17628a8b9b3b456957d5cee34dc20aecb98f45140d1e88\": rpc error: code = NotFound desc = could not find container \"70229ee3e0b6f1c7ad17628a8b9b3b456957d5cee34dc20aecb98f45140d1e88\": container with ID starting with 
70229ee3e0b6f1c7ad17628a8b9b3b456957d5cee34dc20aecb98f45140d1e88 not found: ID does not exist" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.490383 4986 scope.go:117] "RemoveContainer" containerID="9dfc7520e7cb1e09eb10bf9615dbae11d7e73474f1b73eed481d18269b65aef9" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.490778 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dfc7520e7cb1e09eb10bf9615dbae11d7e73474f1b73eed481d18269b65aef9"} err="failed to get container status \"9dfc7520e7cb1e09eb10bf9615dbae11d7e73474f1b73eed481d18269b65aef9\": rpc error: code = NotFound desc = could not find container \"9dfc7520e7cb1e09eb10bf9615dbae11d7e73474f1b73eed481d18269b65aef9\": container with ID starting with 9dfc7520e7cb1e09eb10bf9615dbae11d7e73474f1b73eed481d18269b65aef9 not found: ID does not exist" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.490796 4986 scope.go:117] "RemoveContainer" containerID="d77d078572dbe7f11e91ece0d25e76bd17004167dabce267a2b568e798aae158" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.491048 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d77d078572dbe7f11e91ece0d25e76bd17004167dabce267a2b568e798aae158"} err="failed to get container status \"d77d078572dbe7f11e91ece0d25e76bd17004167dabce267a2b568e798aae158\": rpc error: code = NotFound desc = could not find container \"d77d078572dbe7f11e91ece0d25e76bd17004167dabce267a2b568e798aae158\": container with ID starting with d77d078572dbe7f11e91ece0d25e76bd17004167dabce267a2b568e798aae158 not found: ID does not exist" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.491089 4986 scope.go:117] "RemoveContainer" containerID="2843fb3fcbadb926b104a9f7c4084468b36293aebed6c692c29419a0457b62a2" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.491432 4986 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2843fb3fcbadb926b104a9f7c4084468b36293aebed6c692c29419a0457b62a2"} err="failed to get container status \"2843fb3fcbadb926b104a9f7c4084468b36293aebed6c692c29419a0457b62a2\": rpc error: code = NotFound desc = could not find container \"2843fb3fcbadb926b104a9f7c4084468b36293aebed6c692c29419a0457b62a2\": container with ID starting with 2843fb3fcbadb926b104a9f7c4084468b36293aebed6c692c29419a0457b62a2 not found: ID does not exist" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.491458 4986 scope.go:117] "RemoveContainer" containerID="6be6405facfa7726cbe56c48184cba0123774e8e34d3ceb9e18daed6f35560c6" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.491698 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6be6405facfa7726cbe56c48184cba0123774e8e34d3ceb9e18daed6f35560c6"} err="failed to get container status \"6be6405facfa7726cbe56c48184cba0123774e8e34d3ceb9e18daed6f35560c6\": rpc error: code = NotFound desc = could not find container \"6be6405facfa7726cbe56c48184cba0123774e8e34d3ceb9e18daed6f35560c6\": container with ID starting with 6be6405facfa7726cbe56c48184cba0123774e8e34d3ceb9e18daed6f35560c6 not found: ID does not exist" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.491716 4986 scope.go:117] "RemoveContainer" containerID="5c79f84fc019d109bac446dd058b4bafb0b516a13fae47b1924e4f7690658ba8" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.492043 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c79f84fc019d109bac446dd058b4bafb0b516a13fae47b1924e4f7690658ba8"} err="failed to get container status \"5c79f84fc019d109bac446dd058b4bafb0b516a13fae47b1924e4f7690658ba8\": rpc error: code = NotFound desc = could not find container \"5c79f84fc019d109bac446dd058b4bafb0b516a13fae47b1924e4f7690658ba8\": container with ID starting with 5c79f84fc019d109bac446dd058b4bafb0b516a13fae47b1924e4f7690658ba8 not found: ID does not 
exist" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.492060 4986 scope.go:117] "RemoveContainer" containerID="7d94eca13821749d7e15a350c411a831d2887f4211ed092a4fb57e26e49fcd9d" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.492248 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d94eca13821749d7e15a350c411a831d2887f4211ed092a4fb57e26e49fcd9d"} err="failed to get container status \"7d94eca13821749d7e15a350c411a831d2887f4211ed092a4fb57e26e49fcd9d\": rpc error: code = NotFound desc = could not find container \"7d94eca13821749d7e15a350c411a831d2887f4211ed092a4fb57e26e49fcd9d\": container with ID starting with 7d94eca13821749d7e15a350c411a831d2887f4211ed092a4fb57e26e49fcd9d not found: ID does not exist" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.492268 4986 scope.go:117] "RemoveContainer" containerID="2ff8c1f1602e57602cae3f46dd3d2bf8d7abad6bd0952f4347152678e8606d92" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.492516 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ff8c1f1602e57602cae3f46dd3d2bf8d7abad6bd0952f4347152678e8606d92"} err="failed to get container status \"2ff8c1f1602e57602cae3f46dd3d2bf8d7abad6bd0952f4347152678e8606d92\": rpc error: code = NotFound desc = could not find container \"2ff8c1f1602e57602cae3f46dd3d2bf8d7abad6bd0952f4347152678e8606d92\": container with ID starting with 2ff8c1f1602e57602cae3f46dd3d2bf8d7abad6bd0952f4347152678e8606d92 not found: ID does not exist" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.492538 4986 scope.go:117] "RemoveContainer" containerID="9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.492756 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e"} err="failed to get container status 
\"9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\": rpc error: code = NotFound desc = could not find container \"9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e\": container with ID starting with 9c34a338f506e8dd09757984f92a29fd76ddd2d25fc00d680cc4883c4766080e not found: ID does not exist" Dec 03 13:10:48 crc kubenswrapper[4986]: I1203 13:10:48.953072 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3a45156-295b-4093-80e7-2059f81ddbd7" path="/var/lib/kubelet/pods/d3a45156-295b-4093-80e7-2059f81ddbd7/volumes" Dec 03 13:10:49 crc kubenswrapper[4986]: I1203 13:10:49.252965 4986 generic.go:334] "Generic (PLEG): container finished" podID="afd316b0-03e5-4240-8ce3-9403986ac6ff" containerID="61207fe80c982e2a7a1ea17d11922149ac890c126a86c3917d911153a1584997" exitCode=0 Dec 03 13:10:49 crc kubenswrapper[4986]: I1203 13:10:49.253019 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" event={"ID":"afd316b0-03e5-4240-8ce3-9403986ac6ff","Type":"ContainerDied","Data":"61207fe80c982e2a7a1ea17d11922149ac890c126a86c3917d911153a1584997"} Dec 03 13:10:49 crc kubenswrapper[4986]: I1203 13:10:49.255691 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-px97g_97196b6d-75cc-4de4-8805-f9ce3fbd4230/kube-multus/2.log" Dec 03 13:10:49 crc kubenswrapper[4986]: I1203 13:10:49.255854 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-px97g" event={"ID":"97196b6d-75cc-4de4-8805-f9ce3fbd4230","Type":"ContainerStarted","Data":"2cf1f9a37e91d0b985496f8967f94acc2ce8123b708d2c1660ae5b3e1cfe17fe"} Dec 03 13:10:50 crc kubenswrapper[4986]: I1203 13:10:50.270624 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" event={"ID":"afd316b0-03e5-4240-8ce3-9403986ac6ff","Type":"ContainerStarted","Data":"04dac673435178260832521853659c1afdc91e4ee035a527f93bfdaccad0b58a"} Dec 03 
13:10:50 crc kubenswrapper[4986]: I1203 13:10:50.270900 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" event={"ID":"afd316b0-03e5-4240-8ce3-9403986ac6ff","Type":"ContainerStarted","Data":"215b8be2063f12eb5bb4a17c05e65ec17abb3517890957694382e242078c22fa"} Dec 03 13:10:50 crc kubenswrapper[4986]: I1203 13:10:50.270913 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" event={"ID":"afd316b0-03e5-4240-8ce3-9403986ac6ff","Type":"ContainerStarted","Data":"97133ac5c733d3785e2f587a57fb0e2b54170b8abf32ee4d19a6efaaadf8c624"} Dec 03 13:10:50 crc kubenswrapper[4986]: I1203 13:10:50.270924 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" event={"ID":"afd316b0-03e5-4240-8ce3-9403986ac6ff","Type":"ContainerStarted","Data":"254fdbb8212c75ad5e50a25cb52d357f201fcd860f3690300bf2d03b0e8e6123"} Dec 03 13:10:50 crc kubenswrapper[4986]: I1203 13:10:50.270934 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" event={"ID":"afd316b0-03e5-4240-8ce3-9403986ac6ff","Type":"ContainerStarted","Data":"6ba8c8fd5702743509233a3b4dc52fe7414af6cb5df78e597888bbe3bf85aa9a"} Dec 03 13:10:50 crc kubenswrapper[4986]: I1203 13:10:50.270944 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" event={"ID":"afd316b0-03e5-4240-8ce3-9403986ac6ff","Type":"ContainerStarted","Data":"4e646840c5fadba9b9bda99b9fb9df62225218249c6b1b0f0c41682417b1ac9b"} Dec 03 13:10:53 crc kubenswrapper[4986]: I1203 13:10:53.291190 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" event={"ID":"afd316b0-03e5-4240-8ce3-9403986ac6ff","Type":"ContainerStarted","Data":"7cbc84443a3fd0a8e32f8f7b1d2a94ceb38e5fb242aac0dbb234ef2fc3632593"} Dec 03 13:10:55 crc kubenswrapper[4986]: I1203 13:10:55.310741 4986 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" event={"ID":"afd316b0-03e5-4240-8ce3-9403986ac6ff","Type":"ContainerStarted","Data":"110e184536142309b2b7a81c1d783eee8c67f515dc442d53b26ce867a36d43d3"} Dec 03 13:10:55 crc kubenswrapper[4986]: I1203 13:10:55.311493 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:55 crc kubenswrapper[4986]: I1203 13:10:55.342163 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" podStartSLOduration=8.342145522 podStartE2EDuration="8.342145522s" podCreationTimestamp="2025-12-03 13:10:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:10:55.340919889 +0000 UTC m=+914.807351090" watchObservedRunningTime="2025-12-03 13:10:55.342145522 +0000 UTC m=+914.808576713" Dec 03 13:10:55 crc kubenswrapper[4986]: I1203 13:10:55.343121 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:56 crc kubenswrapper[4986]: I1203 13:10:56.319609 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:56 crc kubenswrapper[4986]: I1203 13:10:56.319660 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:10:56 crc kubenswrapper[4986]: I1203 13:10:56.358265 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:11:03 crc kubenswrapper[4986]: I1203 13:11:03.491167 4986 patch_prober.go:28] interesting pod/machine-config-daemon-xggpj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 13:11:03 crc kubenswrapper[4986]: I1203 13:11:03.491831 4986 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 13:11:10 crc kubenswrapper[4986]: I1203 13:11:10.665957 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7cmnw"] Dec 03 13:11:10 crc kubenswrapper[4986]: I1203 13:11:10.667520 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7cmnw" Dec 03 13:11:10 crc kubenswrapper[4986]: I1203 13:11:10.672951 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7cmnw"] Dec 03 13:11:10 crc kubenswrapper[4986]: I1203 13:11:10.802897 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f46cea2-be35-4686-83dd-471af2dd6f01-catalog-content\") pod \"community-operators-7cmnw\" (UID: \"9f46cea2-be35-4686-83dd-471af2dd6f01\") " pod="openshift-marketplace/community-operators-7cmnw" Dec 03 13:11:10 crc kubenswrapper[4986]: I1203 13:11:10.803095 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f46cea2-be35-4686-83dd-471af2dd6f01-utilities\") pod \"community-operators-7cmnw\" (UID: \"9f46cea2-be35-4686-83dd-471af2dd6f01\") " pod="openshift-marketplace/community-operators-7cmnw" Dec 03 13:11:10 crc kubenswrapper[4986]: I1203 13:11:10.803187 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-96n5w\" (UniqueName: \"kubernetes.io/projected/9f46cea2-be35-4686-83dd-471af2dd6f01-kube-api-access-96n5w\") pod \"community-operators-7cmnw\" (UID: \"9f46cea2-be35-4686-83dd-471af2dd6f01\") " pod="openshift-marketplace/community-operators-7cmnw" Dec 03 13:11:10 crc kubenswrapper[4986]: I1203 13:11:10.903831 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f46cea2-be35-4686-83dd-471af2dd6f01-utilities\") pod \"community-operators-7cmnw\" (UID: \"9f46cea2-be35-4686-83dd-471af2dd6f01\") " pod="openshift-marketplace/community-operators-7cmnw" Dec 03 13:11:10 crc kubenswrapper[4986]: I1203 13:11:10.903894 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96n5w\" (UniqueName: \"kubernetes.io/projected/9f46cea2-be35-4686-83dd-471af2dd6f01-kube-api-access-96n5w\") pod \"community-operators-7cmnw\" (UID: \"9f46cea2-be35-4686-83dd-471af2dd6f01\") " pod="openshift-marketplace/community-operators-7cmnw" Dec 03 13:11:10 crc kubenswrapper[4986]: I1203 13:11:10.903924 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f46cea2-be35-4686-83dd-471af2dd6f01-catalog-content\") pod \"community-operators-7cmnw\" (UID: \"9f46cea2-be35-4686-83dd-471af2dd6f01\") " pod="openshift-marketplace/community-operators-7cmnw" Dec 03 13:11:10 crc kubenswrapper[4986]: I1203 13:11:10.904380 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f46cea2-be35-4686-83dd-471af2dd6f01-utilities\") pod \"community-operators-7cmnw\" (UID: \"9f46cea2-be35-4686-83dd-471af2dd6f01\") " pod="openshift-marketplace/community-operators-7cmnw" Dec 03 13:11:10 crc kubenswrapper[4986]: I1203 13:11:10.904409 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f46cea2-be35-4686-83dd-471af2dd6f01-catalog-content\") pod \"community-operators-7cmnw\" (UID: \"9f46cea2-be35-4686-83dd-471af2dd6f01\") " pod="openshift-marketplace/community-operators-7cmnw" Dec 03 13:11:10 crc kubenswrapper[4986]: I1203 13:11:10.921164 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96n5w\" (UniqueName: \"kubernetes.io/projected/9f46cea2-be35-4686-83dd-471af2dd6f01-kube-api-access-96n5w\") pod \"community-operators-7cmnw\" (UID: \"9f46cea2-be35-4686-83dd-471af2dd6f01\") " pod="openshift-marketplace/community-operators-7cmnw" Dec 03 13:11:10 crc kubenswrapper[4986]: I1203 13:11:10.992383 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7cmnw" Dec 03 13:11:11 crc kubenswrapper[4986]: I1203 13:11:11.264752 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7cmnw"] Dec 03 13:11:11 crc kubenswrapper[4986]: I1203 13:11:11.412867 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7cmnw" event={"ID":"9f46cea2-be35-4686-83dd-471af2dd6f01","Type":"ContainerStarted","Data":"7a448b534688979e06edd3941cbcab8543e5406ba1aa23dc4f519dc9bef24984"} Dec 03 13:11:11 crc kubenswrapper[4986]: I1203 13:11:11.413279 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7cmnw" event={"ID":"9f46cea2-be35-4686-83dd-471af2dd6f01","Type":"ContainerStarted","Data":"5ec773d1f9a661525a4fd55fa74af2d071ba614b4735a96759ab83e59a97b0bd"} Dec 03 13:11:12 crc kubenswrapper[4986]: I1203 13:11:12.424881 4986 generic.go:334] "Generic (PLEG): container finished" podID="9f46cea2-be35-4686-83dd-471af2dd6f01" containerID="7a448b534688979e06edd3941cbcab8543e5406ba1aa23dc4f519dc9bef24984" exitCode=0 Dec 03 13:11:12 crc kubenswrapper[4986]: I1203 13:11:12.424961 4986 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7cmnw" event={"ID":"9f46cea2-be35-4686-83dd-471af2dd6f01","Type":"ContainerDied","Data":"7a448b534688979e06edd3941cbcab8543e5406ba1aa23dc4f519dc9bef24984"} Dec 03 13:11:13 crc kubenswrapper[4986]: I1203 13:11:13.435408 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7cmnw" event={"ID":"9f46cea2-be35-4686-83dd-471af2dd6f01","Type":"ContainerStarted","Data":"88fe371b5a93a2c23ae2bb7ad61c4fbb70b9cff614ea314ed4ad5b6ff1efb6a4"} Dec 03 13:11:14 crc kubenswrapper[4986]: I1203 13:11:14.441910 4986 generic.go:334] "Generic (PLEG): container finished" podID="9f46cea2-be35-4686-83dd-471af2dd6f01" containerID="88fe371b5a93a2c23ae2bb7ad61c4fbb70b9cff614ea314ed4ad5b6ff1efb6a4" exitCode=0 Dec 03 13:11:14 crc kubenswrapper[4986]: I1203 13:11:14.441977 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7cmnw" event={"ID":"9f46cea2-be35-4686-83dd-471af2dd6f01","Type":"ContainerDied","Data":"88fe371b5a93a2c23ae2bb7ad61c4fbb70b9cff614ea314ed4ad5b6ff1efb6a4"} Dec 03 13:11:15 crc kubenswrapper[4986]: I1203 13:11:15.451550 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7cmnw" event={"ID":"9f46cea2-be35-4686-83dd-471af2dd6f01","Type":"ContainerStarted","Data":"c9cadd8cf1a74de46475a034ebe136be68d0961efe1c3f94a3ed5082e303e83c"} Dec 03 13:11:15 crc kubenswrapper[4986]: I1203 13:11:15.478266 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7cmnw" podStartSLOduration=2.984691048 podStartE2EDuration="5.478248755s" podCreationTimestamp="2025-12-03 13:11:10 +0000 UTC" firstStartedPulling="2025-12-03 13:11:12.429495016 +0000 UTC m=+931.895926247" lastFinishedPulling="2025-12-03 13:11:14.923052723 +0000 UTC m=+934.389483954" observedRunningTime="2025-12-03 13:11:15.475414699 
+0000 UTC m=+934.941845910" watchObservedRunningTime="2025-12-03 13:11:15.478248755 +0000 UTC m=+934.944679956" Dec 03 13:11:18 crc kubenswrapper[4986]: I1203 13:11:18.113496 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gndvm" Dec 03 13:11:20 crc kubenswrapper[4986]: I1203 13:11:20.993190 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7cmnw" Dec 03 13:11:20 crc kubenswrapper[4986]: I1203 13:11:20.993643 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7cmnw" Dec 03 13:11:21 crc kubenswrapper[4986]: I1203 13:11:21.035394 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7cmnw" Dec 03 13:11:21 crc kubenswrapper[4986]: I1203 13:11:21.527446 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7cmnw" Dec 03 13:11:21 crc kubenswrapper[4986]: I1203 13:11:21.581126 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7cmnw"] Dec 03 13:11:23 crc kubenswrapper[4986]: I1203 13:11:23.497984 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7cmnw" podUID="9f46cea2-be35-4686-83dd-471af2dd6f01" containerName="registry-server" containerID="cri-o://c9cadd8cf1a74de46475a034ebe136be68d0961efe1c3f94a3ed5082e303e83c" gracePeriod=2 Dec 03 13:11:23 crc kubenswrapper[4986]: I1203 13:11:23.985193 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7cmnw" Dec 03 13:11:24 crc kubenswrapper[4986]: I1203 13:11:24.098909 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96n5w\" (UniqueName: \"kubernetes.io/projected/9f46cea2-be35-4686-83dd-471af2dd6f01-kube-api-access-96n5w\") pod \"9f46cea2-be35-4686-83dd-471af2dd6f01\" (UID: \"9f46cea2-be35-4686-83dd-471af2dd6f01\") " Dec 03 13:11:24 crc kubenswrapper[4986]: I1203 13:11:24.099012 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f46cea2-be35-4686-83dd-471af2dd6f01-utilities\") pod \"9f46cea2-be35-4686-83dd-471af2dd6f01\" (UID: \"9f46cea2-be35-4686-83dd-471af2dd6f01\") " Dec 03 13:11:24 crc kubenswrapper[4986]: I1203 13:11:24.099065 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f46cea2-be35-4686-83dd-471af2dd6f01-catalog-content\") pod \"9f46cea2-be35-4686-83dd-471af2dd6f01\" (UID: \"9f46cea2-be35-4686-83dd-471af2dd6f01\") " Dec 03 13:11:24 crc kubenswrapper[4986]: I1203 13:11:24.103030 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f46cea2-be35-4686-83dd-471af2dd6f01-utilities" (OuterVolumeSpecName: "utilities") pod "9f46cea2-be35-4686-83dd-471af2dd6f01" (UID: "9f46cea2-be35-4686-83dd-471af2dd6f01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:11:24 crc kubenswrapper[4986]: I1203 13:11:24.105896 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f46cea2-be35-4686-83dd-471af2dd6f01-kube-api-access-96n5w" (OuterVolumeSpecName: "kube-api-access-96n5w") pod "9f46cea2-be35-4686-83dd-471af2dd6f01" (UID: "9f46cea2-be35-4686-83dd-471af2dd6f01"). InnerVolumeSpecName "kube-api-access-96n5w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:11:24 crc kubenswrapper[4986]: I1203 13:11:24.157954 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f46cea2-be35-4686-83dd-471af2dd6f01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9f46cea2-be35-4686-83dd-471af2dd6f01" (UID: "9f46cea2-be35-4686-83dd-471af2dd6f01"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:11:24 crc kubenswrapper[4986]: I1203 13:11:24.201317 4986 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f46cea2-be35-4686-83dd-471af2dd6f01-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 13:11:24 crc kubenswrapper[4986]: I1203 13:11:24.201397 4986 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f46cea2-be35-4686-83dd-471af2dd6f01-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 13:11:24 crc kubenswrapper[4986]: I1203 13:11:24.201413 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96n5w\" (UniqueName: \"kubernetes.io/projected/9f46cea2-be35-4686-83dd-471af2dd6f01-kube-api-access-96n5w\") on node \"crc\" DevicePath \"\"" Dec 03 13:11:24 crc kubenswrapper[4986]: I1203 13:11:24.506154 4986 generic.go:334] "Generic (PLEG): container finished" podID="9f46cea2-be35-4686-83dd-471af2dd6f01" containerID="c9cadd8cf1a74de46475a034ebe136be68d0961efe1c3f94a3ed5082e303e83c" exitCode=0 Dec 03 13:11:24 crc kubenswrapper[4986]: I1203 13:11:24.506220 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7cmnw" event={"ID":"9f46cea2-be35-4686-83dd-471af2dd6f01","Type":"ContainerDied","Data":"c9cadd8cf1a74de46475a034ebe136be68d0961efe1c3f94a3ed5082e303e83c"} Dec 03 13:11:24 crc kubenswrapper[4986]: I1203 13:11:24.506301 4986 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/community-operators-7cmnw" Dec 03 13:11:24 crc kubenswrapper[4986]: I1203 13:11:24.506276 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7cmnw" event={"ID":"9f46cea2-be35-4686-83dd-471af2dd6f01","Type":"ContainerDied","Data":"5ec773d1f9a661525a4fd55fa74af2d071ba614b4735a96759ab83e59a97b0bd"} Dec 03 13:11:24 crc kubenswrapper[4986]: I1203 13:11:24.506379 4986 scope.go:117] "RemoveContainer" containerID="c9cadd8cf1a74de46475a034ebe136be68d0961efe1c3f94a3ed5082e303e83c" Dec 03 13:11:24 crc kubenswrapper[4986]: I1203 13:11:24.529774 4986 scope.go:117] "RemoveContainer" containerID="88fe371b5a93a2c23ae2bb7ad61c4fbb70b9cff614ea314ed4ad5b6ff1efb6a4" Dec 03 13:11:24 crc kubenswrapper[4986]: I1203 13:11:24.540952 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7cmnw"] Dec 03 13:11:24 crc kubenswrapper[4986]: I1203 13:11:24.546256 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7cmnw"] Dec 03 13:11:24 crc kubenswrapper[4986]: I1203 13:11:24.559508 4986 scope.go:117] "RemoveContainer" containerID="7a448b534688979e06edd3941cbcab8543e5406ba1aa23dc4f519dc9bef24984" Dec 03 13:11:24 crc kubenswrapper[4986]: I1203 13:11:24.576343 4986 scope.go:117] "RemoveContainer" containerID="c9cadd8cf1a74de46475a034ebe136be68d0961efe1c3f94a3ed5082e303e83c" Dec 03 13:11:24 crc kubenswrapper[4986]: E1203 13:11:24.576691 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9cadd8cf1a74de46475a034ebe136be68d0961efe1c3f94a3ed5082e303e83c\": container with ID starting with c9cadd8cf1a74de46475a034ebe136be68d0961efe1c3f94a3ed5082e303e83c not found: ID does not exist" containerID="c9cadd8cf1a74de46475a034ebe136be68d0961efe1c3f94a3ed5082e303e83c" Dec 03 13:11:24 crc kubenswrapper[4986]: I1203 13:11:24.576734 
4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9cadd8cf1a74de46475a034ebe136be68d0961efe1c3f94a3ed5082e303e83c"} err="failed to get container status \"c9cadd8cf1a74de46475a034ebe136be68d0961efe1c3f94a3ed5082e303e83c\": rpc error: code = NotFound desc = could not find container \"c9cadd8cf1a74de46475a034ebe136be68d0961efe1c3f94a3ed5082e303e83c\": container with ID starting with c9cadd8cf1a74de46475a034ebe136be68d0961efe1c3f94a3ed5082e303e83c not found: ID does not exist" Dec 03 13:11:24 crc kubenswrapper[4986]: I1203 13:11:24.576755 4986 scope.go:117] "RemoveContainer" containerID="88fe371b5a93a2c23ae2bb7ad61c4fbb70b9cff614ea314ed4ad5b6ff1efb6a4" Dec 03 13:11:24 crc kubenswrapper[4986]: E1203 13:11:24.577067 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88fe371b5a93a2c23ae2bb7ad61c4fbb70b9cff614ea314ed4ad5b6ff1efb6a4\": container with ID starting with 88fe371b5a93a2c23ae2bb7ad61c4fbb70b9cff614ea314ed4ad5b6ff1efb6a4 not found: ID does not exist" containerID="88fe371b5a93a2c23ae2bb7ad61c4fbb70b9cff614ea314ed4ad5b6ff1efb6a4" Dec 03 13:11:24 crc kubenswrapper[4986]: I1203 13:11:24.577125 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88fe371b5a93a2c23ae2bb7ad61c4fbb70b9cff614ea314ed4ad5b6ff1efb6a4"} err="failed to get container status \"88fe371b5a93a2c23ae2bb7ad61c4fbb70b9cff614ea314ed4ad5b6ff1efb6a4\": rpc error: code = NotFound desc = could not find container \"88fe371b5a93a2c23ae2bb7ad61c4fbb70b9cff614ea314ed4ad5b6ff1efb6a4\": container with ID starting with 88fe371b5a93a2c23ae2bb7ad61c4fbb70b9cff614ea314ed4ad5b6ff1efb6a4 not found: ID does not exist" Dec 03 13:11:24 crc kubenswrapper[4986]: I1203 13:11:24.577159 4986 scope.go:117] "RemoveContainer" containerID="7a448b534688979e06edd3941cbcab8543e5406ba1aa23dc4f519dc9bef24984" Dec 03 13:11:24 crc kubenswrapper[4986]: E1203 
13:11:24.577487 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a448b534688979e06edd3941cbcab8543e5406ba1aa23dc4f519dc9bef24984\": container with ID starting with 7a448b534688979e06edd3941cbcab8543e5406ba1aa23dc4f519dc9bef24984 not found: ID does not exist" containerID="7a448b534688979e06edd3941cbcab8543e5406ba1aa23dc4f519dc9bef24984" Dec 03 13:11:24 crc kubenswrapper[4986]: I1203 13:11:24.577517 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a448b534688979e06edd3941cbcab8543e5406ba1aa23dc4f519dc9bef24984"} err="failed to get container status \"7a448b534688979e06edd3941cbcab8543e5406ba1aa23dc4f519dc9bef24984\": rpc error: code = NotFound desc = could not find container \"7a448b534688979e06edd3941cbcab8543e5406ba1aa23dc4f519dc9bef24984\": container with ID starting with 7a448b534688979e06edd3941cbcab8543e5406ba1aa23dc4f519dc9bef24984 not found: ID does not exist" Dec 03 13:11:24 crc kubenswrapper[4986]: I1203 13:11:24.950670 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f46cea2-be35-4686-83dd-471af2dd6f01" path="/var/lib/kubelet/pods/9f46cea2-be35-4686-83dd-471af2dd6f01/volumes" Dec 03 13:11:29 crc kubenswrapper[4986]: I1203 13:11:29.248140 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flxqdt"] Dec 03 13:11:29 crc kubenswrapper[4986]: E1203 13:11:29.248827 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f46cea2-be35-4686-83dd-471af2dd6f01" containerName="extract-content" Dec 03 13:11:29 crc kubenswrapper[4986]: I1203 13:11:29.248839 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f46cea2-be35-4686-83dd-471af2dd6f01" containerName="extract-content" Dec 03 13:11:29 crc kubenswrapper[4986]: E1203 13:11:29.248854 4986 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="9f46cea2-be35-4686-83dd-471af2dd6f01" containerName="extract-utilities" Dec 03 13:11:29 crc kubenswrapper[4986]: I1203 13:11:29.248860 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f46cea2-be35-4686-83dd-471af2dd6f01" containerName="extract-utilities" Dec 03 13:11:29 crc kubenswrapper[4986]: E1203 13:11:29.248874 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f46cea2-be35-4686-83dd-471af2dd6f01" containerName="registry-server" Dec 03 13:11:29 crc kubenswrapper[4986]: I1203 13:11:29.248880 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f46cea2-be35-4686-83dd-471af2dd6f01" containerName="registry-server" Dec 03 13:11:29 crc kubenswrapper[4986]: I1203 13:11:29.248972 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f46cea2-be35-4686-83dd-471af2dd6f01" containerName="registry-server" Dec 03 13:11:29 crc kubenswrapper[4986]: I1203 13:11:29.249738 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flxqdt" Dec 03 13:11:29 crc kubenswrapper[4986]: I1203 13:11:29.252070 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 03 13:11:29 crc kubenswrapper[4986]: I1203 13:11:29.252541 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flxqdt"] Dec 03 13:11:29 crc kubenswrapper[4986]: I1203 13:11:29.365219 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wvph\" (UniqueName: \"kubernetes.io/projected/3e31ff03-19d4-45b9-a2a2-c80add3e095a-kube-api-access-4wvph\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flxqdt\" (UID: \"3e31ff03-19d4-45b9-a2a2-c80add3e095a\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flxqdt" 
Dec 03 13:11:29 crc kubenswrapper[4986]: I1203 13:11:29.365336 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3e31ff03-19d4-45b9-a2a2-c80add3e095a-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flxqdt\" (UID: \"3e31ff03-19d4-45b9-a2a2-c80add3e095a\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flxqdt" Dec 03 13:11:29 crc kubenswrapper[4986]: I1203 13:11:29.365370 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3e31ff03-19d4-45b9-a2a2-c80add3e095a-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flxqdt\" (UID: \"3e31ff03-19d4-45b9-a2a2-c80add3e095a\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flxqdt" Dec 03 13:11:29 crc kubenswrapper[4986]: I1203 13:11:29.466405 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3e31ff03-19d4-45b9-a2a2-c80add3e095a-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flxqdt\" (UID: \"3e31ff03-19d4-45b9-a2a2-c80add3e095a\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flxqdt" Dec 03 13:11:29 crc kubenswrapper[4986]: I1203 13:11:29.466451 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3e31ff03-19d4-45b9-a2a2-c80add3e095a-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flxqdt\" (UID: \"3e31ff03-19d4-45b9-a2a2-c80add3e095a\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flxqdt" Dec 03 13:11:29 crc kubenswrapper[4986]: I1203 13:11:29.466499 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-4wvph\" (UniqueName: \"kubernetes.io/projected/3e31ff03-19d4-45b9-a2a2-c80add3e095a-kube-api-access-4wvph\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flxqdt\" (UID: \"3e31ff03-19d4-45b9-a2a2-c80add3e095a\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flxqdt" Dec 03 13:11:29 crc kubenswrapper[4986]: I1203 13:11:29.466869 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3e31ff03-19d4-45b9-a2a2-c80add3e095a-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flxqdt\" (UID: \"3e31ff03-19d4-45b9-a2a2-c80add3e095a\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flxqdt" Dec 03 13:11:29 crc kubenswrapper[4986]: I1203 13:11:29.467005 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3e31ff03-19d4-45b9-a2a2-c80add3e095a-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flxqdt\" (UID: \"3e31ff03-19d4-45b9-a2a2-c80add3e095a\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flxqdt" Dec 03 13:11:29 crc kubenswrapper[4986]: I1203 13:11:29.496371 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wvph\" (UniqueName: \"kubernetes.io/projected/3e31ff03-19d4-45b9-a2a2-c80add3e095a-kube-api-access-4wvph\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flxqdt\" (UID: \"3e31ff03-19d4-45b9-a2a2-c80add3e095a\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flxqdt" Dec 03 13:11:29 crc kubenswrapper[4986]: I1203 13:11:29.569488 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flxqdt" Dec 03 13:11:30 crc kubenswrapper[4986]: I1203 13:11:30.055019 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flxqdt"] Dec 03 13:11:30 crc kubenswrapper[4986]: I1203 13:11:30.547993 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flxqdt" event={"ID":"3e31ff03-19d4-45b9-a2a2-c80add3e095a","Type":"ContainerStarted","Data":"8f24dec6f2bdc5bba9c7f0cc6c17da175678b20cd3c537b851c6f8a5e5106c48"} Dec 03 13:11:30 crc kubenswrapper[4986]: I1203 13:11:30.548272 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flxqdt" event={"ID":"3e31ff03-19d4-45b9-a2a2-c80add3e095a","Type":"ContainerStarted","Data":"cdda42795763b9244f3222d6e0cd7f37b2b08643936079bf72be95eff9c17574"} Dec 03 13:11:31 crc kubenswrapper[4986]: I1203 13:11:31.556171 4986 generic.go:334] "Generic (PLEG): container finished" podID="3e31ff03-19d4-45b9-a2a2-c80add3e095a" containerID="8f24dec6f2bdc5bba9c7f0cc6c17da175678b20cd3c537b851c6f8a5e5106c48" exitCode=0 Dec 03 13:11:31 crc kubenswrapper[4986]: I1203 13:11:31.556344 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flxqdt" event={"ID":"3e31ff03-19d4-45b9-a2a2-c80add3e095a","Type":"ContainerDied","Data":"8f24dec6f2bdc5bba9c7f0cc6c17da175678b20cd3c537b851c6f8a5e5106c48"} Dec 03 13:11:31 crc kubenswrapper[4986]: I1203 13:11:31.602663 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-m58bx"] Dec 03 13:11:31 crc kubenswrapper[4986]: I1203 13:11:31.603979 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m58bx" Dec 03 13:11:31 crc kubenswrapper[4986]: I1203 13:11:31.649243 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m58bx"] Dec 03 13:11:31 crc kubenswrapper[4986]: I1203 13:11:31.696898 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34555a47-c152-4719-9274-8e6a8680341d-catalog-content\") pod \"redhat-operators-m58bx\" (UID: \"34555a47-c152-4719-9274-8e6a8680341d\") " pod="openshift-marketplace/redhat-operators-m58bx" Dec 03 13:11:31 crc kubenswrapper[4986]: I1203 13:11:31.696949 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhz4s\" (UniqueName: \"kubernetes.io/projected/34555a47-c152-4719-9274-8e6a8680341d-kube-api-access-mhz4s\") pod \"redhat-operators-m58bx\" (UID: \"34555a47-c152-4719-9274-8e6a8680341d\") " pod="openshift-marketplace/redhat-operators-m58bx" Dec 03 13:11:31 crc kubenswrapper[4986]: I1203 13:11:31.697013 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34555a47-c152-4719-9274-8e6a8680341d-utilities\") pod \"redhat-operators-m58bx\" (UID: \"34555a47-c152-4719-9274-8e6a8680341d\") " pod="openshift-marketplace/redhat-operators-m58bx" Dec 03 13:11:31 crc kubenswrapper[4986]: I1203 13:11:31.798423 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34555a47-c152-4719-9274-8e6a8680341d-catalog-content\") pod \"redhat-operators-m58bx\" (UID: \"34555a47-c152-4719-9274-8e6a8680341d\") " pod="openshift-marketplace/redhat-operators-m58bx" Dec 03 13:11:31 crc kubenswrapper[4986]: I1203 13:11:31.798482 4986 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-mhz4s\" (UniqueName: \"kubernetes.io/projected/34555a47-c152-4719-9274-8e6a8680341d-kube-api-access-mhz4s\") pod \"redhat-operators-m58bx\" (UID: \"34555a47-c152-4719-9274-8e6a8680341d\") " pod="openshift-marketplace/redhat-operators-m58bx" Dec 03 13:11:31 crc kubenswrapper[4986]: I1203 13:11:31.798549 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34555a47-c152-4719-9274-8e6a8680341d-utilities\") pod \"redhat-operators-m58bx\" (UID: \"34555a47-c152-4719-9274-8e6a8680341d\") " pod="openshift-marketplace/redhat-operators-m58bx" Dec 03 13:11:31 crc kubenswrapper[4986]: I1203 13:11:31.798985 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34555a47-c152-4719-9274-8e6a8680341d-utilities\") pod \"redhat-operators-m58bx\" (UID: \"34555a47-c152-4719-9274-8e6a8680341d\") " pod="openshift-marketplace/redhat-operators-m58bx" Dec 03 13:11:31 crc kubenswrapper[4986]: I1203 13:11:31.799209 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34555a47-c152-4719-9274-8e6a8680341d-catalog-content\") pod \"redhat-operators-m58bx\" (UID: \"34555a47-c152-4719-9274-8e6a8680341d\") " pod="openshift-marketplace/redhat-operators-m58bx" Dec 03 13:11:31 crc kubenswrapper[4986]: I1203 13:11:31.820832 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhz4s\" (UniqueName: \"kubernetes.io/projected/34555a47-c152-4719-9274-8e6a8680341d-kube-api-access-mhz4s\") pod \"redhat-operators-m58bx\" (UID: \"34555a47-c152-4719-9274-8e6a8680341d\") " pod="openshift-marketplace/redhat-operators-m58bx" Dec 03 13:11:31 crc kubenswrapper[4986]: I1203 13:11:31.948205 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m58bx" Dec 03 13:11:32 crc kubenswrapper[4986]: I1203 13:11:32.159880 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m58bx"] Dec 03 13:11:32 crc kubenswrapper[4986]: W1203 13:11:32.167329 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34555a47_c152_4719_9274_8e6a8680341d.slice/crio-97f5b262767d8bb29681265412836fcf7bb55ba98f8fcf9e93ff8fc03fbfbf13 WatchSource:0}: Error finding container 97f5b262767d8bb29681265412836fcf7bb55ba98f8fcf9e93ff8fc03fbfbf13: Status 404 returned error can't find the container with id 97f5b262767d8bb29681265412836fcf7bb55ba98f8fcf9e93ff8fc03fbfbf13 Dec 03 13:11:32 crc kubenswrapper[4986]: I1203 13:11:32.563359 4986 generic.go:334] "Generic (PLEG): container finished" podID="34555a47-c152-4719-9274-8e6a8680341d" containerID="3bb8a717e9dbdd9e363f0b51afaf963681bf3227067079c11d1d2b8b53c17fa8" exitCode=0 Dec 03 13:11:32 crc kubenswrapper[4986]: I1203 13:11:32.563428 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m58bx" event={"ID":"34555a47-c152-4719-9274-8e6a8680341d","Type":"ContainerDied","Data":"3bb8a717e9dbdd9e363f0b51afaf963681bf3227067079c11d1d2b8b53c17fa8"} Dec 03 13:11:32 crc kubenswrapper[4986]: I1203 13:11:32.565595 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m58bx" event={"ID":"34555a47-c152-4719-9274-8e6a8680341d","Type":"ContainerStarted","Data":"97f5b262767d8bb29681265412836fcf7bb55ba98f8fcf9e93ff8fc03fbfbf13"} Dec 03 13:11:33 crc kubenswrapper[4986]: I1203 13:11:33.491252 4986 patch_prober.go:28] interesting pod/machine-config-daemon-xggpj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Dec 03 13:11:33 crc kubenswrapper[4986]: I1203 13:11:33.491326 4986 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 13:11:33 crc kubenswrapper[4986]: I1203 13:11:33.572102 4986 generic.go:334] "Generic (PLEG): container finished" podID="3e31ff03-19d4-45b9-a2a2-c80add3e095a" containerID="0edf6efd5b1b784ec1cbb2cb82fd37ec6dada272ba62fc3d1c7ec3763fa1952d" exitCode=0 Dec 03 13:11:33 crc kubenswrapper[4986]: I1203 13:11:33.572160 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flxqdt" event={"ID":"3e31ff03-19d4-45b9-a2a2-c80add3e095a","Type":"ContainerDied","Data":"0edf6efd5b1b784ec1cbb2cb82fd37ec6dada272ba62fc3d1c7ec3763fa1952d"} Dec 03 13:11:34 crc kubenswrapper[4986]: I1203 13:11:34.580880 4986 generic.go:334] "Generic (PLEG): container finished" podID="34555a47-c152-4719-9274-8e6a8680341d" containerID="286701e0afce8de0d04d4e5630c400e671f971b9ad3664406ca71f5f7b74884c" exitCode=0 Dec 03 13:11:34 crc kubenswrapper[4986]: I1203 13:11:34.580982 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m58bx" event={"ID":"34555a47-c152-4719-9274-8e6a8680341d","Type":"ContainerDied","Data":"286701e0afce8de0d04d4e5630c400e671f971b9ad3664406ca71f5f7b74884c"} Dec 03 13:11:34 crc kubenswrapper[4986]: I1203 13:11:34.583614 4986 generic.go:334] "Generic (PLEG): container finished" podID="3e31ff03-19d4-45b9-a2a2-c80add3e095a" containerID="bafff8f4e86b8fc7d6b9e22f93db62ab2b2e88367153501012d947995d917067" exitCode=0 Dec 03 13:11:34 crc kubenswrapper[4986]: I1203 13:11:34.583683 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flxqdt" event={"ID":"3e31ff03-19d4-45b9-a2a2-c80add3e095a","Type":"ContainerDied","Data":"bafff8f4e86b8fc7d6b9e22f93db62ab2b2e88367153501012d947995d917067"} Dec 03 13:11:35 crc kubenswrapper[4986]: I1203 13:11:35.592749 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m58bx" event={"ID":"34555a47-c152-4719-9274-8e6a8680341d","Type":"ContainerStarted","Data":"ad0cea859afefd2afe028150628f6de81f0559ed959e8622e0dfdae519eabe8c"} Dec 03 13:11:35 crc kubenswrapper[4986]: I1203 13:11:35.612169 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-m58bx" podStartSLOduration=2.163998801 podStartE2EDuration="4.612142714s" podCreationTimestamp="2025-12-03 13:11:31 +0000 UTC" firstStartedPulling="2025-12-03 13:11:32.56505393 +0000 UTC m=+952.031485121" lastFinishedPulling="2025-12-03 13:11:35.013197833 +0000 UTC m=+954.479629034" observedRunningTime="2025-12-03 13:11:35.609131674 +0000 UTC m=+955.075562885" watchObservedRunningTime="2025-12-03 13:11:35.612142714 +0000 UTC m=+955.078573925" Dec 03 13:11:35 crc kubenswrapper[4986]: I1203 13:11:35.837645 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flxqdt" Dec 03 13:11:35 crc kubenswrapper[4986]: I1203 13:11:35.947312 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wvph\" (UniqueName: \"kubernetes.io/projected/3e31ff03-19d4-45b9-a2a2-c80add3e095a-kube-api-access-4wvph\") pod \"3e31ff03-19d4-45b9-a2a2-c80add3e095a\" (UID: \"3e31ff03-19d4-45b9-a2a2-c80add3e095a\") " Dec 03 13:11:35 crc kubenswrapper[4986]: I1203 13:11:35.947417 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3e31ff03-19d4-45b9-a2a2-c80add3e095a-bundle\") pod \"3e31ff03-19d4-45b9-a2a2-c80add3e095a\" (UID: \"3e31ff03-19d4-45b9-a2a2-c80add3e095a\") " Dec 03 13:11:35 crc kubenswrapper[4986]: I1203 13:11:35.947440 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3e31ff03-19d4-45b9-a2a2-c80add3e095a-util\") pod \"3e31ff03-19d4-45b9-a2a2-c80add3e095a\" (UID: \"3e31ff03-19d4-45b9-a2a2-c80add3e095a\") " Dec 03 13:11:36 crc kubenswrapper[4986]: I1203 13:11:36.013357 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e31ff03-19d4-45b9-a2a2-c80add3e095a-kube-api-access-4wvph" (OuterVolumeSpecName: "kube-api-access-4wvph") pod "3e31ff03-19d4-45b9-a2a2-c80add3e095a" (UID: "3e31ff03-19d4-45b9-a2a2-c80add3e095a"). InnerVolumeSpecName "kube-api-access-4wvph". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:11:36 crc kubenswrapper[4986]: I1203 13:11:36.013972 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e31ff03-19d4-45b9-a2a2-c80add3e095a-bundle" (OuterVolumeSpecName: "bundle") pod "3e31ff03-19d4-45b9-a2a2-c80add3e095a" (UID: "3e31ff03-19d4-45b9-a2a2-c80add3e095a"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:11:36 crc kubenswrapper[4986]: I1203 13:11:36.048729 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wvph\" (UniqueName: \"kubernetes.io/projected/3e31ff03-19d4-45b9-a2a2-c80add3e095a-kube-api-access-4wvph\") on node \"crc\" DevicePath \"\"" Dec 03 13:11:36 crc kubenswrapper[4986]: I1203 13:11:36.048769 4986 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3e31ff03-19d4-45b9-a2a2-c80add3e095a-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 13:11:36 crc kubenswrapper[4986]: I1203 13:11:36.600593 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flxqdt" event={"ID":"3e31ff03-19d4-45b9-a2a2-c80add3e095a","Type":"ContainerDied","Data":"cdda42795763b9244f3222d6e0cd7f37b2b08643936079bf72be95eff9c17574"} Dec 03 13:11:36 crc kubenswrapper[4986]: I1203 13:11:36.600632 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flxqdt" Dec 03 13:11:36 crc kubenswrapper[4986]: I1203 13:11:36.600666 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cdda42795763b9244f3222d6e0cd7f37b2b08643936079bf72be95eff9c17574" Dec 03 13:11:36 crc kubenswrapper[4986]: I1203 13:11:36.668521 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e31ff03-19d4-45b9-a2a2-c80add3e095a-util" (OuterVolumeSpecName: "util") pod "3e31ff03-19d4-45b9-a2a2-c80add3e095a" (UID: "3e31ff03-19d4-45b9-a2a2-c80add3e095a"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:11:36 crc kubenswrapper[4986]: I1203 13:11:36.758012 4986 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3e31ff03-19d4-45b9-a2a2-c80add3e095a-util\") on node \"crc\" DevicePath \"\"" Dec 03 13:11:39 crc kubenswrapper[4986]: I1203 13:11:39.713521 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-zznsq"] Dec 03 13:11:39 crc kubenswrapper[4986]: E1203 13:11:39.713978 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e31ff03-19d4-45b9-a2a2-c80add3e095a" containerName="pull" Dec 03 13:11:39 crc kubenswrapper[4986]: I1203 13:11:39.713991 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e31ff03-19d4-45b9-a2a2-c80add3e095a" containerName="pull" Dec 03 13:11:39 crc kubenswrapper[4986]: E1203 13:11:39.714001 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e31ff03-19d4-45b9-a2a2-c80add3e095a" containerName="extract" Dec 03 13:11:39 crc kubenswrapper[4986]: I1203 13:11:39.714007 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e31ff03-19d4-45b9-a2a2-c80add3e095a" containerName="extract" Dec 03 13:11:39 crc kubenswrapper[4986]: E1203 13:11:39.714017 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e31ff03-19d4-45b9-a2a2-c80add3e095a" containerName="util" Dec 03 13:11:39 crc kubenswrapper[4986]: I1203 13:11:39.714024 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e31ff03-19d4-45b9-a2a2-c80add3e095a" containerName="util" Dec 03 13:11:39 crc kubenswrapper[4986]: I1203 13:11:39.714122 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e31ff03-19d4-45b9-a2a2-c80add3e095a" containerName="extract" Dec 03 13:11:39 crc kubenswrapper[4986]: I1203 13:11:39.714508 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-zznsq" Dec 03 13:11:39 crc kubenswrapper[4986]: I1203 13:11:39.717594 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Dec 03 13:11:39 crc kubenswrapper[4986]: I1203 13:11:39.718059 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Dec 03 13:11:39 crc kubenswrapper[4986]: I1203 13:11:39.718973 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-qmnxb" Dec 03 13:11:39 crc kubenswrapper[4986]: I1203 13:11:39.734784 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-zznsq"] Dec 03 13:11:39 crc kubenswrapper[4986]: I1203 13:11:39.811724 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7578q\" (UniqueName: \"kubernetes.io/projected/eed2e928-3f77-4b48-8f9d-9cd923d4f708-kube-api-access-7578q\") pod \"nmstate-operator-5b5b58f5c8-zznsq\" (UID: \"eed2e928-3f77-4b48-8f9d-9cd923d4f708\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-zznsq" Dec 03 13:11:39 crc kubenswrapper[4986]: I1203 13:11:39.912874 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7578q\" (UniqueName: \"kubernetes.io/projected/eed2e928-3f77-4b48-8f9d-9cd923d4f708-kube-api-access-7578q\") pod \"nmstate-operator-5b5b58f5c8-zznsq\" (UID: \"eed2e928-3f77-4b48-8f9d-9cd923d4f708\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-zznsq" Dec 03 13:11:39 crc kubenswrapper[4986]: I1203 13:11:39.941010 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7578q\" (UniqueName: \"kubernetes.io/projected/eed2e928-3f77-4b48-8f9d-9cd923d4f708-kube-api-access-7578q\") pod \"nmstate-operator-5b5b58f5c8-zznsq\" (UID: 
\"eed2e928-3f77-4b48-8f9d-9cd923d4f708\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-zznsq" Dec 03 13:11:40 crc kubenswrapper[4986]: I1203 13:11:40.036255 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-zznsq" Dec 03 13:11:40 crc kubenswrapper[4986]: I1203 13:11:40.481041 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-zznsq"] Dec 03 13:11:40 crc kubenswrapper[4986]: I1203 13:11:40.622225 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-zznsq" event={"ID":"eed2e928-3f77-4b48-8f9d-9cd923d4f708","Type":"ContainerStarted","Data":"6443a173d3d2ec2af0b7cdec784867913aabbdb50dcec2afa04f316b68ee03a3"} Dec 03 13:11:41 crc kubenswrapper[4986]: I1203 13:11:41.949074 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-m58bx" Dec 03 13:11:41 crc kubenswrapper[4986]: I1203 13:11:41.949138 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-m58bx" Dec 03 13:11:42 crc kubenswrapper[4986]: I1203 13:11:42.012115 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-m58bx" Dec 03 13:11:42 crc kubenswrapper[4986]: I1203 13:11:42.678936 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-m58bx" Dec 03 13:11:45 crc kubenswrapper[4986]: I1203 13:11:45.660391 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-zznsq" event={"ID":"eed2e928-3f77-4b48-8f9d-9cd923d4f708","Type":"ContainerStarted","Data":"a4e40c1d66a0fa1959df880ab053c94d1e7610a990b10f8b6922aa867db9b690"} Dec 03 13:11:45 crc kubenswrapper[4986]: I1203 13:11:45.683044 4986 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-zznsq" podStartSLOduration=2.007812188 podStartE2EDuration="6.683022333s" podCreationTimestamp="2025-12-03 13:11:39 +0000 UTC" firstStartedPulling="2025-12-03 13:11:40.496197892 +0000 UTC m=+959.962629083" lastFinishedPulling="2025-12-03 13:11:45.171408037 +0000 UTC m=+964.637839228" observedRunningTime="2025-12-03 13:11:45.681783761 +0000 UTC m=+965.148214952" watchObservedRunningTime="2025-12-03 13:11:45.683022333 +0000 UTC m=+965.149453534" Dec 03 13:11:45 crc kubenswrapper[4986]: I1203 13:11:45.986924 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m58bx"] Dec 03 13:11:45 crc kubenswrapper[4986]: I1203 13:11:45.987187 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-m58bx" podUID="34555a47-c152-4719-9274-8e6a8680341d" containerName="registry-server" containerID="cri-o://ad0cea859afefd2afe028150628f6de81f0559ed959e8622e0dfdae519eabe8c" gracePeriod=2 Dec 03 13:11:46 crc kubenswrapper[4986]: I1203 13:11:46.344437 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m58bx" Dec 03 13:11:46 crc kubenswrapper[4986]: I1203 13:11:46.492332 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34555a47-c152-4719-9274-8e6a8680341d-utilities\") pod \"34555a47-c152-4719-9274-8e6a8680341d\" (UID: \"34555a47-c152-4719-9274-8e6a8680341d\") " Dec 03 13:11:46 crc kubenswrapper[4986]: I1203 13:11:46.492400 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34555a47-c152-4719-9274-8e6a8680341d-catalog-content\") pod \"34555a47-c152-4719-9274-8e6a8680341d\" (UID: \"34555a47-c152-4719-9274-8e6a8680341d\") " Dec 03 13:11:46 crc kubenswrapper[4986]: I1203 13:11:46.492438 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhz4s\" (UniqueName: \"kubernetes.io/projected/34555a47-c152-4719-9274-8e6a8680341d-kube-api-access-mhz4s\") pod \"34555a47-c152-4719-9274-8e6a8680341d\" (UID: \"34555a47-c152-4719-9274-8e6a8680341d\") " Dec 03 13:11:46 crc kubenswrapper[4986]: I1203 13:11:46.495092 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34555a47-c152-4719-9274-8e6a8680341d-utilities" (OuterVolumeSpecName: "utilities") pod "34555a47-c152-4719-9274-8e6a8680341d" (UID: "34555a47-c152-4719-9274-8e6a8680341d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:11:46 crc kubenswrapper[4986]: I1203 13:11:46.497927 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34555a47-c152-4719-9274-8e6a8680341d-kube-api-access-mhz4s" (OuterVolumeSpecName: "kube-api-access-mhz4s") pod "34555a47-c152-4719-9274-8e6a8680341d" (UID: "34555a47-c152-4719-9274-8e6a8680341d"). InnerVolumeSpecName "kube-api-access-mhz4s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:11:46 crc kubenswrapper[4986]: I1203 13:11:46.592909 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34555a47-c152-4719-9274-8e6a8680341d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "34555a47-c152-4719-9274-8e6a8680341d" (UID: "34555a47-c152-4719-9274-8e6a8680341d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:11:46 crc kubenswrapper[4986]: I1203 13:11:46.593522 4986 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34555a47-c152-4719-9274-8e6a8680341d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 13:11:46 crc kubenswrapper[4986]: I1203 13:11:46.593543 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhz4s\" (UniqueName: \"kubernetes.io/projected/34555a47-c152-4719-9274-8e6a8680341d-kube-api-access-mhz4s\") on node \"crc\" DevicePath \"\"" Dec 03 13:11:46 crc kubenswrapper[4986]: I1203 13:11:46.593555 4986 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34555a47-c152-4719-9274-8e6a8680341d-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 13:11:46 crc kubenswrapper[4986]: I1203 13:11:46.667790 4986 generic.go:334] "Generic (PLEG): container finished" podID="34555a47-c152-4719-9274-8e6a8680341d" containerID="ad0cea859afefd2afe028150628f6de81f0559ed959e8622e0dfdae519eabe8c" exitCode=0 Dec 03 13:11:46 crc kubenswrapper[4986]: I1203 13:11:46.667849 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m58bx" Dec 03 13:11:46 crc kubenswrapper[4986]: I1203 13:11:46.667849 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m58bx" event={"ID":"34555a47-c152-4719-9274-8e6a8680341d","Type":"ContainerDied","Data":"ad0cea859afefd2afe028150628f6de81f0559ed959e8622e0dfdae519eabe8c"} Dec 03 13:11:46 crc kubenswrapper[4986]: I1203 13:11:46.667902 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m58bx" event={"ID":"34555a47-c152-4719-9274-8e6a8680341d","Type":"ContainerDied","Data":"97f5b262767d8bb29681265412836fcf7bb55ba98f8fcf9e93ff8fc03fbfbf13"} Dec 03 13:11:46 crc kubenswrapper[4986]: I1203 13:11:46.667927 4986 scope.go:117] "RemoveContainer" containerID="ad0cea859afefd2afe028150628f6de81f0559ed959e8622e0dfdae519eabe8c" Dec 03 13:11:46 crc kubenswrapper[4986]: I1203 13:11:46.691524 4986 scope.go:117] "RemoveContainer" containerID="286701e0afce8de0d04d4e5630c400e671f971b9ad3664406ca71f5f7b74884c" Dec 03 13:11:46 crc kubenswrapper[4986]: I1203 13:11:46.703200 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m58bx"] Dec 03 13:11:46 crc kubenswrapper[4986]: I1203 13:11:46.709699 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-m58bx"] Dec 03 13:11:46 crc kubenswrapper[4986]: I1203 13:11:46.730075 4986 scope.go:117] "RemoveContainer" containerID="3bb8a717e9dbdd9e363f0b51afaf963681bf3227067079c11d1d2b8b53c17fa8" Dec 03 13:11:46 crc kubenswrapper[4986]: I1203 13:11:46.746205 4986 scope.go:117] "RemoveContainer" containerID="ad0cea859afefd2afe028150628f6de81f0559ed959e8622e0dfdae519eabe8c" Dec 03 13:11:46 crc kubenswrapper[4986]: E1203 13:11:46.746738 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ad0cea859afefd2afe028150628f6de81f0559ed959e8622e0dfdae519eabe8c\": container with ID starting with ad0cea859afefd2afe028150628f6de81f0559ed959e8622e0dfdae519eabe8c not found: ID does not exist" containerID="ad0cea859afefd2afe028150628f6de81f0559ed959e8622e0dfdae519eabe8c" Dec 03 13:11:46 crc kubenswrapper[4986]: I1203 13:11:46.746793 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad0cea859afefd2afe028150628f6de81f0559ed959e8622e0dfdae519eabe8c"} err="failed to get container status \"ad0cea859afefd2afe028150628f6de81f0559ed959e8622e0dfdae519eabe8c\": rpc error: code = NotFound desc = could not find container \"ad0cea859afefd2afe028150628f6de81f0559ed959e8622e0dfdae519eabe8c\": container with ID starting with ad0cea859afefd2afe028150628f6de81f0559ed959e8622e0dfdae519eabe8c not found: ID does not exist" Dec 03 13:11:46 crc kubenswrapper[4986]: I1203 13:11:46.746827 4986 scope.go:117] "RemoveContainer" containerID="286701e0afce8de0d04d4e5630c400e671f971b9ad3664406ca71f5f7b74884c" Dec 03 13:11:46 crc kubenswrapper[4986]: E1203 13:11:46.747201 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"286701e0afce8de0d04d4e5630c400e671f971b9ad3664406ca71f5f7b74884c\": container with ID starting with 286701e0afce8de0d04d4e5630c400e671f971b9ad3664406ca71f5f7b74884c not found: ID does not exist" containerID="286701e0afce8de0d04d4e5630c400e671f971b9ad3664406ca71f5f7b74884c" Dec 03 13:11:46 crc kubenswrapper[4986]: I1203 13:11:46.747229 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"286701e0afce8de0d04d4e5630c400e671f971b9ad3664406ca71f5f7b74884c"} err="failed to get container status \"286701e0afce8de0d04d4e5630c400e671f971b9ad3664406ca71f5f7b74884c\": rpc error: code = NotFound desc = could not find container \"286701e0afce8de0d04d4e5630c400e671f971b9ad3664406ca71f5f7b74884c\": container with ID 
starting with 286701e0afce8de0d04d4e5630c400e671f971b9ad3664406ca71f5f7b74884c not found: ID does not exist" Dec 03 13:11:46 crc kubenswrapper[4986]: I1203 13:11:46.747247 4986 scope.go:117] "RemoveContainer" containerID="3bb8a717e9dbdd9e363f0b51afaf963681bf3227067079c11d1d2b8b53c17fa8" Dec 03 13:11:46 crc kubenswrapper[4986]: E1203 13:11:46.747855 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bb8a717e9dbdd9e363f0b51afaf963681bf3227067079c11d1d2b8b53c17fa8\": container with ID starting with 3bb8a717e9dbdd9e363f0b51afaf963681bf3227067079c11d1d2b8b53c17fa8 not found: ID does not exist" containerID="3bb8a717e9dbdd9e363f0b51afaf963681bf3227067079c11d1d2b8b53c17fa8" Dec 03 13:11:46 crc kubenswrapper[4986]: I1203 13:11:46.747881 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bb8a717e9dbdd9e363f0b51afaf963681bf3227067079c11d1d2b8b53c17fa8"} err="failed to get container status \"3bb8a717e9dbdd9e363f0b51afaf963681bf3227067079c11d1d2b8b53c17fa8\": rpc error: code = NotFound desc = could not find container \"3bb8a717e9dbdd9e363f0b51afaf963681bf3227067079c11d1d2b8b53c17fa8\": container with ID starting with 3bb8a717e9dbdd9e363f0b51afaf963681bf3227067079c11d1d2b8b53c17fa8 not found: ID does not exist" Dec 03 13:11:46 crc kubenswrapper[4986]: I1203 13:11:46.951944 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34555a47-c152-4719-9274-8e6a8680341d" path="/var/lib/kubelet/pods/34555a47-c152-4719-9274-8e6a8680341d/volumes" Dec 03 13:11:49 crc kubenswrapper[4986]: I1203 13:11:49.570322 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-ll678"] Dec 03 13:11:49 crc kubenswrapper[4986]: E1203 13:11:49.571208 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34555a47-c152-4719-9274-8e6a8680341d" containerName="registry-server" Dec 03 13:11:49 crc 
kubenswrapper[4986]: I1203 13:11:49.571241 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="34555a47-c152-4719-9274-8e6a8680341d" containerName="registry-server" Dec 03 13:11:49 crc kubenswrapper[4986]: E1203 13:11:49.571269 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34555a47-c152-4719-9274-8e6a8680341d" containerName="extract-content" Dec 03 13:11:49 crc kubenswrapper[4986]: I1203 13:11:49.574343 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="34555a47-c152-4719-9274-8e6a8680341d" containerName="extract-content" Dec 03 13:11:49 crc kubenswrapper[4986]: E1203 13:11:49.574417 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34555a47-c152-4719-9274-8e6a8680341d" containerName="extract-utilities" Dec 03 13:11:49 crc kubenswrapper[4986]: I1203 13:11:49.574439 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="34555a47-c152-4719-9274-8e6a8680341d" containerName="extract-utilities" Dec 03 13:11:49 crc kubenswrapper[4986]: I1203 13:11:49.574802 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="34555a47-c152-4719-9274-8e6a8680341d" containerName="registry-server" Dec 03 13:11:49 crc kubenswrapper[4986]: I1203 13:11:49.576087 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-ll678" Dec 03 13:11:49 crc kubenswrapper[4986]: I1203 13:11:49.579990 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-bz2lf" Dec 03 13:11:49 crc kubenswrapper[4986]: I1203 13:11:49.595335 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-rgwgv"] Dec 03 13:11:49 crc kubenswrapper[4986]: I1203 13:11:49.596439 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-rgwgv" Dec 03 13:11:49 crc kubenswrapper[4986]: I1203 13:11:49.604182 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-ll678"] Dec 03 13:11:49 crc kubenswrapper[4986]: I1203 13:11:49.619636 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-zztq9"] Dec 03 13:11:49 crc kubenswrapper[4986]: I1203 13:11:49.621883 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-zztq9" Dec 03 13:11:49 crc kubenswrapper[4986]: I1203 13:11:49.625469 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Dec 03 13:11:49 crc kubenswrapper[4986]: I1203 13:11:49.634222 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-zztq9"] Dec 03 13:11:49 crc kubenswrapper[4986]: I1203 13:11:49.710131 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-8lb6z"] Dec 03 13:11:49 crc kubenswrapper[4986]: I1203 13:11:49.711315 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-8lb6z" Dec 03 13:11:49 crc kubenswrapper[4986]: I1203 13:11:49.713700 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Dec 03 13:11:49 crc kubenswrapper[4986]: I1203 13:11:49.717688 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-hps8h" Dec 03 13:11:49 crc kubenswrapper[4986]: I1203 13:11:49.717895 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Dec 03 13:11:49 crc kubenswrapper[4986]: I1203 13:11:49.722032 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-8lb6z"] Dec 03 13:11:49 crc kubenswrapper[4986]: I1203 13:11:49.733145 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c233dd25-afb2-4ee7-b907-c79d08e02af6-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-zztq9\" (UID: \"c233dd25-afb2-4ee7-b907-c79d08e02af6\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-zztq9" Dec 03 13:11:49 crc kubenswrapper[4986]: I1203 13:11:49.733193 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqfqj\" (UniqueName: \"kubernetes.io/projected/cb8d169c-9c96-403d-9f6b-357dd8ccc78a-kube-api-access-cqfqj\") pod \"nmstate-metrics-7f946cbc9-ll678\" (UID: \"cb8d169c-9c96-403d-9f6b-357dd8ccc78a\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-ll678" Dec 03 13:11:49 crc kubenswrapper[4986]: I1203 13:11:49.733221 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/bd89b324-ae85-4e4e-b40b-a76a7ae8e498-ovs-socket\") pod \"nmstate-handler-rgwgv\" (UID: \"bd89b324-ae85-4e4e-b40b-a76a7ae8e498\") " 
pod="openshift-nmstate/nmstate-handler-rgwgv" Dec 03 13:11:49 crc kubenswrapper[4986]: I1203 13:11:49.733248 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/bd89b324-ae85-4e4e-b40b-a76a7ae8e498-dbus-socket\") pod \"nmstate-handler-rgwgv\" (UID: \"bd89b324-ae85-4e4e-b40b-a76a7ae8e498\") " pod="openshift-nmstate/nmstate-handler-rgwgv" Dec 03 13:11:49 crc kubenswrapper[4986]: I1203 13:11:49.733327 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86w6m\" (UniqueName: \"kubernetes.io/projected/c233dd25-afb2-4ee7-b907-c79d08e02af6-kube-api-access-86w6m\") pod \"nmstate-webhook-5f6d4c5ccb-zztq9\" (UID: \"c233dd25-afb2-4ee7-b907-c79d08e02af6\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-zztq9" Dec 03 13:11:49 crc kubenswrapper[4986]: I1203 13:11:49.733381 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n24tb\" (UniqueName: \"kubernetes.io/projected/bd89b324-ae85-4e4e-b40b-a76a7ae8e498-kube-api-access-n24tb\") pod \"nmstate-handler-rgwgv\" (UID: \"bd89b324-ae85-4e4e-b40b-a76a7ae8e498\") " pod="openshift-nmstate/nmstate-handler-rgwgv" Dec 03 13:11:49 crc kubenswrapper[4986]: I1203 13:11:49.733411 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/bd89b324-ae85-4e4e-b40b-a76a7ae8e498-nmstate-lock\") pod \"nmstate-handler-rgwgv\" (UID: \"bd89b324-ae85-4e4e-b40b-a76a7ae8e498\") " pod="openshift-nmstate/nmstate-handler-rgwgv" Dec 03 13:11:49 crc kubenswrapper[4986]: I1203 13:11:49.834467 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n24tb\" (UniqueName: \"kubernetes.io/projected/bd89b324-ae85-4e4e-b40b-a76a7ae8e498-kube-api-access-n24tb\") pod \"nmstate-handler-rgwgv\" 
(UID: \"bd89b324-ae85-4e4e-b40b-a76a7ae8e498\") " pod="openshift-nmstate/nmstate-handler-rgwgv" Dec 03 13:11:49 crc kubenswrapper[4986]: I1203 13:11:49.834517 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/bd89b324-ae85-4e4e-b40b-a76a7ae8e498-nmstate-lock\") pod \"nmstate-handler-rgwgv\" (UID: \"bd89b324-ae85-4e4e-b40b-a76a7ae8e498\") " pod="openshift-nmstate/nmstate-handler-rgwgv" Dec 03 13:11:49 crc kubenswrapper[4986]: I1203 13:11:49.834551 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/c4ff9001-bea2-41b9-8820-0c46e15b2fbb-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-8lb6z\" (UID: \"c4ff9001-bea2-41b9-8820-0c46e15b2fbb\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-8lb6z" Dec 03 13:11:49 crc kubenswrapper[4986]: I1203 13:11:49.834582 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crslp\" (UniqueName: \"kubernetes.io/projected/c4ff9001-bea2-41b9-8820-0c46e15b2fbb-kube-api-access-crslp\") pod \"nmstate-console-plugin-7fbb5f6569-8lb6z\" (UID: \"c4ff9001-bea2-41b9-8820-0c46e15b2fbb\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-8lb6z" Dec 03 13:11:49 crc kubenswrapper[4986]: I1203 13:11:49.834621 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c233dd25-afb2-4ee7-b907-c79d08e02af6-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-zztq9\" (UID: \"c233dd25-afb2-4ee7-b907-c79d08e02af6\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-zztq9" Dec 03 13:11:49 crc kubenswrapper[4986]: I1203 13:11:49.834681 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: 
\"kubernetes.io/host-path/bd89b324-ae85-4e4e-b40b-a76a7ae8e498-nmstate-lock\") pod \"nmstate-handler-rgwgv\" (UID: \"bd89b324-ae85-4e4e-b40b-a76a7ae8e498\") " pod="openshift-nmstate/nmstate-handler-rgwgv" Dec 03 13:11:49 crc kubenswrapper[4986]: I1203 13:11:49.834778 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqfqj\" (UniqueName: \"kubernetes.io/projected/cb8d169c-9c96-403d-9f6b-357dd8ccc78a-kube-api-access-cqfqj\") pod \"nmstate-metrics-7f946cbc9-ll678\" (UID: \"cb8d169c-9c96-403d-9f6b-357dd8ccc78a\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-ll678" Dec 03 13:11:49 crc kubenswrapper[4986]: E1203 13:11:49.834845 4986 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Dec 03 13:11:49 crc kubenswrapper[4986]: I1203 13:11:49.834899 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/bd89b324-ae85-4e4e-b40b-a76a7ae8e498-ovs-socket\") pod \"nmstate-handler-rgwgv\" (UID: \"bd89b324-ae85-4e4e-b40b-a76a7ae8e498\") " pod="openshift-nmstate/nmstate-handler-rgwgv" Dec 03 13:11:49 crc kubenswrapper[4986]: I1203 13:11:49.834920 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/bd89b324-ae85-4e4e-b40b-a76a7ae8e498-dbus-socket\") pod \"nmstate-handler-rgwgv\" (UID: \"bd89b324-ae85-4e4e-b40b-a76a7ae8e498\") " pod="openshift-nmstate/nmstate-handler-rgwgv" Dec 03 13:11:49 crc kubenswrapper[4986]: E1203 13:11:49.834957 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c233dd25-afb2-4ee7-b907-c79d08e02af6-tls-key-pair podName:c233dd25-afb2-4ee7-b907-c79d08e02af6 nodeName:}" failed. No retries permitted until 2025-12-03 13:11:50.334931554 +0000 UTC m=+969.801362745 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/c233dd25-afb2-4ee7-b907-c79d08e02af6-tls-key-pair") pod "nmstate-webhook-5f6d4c5ccb-zztq9" (UID: "c233dd25-afb2-4ee7-b907-c79d08e02af6") : secret "openshift-nmstate-webhook" not found Dec 03 13:11:49 crc kubenswrapper[4986]: I1203 13:11:49.835035 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86w6m\" (UniqueName: \"kubernetes.io/projected/c233dd25-afb2-4ee7-b907-c79d08e02af6-kube-api-access-86w6m\") pod \"nmstate-webhook-5f6d4c5ccb-zztq9\" (UID: \"c233dd25-afb2-4ee7-b907-c79d08e02af6\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-zztq9" Dec 03 13:11:49 crc kubenswrapper[4986]: I1203 13:11:49.835040 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/bd89b324-ae85-4e4e-b40b-a76a7ae8e498-ovs-socket\") pod \"nmstate-handler-rgwgv\" (UID: \"bd89b324-ae85-4e4e-b40b-a76a7ae8e498\") " pod="openshift-nmstate/nmstate-handler-rgwgv" Dec 03 13:11:49 crc kubenswrapper[4986]: I1203 13:11:49.835082 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/c4ff9001-bea2-41b9-8820-0c46e15b2fbb-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-8lb6z\" (UID: \"c4ff9001-bea2-41b9-8820-0c46e15b2fbb\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-8lb6z" Dec 03 13:11:49 crc kubenswrapper[4986]: I1203 13:11:49.835344 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/bd89b324-ae85-4e4e-b40b-a76a7ae8e498-dbus-socket\") pod \"nmstate-handler-rgwgv\" (UID: \"bd89b324-ae85-4e4e-b40b-a76a7ae8e498\") " pod="openshift-nmstate/nmstate-handler-rgwgv" Dec 03 13:11:49 crc kubenswrapper[4986]: I1203 13:11:49.858388 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-n24tb\" (UniqueName: \"kubernetes.io/projected/bd89b324-ae85-4e4e-b40b-a76a7ae8e498-kube-api-access-n24tb\") pod \"nmstate-handler-rgwgv\" (UID: \"bd89b324-ae85-4e4e-b40b-a76a7ae8e498\") " pod="openshift-nmstate/nmstate-handler-rgwgv" Dec 03 13:11:49 crc kubenswrapper[4986]: I1203 13:11:49.860548 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86w6m\" (UniqueName: \"kubernetes.io/projected/c233dd25-afb2-4ee7-b907-c79d08e02af6-kube-api-access-86w6m\") pod \"nmstate-webhook-5f6d4c5ccb-zztq9\" (UID: \"c233dd25-afb2-4ee7-b907-c79d08e02af6\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-zztq9" Dec 03 13:11:49 crc kubenswrapper[4986]: I1203 13:11:49.863027 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqfqj\" (UniqueName: \"kubernetes.io/projected/cb8d169c-9c96-403d-9f6b-357dd8ccc78a-kube-api-access-cqfqj\") pod \"nmstate-metrics-7f946cbc9-ll678\" (UID: \"cb8d169c-9c96-403d-9f6b-357dd8ccc78a\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-ll678" Dec 03 13:11:49 crc kubenswrapper[4986]: I1203 13:11:49.899268 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-758c8fb5b-28296"] Dec 03 13:11:49 crc kubenswrapper[4986]: I1203 13:11:49.900051 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-758c8fb5b-28296" Dec 03 13:11:49 crc kubenswrapper[4986]: I1203 13:11:49.900960 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-ll678" Dec 03 13:11:49 crc kubenswrapper[4986]: I1203 13:11:49.911887 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-758c8fb5b-28296"] Dec 03 13:11:49 crc kubenswrapper[4986]: I1203 13:11:49.931424 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-rgwgv" Dec 03 13:11:49 crc kubenswrapper[4986]: I1203 13:11:49.936466 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/c4ff9001-bea2-41b9-8820-0c46e15b2fbb-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-8lb6z\" (UID: \"c4ff9001-bea2-41b9-8820-0c46e15b2fbb\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-8lb6z" Dec 03 13:11:49 crc kubenswrapper[4986]: I1203 13:11:49.936526 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crslp\" (UniqueName: \"kubernetes.io/projected/c4ff9001-bea2-41b9-8820-0c46e15b2fbb-kube-api-access-crslp\") pod \"nmstate-console-plugin-7fbb5f6569-8lb6z\" (UID: \"c4ff9001-bea2-41b9-8820-0c46e15b2fbb\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-8lb6z" Dec 03 13:11:49 crc kubenswrapper[4986]: I1203 13:11:49.936616 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/c4ff9001-bea2-41b9-8820-0c46e15b2fbb-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-8lb6z\" (UID: \"c4ff9001-bea2-41b9-8820-0c46e15b2fbb\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-8lb6z" Dec 03 13:11:49 crc kubenswrapper[4986]: I1203 13:11:49.937721 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/c4ff9001-bea2-41b9-8820-0c46e15b2fbb-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-8lb6z\" (UID: \"c4ff9001-bea2-41b9-8820-0c46e15b2fbb\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-8lb6z" Dec 03 13:11:49 crc kubenswrapper[4986]: I1203 13:11:49.942230 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/c4ff9001-bea2-41b9-8820-0c46e15b2fbb-plugin-serving-cert\") 
pod \"nmstate-console-plugin-7fbb5f6569-8lb6z\" (UID: \"c4ff9001-bea2-41b9-8820-0c46e15b2fbb\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-8lb6z" Dec 03 13:11:49 crc kubenswrapper[4986]: I1203 13:11:49.958606 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crslp\" (UniqueName: \"kubernetes.io/projected/c4ff9001-bea2-41b9-8820-0c46e15b2fbb-kube-api-access-crslp\") pod \"nmstate-console-plugin-7fbb5f6569-8lb6z\" (UID: \"c4ff9001-bea2-41b9-8820-0c46e15b2fbb\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-8lb6z" Dec 03 13:11:50 crc kubenswrapper[4986]: I1203 13:11:50.037454 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/08a64b9b-d303-4041-8fb1-0a1d1d1f2a1f-console-serving-cert\") pod \"console-758c8fb5b-28296\" (UID: \"08a64b9b-d303-4041-8fb1-0a1d1d1f2a1f\") " pod="openshift-console/console-758c8fb5b-28296" Dec 03 13:11:50 crc kubenswrapper[4986]: I1203 13:11:50.037498 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/08a64b9b-d303-4041-8fb1-0a1d1d1f2a1f-oauth-serving-cert\") pod \"console-758c8fb5b-28296\" (UID: \"08a64b9b-d303-4041-8fb1-0a1d1d1f2a1f\") " pod="openshift-console/console-758c8fb5b-28296" Dec 03 13:11:50 crc kubenswrapper[4986]: I1203 13:11:50.037555 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/08a64b9b-d303-4041-8fb1-0a1d1d1f2a1f-console-config\") pod \"console-758c8fb5b-28296\" (UID: \"08a64b9b-d303-4041-8fb1-0a1d1d1f2a1f\") " pod="openshift-console/console-758c8fb5b-28296" Dec 03 13:11:50 crc kubenswrapper[4986]: I1203 13:11:50.037644 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-8lb6z" Dec 03 13:11:50 crc kubenswrapper[4986]: I1203 13:11:50.037727 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/08a64b9b-d303-4041-8fb1-0a1d1d1f2a1f-console-oauth-config\") pod \"console-758c8fb5b-28296\" (UID: \"08a64b9b-d303-4041-8fb1-0a1d1d1f2a1f\") " pod="openshift-console/console-758c8fb5b-28296" Dec 03 13:11:50 crc kubenswrapper[4986]: I1203 13:11:50.037770 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgxwz\" (UniqueName: \"kubernetes.io/projected/08a64b9b-d303-4041-8fb1-0a1d1d1f2a1f-kube-api-access-zgxwz\") pod \"console-758c8fb5b-28296\" (UID: \"08a64b9b-d303-4041-8fb1-0a1d1d1f2a1f\") " pod="openshift-console/console-758c8fb5b-28296" Dec 03 13:11:50 crc kubenswrapper[4986]: I1203 13:11:50.037922 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/08a64b9b-d303-4041-8fb1-0a1d1d1f2a1f-service-ca\") pod \"console-758c8fb5b-28296\" (UID: \"08a64b9b-d303-4041-8fb1-0a1d1d1f2a1f\") " pod="openshift-console/console-758c8fb5b-28296" Dec 03 13:11:50 crc kubenswrapper[4986]: I1203 13:11:50.037977 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08a64b9b-d303-4041-8fb1-0a1d1d1f2a1f-trusted-ca-bundle\") pod \"console-758c8fb5b-28296\" (UID: \"08a64b9b-d303-4041-8fb1-0a1d1d1f2a1f\") " pod="openshift-console/console-758c8fb5b-28296" Dec 03 13:11:50 crc kubenswrapper[4986]: I1203 13:11:50.113683 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-ll678"] Dec 03 13:11:50 crc kubenswrapper[4986]: W1203 13:11:50.118395 4986 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb8d169c_9c96_403d_9f6b_357dd8ccc78a.slice/crio-56b9efc021e6e8e379666d64e2d31c0fb3e44c6fcd1d5dc8e682ef4319338873 WatchSource:0}: Error finding container 56b9efc021e6e8e379666d64e2d31c0fb3e44c6fcd1d5dc8e682ef4319338873: Status 404 returned error can't find the container with id 56b9efc021e6e8e379666d64e2d31c0fb3e44c6fcd1d5dc8e682ef4319338873 Dec 03 13:11:50 crc kubenswrapper[4986]: I1203 13:11:50.139005 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/08a64b9b-d303-4041-8fb1-0a1d1d1f2a1f-service-ca\") pod \"console-758c8fb5b-28296\" (UID: \"08a64b9b-d303-4041-8fb1-0a1d1d1f2a1f\") " pod="openshift-console/console-758c8fb5b-28296" Dec 03 13:11:50 crc kubenswrapper[4986]: I1203 13:11:50.139083 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08a64b9b-d303-4041-8fb1-0a1d1d1f2a1f-trusted-ca-bundle\") pod \"console-758c8fb5b-28296\" (UID: \"08a64b9b-d303-4041-8fb1-0a1d1d1f2a1f\") " pod="openshift-console/console-758c8fb5b-28296" Dec 03 13:11:50 crc kubenswrapper[4986]: I1203 13:11:50.139119 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/08a64b9b-d303-4041-8fb1-0a1d1d1f2a1f-console-serving-cert\") pod \"console-758c8fb5b-28296\" (UID: \"08a64b9b-d303-4041-8fb1-0a1d1d1f2a1f\") " pod="openshift-console/console-758c8fb5b-28296" Dec 03 13:11:50 crc kubenswrapper[4986]: I1203 13:11:50.139148 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/08a64b9b-d303-4041-8fb1-0a1d1d1f2a1f-oauth-serving-cert\") pod \"console-758c8fb5b-28296\" (UID: \"08a64b9b-d303-4041-8fb1-0a1d1d1f2a1f\") " 
pod="openshift-console/console-758c8fb5b-28296" Dec 03 13:11:50 crc kubenswrapper[4986]: I1203 13:11:50.139221 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/08a64b9b-d303-4041-8fb1-0a1d1d1f2a1f-console-config\") pod \"console-758c8fb5b-28296\" (UID: \"08a64b9b-d303-4041-8fb1-0a1d1d1f2a1f\") " pod="openshift-console/console-758c8fb5b-28296" Dec 03 13:11:50 crc kubenswrapper[4986]: I1203 13:11:50.139255 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/08a64b9b-d303-4041-8fb1-0a1d1d1f2a1f-console-oauth-config\") pod \"console-758c8fb5b-28296\" (UID: \"08a64b9b-d303-4041-8fb1-0a1d1d1f2a1f\") " pod="openshift-console/console-758c8fb5b-28296" Dec 03 13:11:50 crc kubenswrapper[4986]: I1203 13:11:50.139316 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgxwz\" (UniqueName: \"kubernetes.io/projected/08a64b9b-d303-4041-8fb1-0a1d1d1f2a1f-kube-api-access-zgxwz\") pod \"console-758c8fb5b-28296\" (UID: \"08a64b9b-d303-4041-8fb1-0a1d1d1f2a1f\") " pod="openshift-console/console-758c8fb5b-28296" Dec 03 13:11:50 crc kubenswrapper[4986]: I1203 13:11:50.140938 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/08a64b9b-d303-4041-8fb1-0a1d1d1f2a1f-service-ca\") pod \"console-758c8fb5b-28296\" (UID: \"08a64b9b-d303-4041-8fb1-0a1d1d1f2a1f\") " pod="openshift-console/console-758c8fb5b-28296" Dec 03 13:11:50 crc kubenswrapper[4986]: I1203 13:11:50.141341 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08a64b9b-d303-4041-8fb1-0a1d1d1f2a1f-trusted-ca-bundle\") pod \"console-758c8fb5b-28296\" (UID: \"08a64b9b-d303-4041-8fb1-0a1d1d1f2a1f\") " pod="openshift-console/console-758c8fb5b-28296" Dec 03 
13:11:50 crc kubenswrapper[4986]: I1203 13:11:50.141607 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/08a64b9b-d303-4041-8fb1-0a1d1d1f2a1f-oauth-serving-cert\") pod \"console-758c8fb5b-28296\" (UID: \"08a64b9b-d303-4041-8fb1-0a1d1d1f2a1f\") " pod="openshift-console/console-758c8fb5b-28296" Dec 03 13:11:50 crc kubenswrapper[4986]: I1203 13:11:50.142222 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/08a64b9b-d303-4041-8fb1-0a1d1d1f2a1f-console-config\") pod \"console-758c8fb5b-28296\" (UID: \"08a64b9b-d303-4041-8fb1-0a1d1d1f2a1f\") " pod="openshift-console/console-758c8fb5b-28296" Dec 03 13:11:50 crc kubenswrapper[4986]: I1203 13:11:50.146540 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/08a64b9b-d303-4041-8fb1-0a1d1d1f2a1f-console-serving-cert\") pod \"console-758c8fb5b-28296\" (UID: \"08a64b9b-d303-4041-8fb1-0a1d1d1f2a1f\") " pod="openshift-console/console-758c8fb5b-28296" Dec 03 13:11:50 crc kubenswrapper[4986]: I1203 13:11:50.147215 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/08a64b9b-d303-4041-8fb1-0a1d1d1f2a1f-console-oauth-config\") pod \"console-758c8fb5b-28296\" (UID: \"08a64b9b-d303-4041-8fb1-0a1d1d1f2a1f\") " pod="openshift-console/console-758c8fb5b-28296" Dec 03 13:11:50 crc kubenswrapper[4986]: I1203 13:11:50.163682 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgxwz\" (UniqueName: \"kubernetes.io/projected/08a64b9b-d303-4041-8fb1-0a1d1d1f2a1f-kube-api-access-zgxwz\") pod \"console-758c8fb5b-28296\" (UID: \"08a64b9b-d303-4041-8fb1-0a1d1d1f2a1f\") " pod="openshift-console/console-758c8fb5b-28296" Dec 03 13:11:50 crc kubenswrapper[4986]: I1203 13:11:50.242544 4986 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-8lb6z"] Dec 03 13:11:50 crc kubenswrapper[4986]: W1203 13:11:50.247005 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4ff9001_bea2_41b9_8820_0c46e15b2fbb.slice/crio-05b8a7c345cd45408601c2b8b1b1a4168e5d2a27d73d7776ba26eb12a322c2cc WatchSource:0}: Error finding container 05b8a7c345cd45408601c2b8b1b1a4168e5d2a27d73d7776ba26eb12a322c2cc: Status 404 returned error can't find the container with id 05b8a7c345cd45408601c2b8b1b1a4168e5d2a27d73d7776ba26eb12a322c2cc Dec 03 13:11:50 crc kubenswrapper[4986]: I1203 13:11:50.284012 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-758c8fb5b-28296" Dec 03 13:11:50 crc kubenswrapper[4986]: I1203 13:11:50.341411 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c233dd25-afb2-4ee7-b907-c79d08e02af6-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-zztq9\" (UID: \"c233dd25-afb2-4ee7-b907-c79d08e02af6\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-zztq9" Dec 03 13:11:50 crc kubenswrapper[4986]: I1203 13:11:50.345715 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c233dd25-afb2-4ee7-b907-c79d08e02af6-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-zztq9\" (UID: \"c233dd25-afb2-4ee7-b907-c79d08e02af6\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-zztq9" Dec 03 13:11:50 crc kubenswrapper[4986]: I1203 13:11:50.554998 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-zztq9" Dec 03 13:11:50 crc kubenswrapper[4986]: I1203 13:11:50.682050 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-758c8fb5b-28296"] Dec 03 13:11:50 crc kubenswrapper[4986]: W1203 13:11:50.695934 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08a64b9b_d303_4041_8fb1_0a1d1d1f2a1f.slice/crio-fb215f6c3e395d31fed8db05c411c67c0c8871e0b28b4460f719f0b5bbbe4be0 WatchSource:0}: Error finding container fb215f6c3e395d31fed8db05c411c67c0c8871e0b28b4460f719f0b5bbbe4be0: Status 404 returned error can't find the container with id fb215f6c3e395d31fed8db05c411c67c0c8871e0b28b4460f719f0b5bbbe4be0 Dec 03 13:11:50 crc kubenswrapper[4986]: I1203 13:11:50.695972 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-8lb6z" event={"ID":"c4ff9001-bea2-41b9-8820-0c46e15b2fbb","Type":"ContainerStarted","Data":"05b8a7c345cd45408601c2b8b1b1a4168e5d2a27d73d7776ba26eb12a322c2cc"} Dec 03 13:11:50 crc kubenswrapper[4986]: I1203 13:11:50.697107 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-ll678" event={"ID":"cb8d169c-9c96-403d-9f6b-357dd8ccc78a","Type":"ContainerStarted","Data":"56b9efc021e6e8e379666d64e2d31c0fb3e44c6fcd1d5dc8e682ef4319338873"} Dec 03 13:11:50 crc kubenswrapper[4986]: I1203 13:11:50.698046 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-rgwgv" event={"ID":"bd89b324-ae85-4e4e-b40b-a76a7ae8e498","Type":"ContainerStarted","Data":"eb7f66a1ca2d2669fc64fdb4162bec59bde1551b6d4b5cb4c1094c2da36c5db3"} Dec 03 13:11:50 crc kubenswrapper[4986]: I1203 13:11:50.802884 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-zztq9"] Dec 03 13:11:50 crc kubenswrapper[4986]: W1203 
13:11:50.808817 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc233dd25_afb2_4ee7_b907_c79d08e02af6.slice/crio-9be1384a8d5aac2fe4f0f7817f46b2d46348ea69e939f4b13b514450b8d31d2b WatchSource:0}: Error finding container 9be1384a8d5aac2fe4f0f7817f46b2d46348ea69e939f4b13b514450b8d31d2b: Status 404 returned error can't find the container with id 9be1384a8d5aac2fe4f0f7817f46b2d46348ea69e939f4b13b514450b8d31d2b Dec 03 13:11:51 crc kubenswrapper[4986]: I1203 13:11:51.704219 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-zztq9" event={"ID":"c233dd25-afb2-4ee7-b907-c79d08e02af6","Type":"ContainerStarted","Data":"9be1384a8d5aac2fe4f0f7817f46b2d46348ea69e939f4b13b514450b8d31d2b"} Dec 03 13:11:51 crc kubenswrapper[4986]: I1203 13:11:51.706338 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-758c8fb5b-28296" event={"ID":"08a64b9b-d303-4041-8fb1-0a1d1d1f2a1f","Type":"ContainerStarted","Data":"d2007de5d5c8072cf7de69c2c36557201fb024fadaa68f7a1bac371dca24798f"} Dec 03 13:11:51 crc kubenswrapper[4986]: I1203 13:11:51.706413 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-758c8fb5b-28296" event={"ID":"08a64b9b-d303-4041-8fb1-0a1d1d1f2a1f","Type":"ContainerStarted","Data":"fb215f6c3e395d31fed8db05c411c67c0c8871e0b28b4460f719f0b5bbbe4be0"} Dec 03 13:11:51 crc kubenswrapper[4986]: I1203 13:11:51.723990 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-758c8fb5b-28296" podStartSLOduration=2.723972122 podStartE2EDuration="2.723972122s" podCreationTimestamp="2025-12-03 13:11:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:11:51.720760126 +0000 UTC m=+971.187191337" watchObservedRunningTime="2025-12-03 13:11:51.723972122 
+0000 UTC m=+971.190403323" Dec 03 13:11:53 crc kubenswrapper[4986]: I1203 13:11:53.719829 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-rgwgv" event={"ID":"bd89b324-ae85-4e4e-b40b-a76a7ae8e498","Type":"ContainerStarted","Data":"3945ad60ad5cc6d14e9ecc78e03dd1b4cf6fc8ad9b8a746fe80864a436c26356"} Dec 03 13:11:53 crc kubenswrapper[4986]: I1203 13:11:53.720371 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-rgwgv" Dec 03 13:11:53 crc kubenswrapper[4986]: I1203 13:11:53.721419 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-8lb6z" event={"ID":"c4ff9001-bea2-41b9-8820-0c46e15b2fbb","Type":"ContainerStarted","Data":"d28a4edb73ab1fb2f902d24a7fc782c546f14ddf76f6dcb316122625189c5f38"} Dec 03 13:11:53 crc kubenswrapper[4986]: I1203 13:11:53.722648 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-ll678" event={"ID":"cb8d169c-9c96-403d-9f6b-357dd8ccc78a","Type":"ContainerStarted","Data":"ff41d9ae6badb68bf83d1011a8df1460adda9f98201ebb88439ff0746d049888"} Dec 03 13:11:53 crc kubenswrapper[4986]: I1203 13:11:53.724077 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-zztq9" event={"ID":"c233dd25-afb2-4ee7-b907-c79d08e02af6","Type":"ContainerStarted","Data":"ea328497a8f41d4aa4fe69276d21420f736c5b607a2bf0f5bbb39f2b47a4bc7a"} Dec 03 13:11:53 crc kubenswrapper[4986]: I1203 13:11:53.724232 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-zztq9" Dec 03 13:11:53 crc kubenswrapper[4986]: I1203 13:11:53.738005 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-rgwgv" podStartSLOduration=1.792627767 podStartE2EDuration="4.737986178s" podCreationTimestamp="2025-12-03 13:11:49 +0000 UTC" 
firstStartedPulling="2025-12-03 13:11:49.958082453 +0000 UTC m=+969.424513644" lastFinishedPulling="2025-12-03 13:11:52.903440864 +0000 UTC m=+972.369872055" observedRunningTime="2025-12-03 13:11:53.733001215 +0000 UTC m=+973.199432446" watchObservedRunningTime="2025-12-03 13:11:53.737986178 +0000 UTC m=+973.204417369" Dec 03 13:11:53 crc kubenswrapper[4986]: I1203 13:11:53.759120 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-zztq9" podStartSLOduration=2.679908578 podStartE2EDuration="4.759097885s" podCreationTimestamp="2025-12-03 13:11:49 +0000 UTC" firstStartedPulling="2025-12-03 13:11:50.8113382 +0000 UTC m=+970.277769391" lastFinishedPulling="2025-12-03 13:11:52.890527497 +0000 UTC m=+972.356958698" observedRunningTime="2025-12-03 13:11:53.752505629 +0000 UTC m=+973.218936840" watchObservedRunningTime="2025-12-03 13:11:53.759097885 +0000 UTC m=+973.225529076" Dec 03 13:11:53 crc kubenswrapper[4986]: I1203 13:11:53.796161 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-8lb6z" podStartSLOduration=2.163471271 podStartE2EDuration="4.796144271s" podCreationTimestamp="2025-12-03 13:11:49 +0000 UTC" firstStartedPulling="2025-12-03 13:11:50.249122603 +0000 UTC m=+969.715553794" lastFinishedPulling="2025-12-03 13:11:52.881795593 +0000 UTC m=+972.348226794" observedRunningTime="2025-12-03 13:11:53.784415306 +0000 UTC m=+973.250846537" watchObservedRunningTime="2025-12-03 13:11:53.796144271 +0000 UTC m=+973.262575462" Dec 03 13:11:55 crc kubenswrapper[4986]: I1203 13:11:55.737694 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-ll678" event={"ID":"cb8d169c-9c96-403d-9f6b-357dd8ccc78a","Type":"ContainerStarted","Data":"a99e55455d1792c6ece35e6de3bdc1b6d4cc8e9b388312c90948a4bf22c7e708"} Dec 03 13:11:55 crc kubenswrapper[4986]: I1203 13:11:55.761625 4986 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-ll678" podStartSLOduration=1.83738291 podStartE2EDuration="6.761602993s" podCreationTimestamp="2025-12-03 13:11:49 +0000 UTC" firstStartedPulling="2025-12-03 13:11:50.121029031 +0000 UTC m=+969.587460222" lastFinishedPulling="2025-12-03 13:11:55.045249104 +0000 UTC m=+974.511680305" observedRunningTime="2025-12-03 13:11:55.759348872 +0000 UTC m=+975.225780063" watchObservedRunningTime="2025-12-03 13:11:55.761602993 +0000 UTC m=+975.228034204" Dec 03 13:11:59 crc kubenswrapper[4986]: I1203 13:11:59.962697 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-rgwgv" Dec 03 13:12:00 crc kubenswrapper[4986]: I1203 13:12:00.284580 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-758c8fb5b-28296" Dec 03 13:12:00 crc kubenswrapper[4986]: I1203 13:12:00.284705 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-758c8fb5b-28296" Dec 03 13:12:00 crc kubenswrapper[4986]: I1203 13:12:00.294846 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-758c8fb5b-28296" Dec 03 13:12:00 crc kubenswrapper[4986]: I1203 13:12:00.777066 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-758c8fb5b-28296" Dec 03 13:12:00 crc kubenswrapper[4986]: I1203 13:12:00.838498 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-sk8ll"] Dec 03 13:12:03 crc kubenswrapper[4986]: I1203 13:12:03.491710 4986 patch_prober.go:28] interesting pod/machine-config-daemon-xggpj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Dec 03 13:12:03 crc kubenswrapper[4986]: I1203 13:12:03.492069 4986 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 13:12:03 crc kubenswrapper[4986]: I1203 13:12:03.492124 4986 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" Dec 03 13:12:03 crc kubenswrapper[4986]: I1203 13:12:03.492829 4986 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8955c8a516ca8c4bf5f23613b3c8c76be6f843b93d44621c8aa9c9ffd7fe8443"} pod="openshift-machine-config-operator/machine-config-daemon-xggpj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 13:12:03 crc kubenswrapper[4986]: I1203 13:12:03.492900 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" containerID="cri-o://8955c8a516ca8c4bf5f23613b3c8c76be6f843b93d44621c8aa9c9ffd7fe8443" gracePeriod=600 Dec 03 13:12:03 crc kubenswrapper[4986]: I1203 13:12:03.796696 4986 generic.go:334] "Generic (PLEG): container finished" podID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerID="8955c8a516ca8c4bf5f23613b3c8c76be6f843b93d44621c8aa9c9ffd7fe8443" exitCode=0 Dec 03 13:12:03 crc kubenswrapper[4986]: I1203 13:12:03.796934 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" 
event={"ID":"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c","Type":"ContainerDied","Data":"8955c8a516ca8c4bf5f23613b3c8c76be6f843b93d44621c8aa9c9ffd7fe8443"} Dec 03 13:12:03 crc kubenswrapper[4986]: I1203 13:12:03.797058 4986 scope.go:117] "RemoveContainer" containerID="7d6eb68371a8474a49cff3df53b688047bada4287bac42533432af78bbe4483e" Dec 03 13:12:04 crc kubenswrapper[4986]: I1203 13:12:04.805105 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" event={"ID":"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c","Type":"ContainerStarted","Data":"ab3578e12fb223968075bdef0a7259f6a8a78f54ad544fd0e2d4be111538db67"} Dec 03 13:12:10 crc kubenswrapper[4986]: I1203 13:12:10.562018 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-zztq9" Dec 03 13:12:23 crc kubenswrapper[4986]: I1203 13:12:23.848336 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lg6jv"] Dec 03 13:12:23 crc kubenswrapper[4986]: I1203 13:12:23.850446 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lg6jv" Dec 03 13:12:23 crc kubenswrapper[4986]: I1203 13:12:23.854707 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lg6jv"] Dec 03 13:12:23 crc kubenswrapper[4986]: I1203 13:12:23.855316 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 03 13:12:24 crc kubenswrapper[4986]: I1203 13:12:24.029058 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8ec4e0a9-2294-4df2-b849-9c32dba275f9-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lg6jv\" (UID: \"8ec4e0a9-2294-4df2-b849-9c32dba275f9\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lg6jv" Dec 03 13:12:24 crc kubenswrapper[4986]: I1203 13:12:24.029191 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fprl\" (UniqueName: \"kubernetes.io/projected/8ec4e0a9-2294-4df2-b849-9c32dba275f9-kube-api-access-4fprl\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lg6jv\" (UID: \"8ec4e0a9-2294-4df2-b849-9c32dba275f9\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lg6jv" Dec 03 13:12:24 crc kubenswrapper[4986]: I1203 13:12:24.029252 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8ec4e0a9-2294-4df2-b849-9c32dba275f9-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lg6jv\" (UID: \"8ec4e0a9-2294-4df2-b849-9c32dba275f9\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lg6jv" Dec 03 13:12:24 crc kubenswrapper[4986]: 
I1203 13:12:24.130219 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8ec4e0a9-2294-4df2-b849-9c32dba275f9-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lg6jv\" (UID: \"8ec4e0a9-2294-4df2-b849-9c32dba275f9\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lg6jv" Dec 03 13:12:24 crc kubenswrapper[4986]: I1203 13:12:24.130393 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8ec4e0a9-2294-4df2-b849-9c32dba275f9-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lg6jv\" (UID: \"8ec4e0a9-2294-4df2-b849-9c32dba275f9\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lg6jv" Dec 03 13:12:24 crc kubenswrapper[4986]: I1203 13:12:24.130429 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fprl\" (UniqueName: \"kubernetes.io/projected/8ec4e0a9-2294-4df2-b849-9c32dba275f9-kube-api-access-4fprl\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lg6jv\" (UID: \"8ec4e0a9-2294-4df2-b849-9c32dba275f9\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lg6jv" Dec 03 13:12:24 crc kubenswrapper[4986]: I1203 13:12:24.131050 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8ec4e0a9-2294-4df2-b849-9c32dba275f9-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lg6jv\" (UID: \"8ec4e0a9-2294-4df2-b849-9c32dba275f9\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lg6jv" Dec 03 13:12:24 crc kubenswrapper[4986]: I1203 13:12:24.131107 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/8ec4e0a9-2294-4df2-b849-9c32dba275f9-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lg6jv\" (UID: \"8ec4e0a9-2294-4df2-b849-9c32dba275f9\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lg6jv" Dec 03 13:12:24 crc kubenswrapper[4986]: I1203 13:12:24.160732 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fprl\" (UniqueName: \"kubernetes.io/projected/8ec4e0a9-2294-4df2-b849-9c32dba275f9-kube-api-access-4fprl\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lg6jv\" (UID: \"8ec4e0a9-2294-4df2-b849-9c32dba275f9\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lg6jv" Dec 03 13:12:24 crc kubenswrapper[4986]: I1203 13:12:24.171349 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lg6jv" Dec 03 13:12:24 crc kubenswrapper[4986]: I1203 13:12:24.606832 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lg6jv"] Dec 03 13:12:24 crc kubenswrapper[4986]: W1203 13:12:24.613816 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ec4e0a9_2294_4df2_b849_9c32dba275f9.slice/crio-e842a7c59067934a2430266307d5fa3278f2a978b220b97f28dc1e4cb4b46d69 WatchSource:0}: Error finding container e842a7c59067934a2430266307d5fa3278f2a978b220b97f28dc1e4cb4b46d69: Status 404 returned error can't find the container with id e842a7c59067934a2430266307d5fa3278f2a978b220b97f28dc1e4cb4b46d69 Dec 03 13:12:24 crc kubenswrapper[4986]: I1203 13:12:24.923878 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lg6jv" 
event={"ID":"8ec4e0a9-2294-4df2-b849-9c32dba275f9","Type":"ContainerStarted","Data":"e842a7c59067934a2430266307d5fa3278f2a978b220b97f28dc1e4cb4b46d69"} Dec 03 13:12:25 crc kubenswrapper[4986]: I1203 13:12:25.888862 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-sk8ll" podUID="a069c6b7-9e3b-40fd-b830-0d2a82fadd9a" containerName="console" containerID="cri-o://252f9f7b4a1580301afe8e1df466608770a731f0677c52e1af77c465dbe25ffe" gracePeriod=15 Dec 03 13:12:25 crc kubenswrapper[4986]: I1203 13:12:25.932257 4986 generic.go:334] "Generic (PLEG): container finished" podID="8ec4e0a9-2294-4df2-b849-9c32dba275f9" containerID="2cb40713ecc2563600f4ee8a8ec5fce7b9284021f2602b86b6e6d02871d4ff6b" exitCode=0 Dec 03 13:12:25 crc kubenswrapper[4986]: I1203 13:12:25.932311 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lg6jv" event={"ID":"8ec4e0a9-2294-4df2-b849-9c32dba275f9","Type":"ContainerDied","Data":"2cb40713ecc2563600f4ee8a8ec5fce7b9284021f2602b86b6e6d02871d4ff6b"} Dec 03 13:12:26 crc kubenswrapper[4986]: I1203 13:12:26.275430 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-sk8ll_a069c6b7-9e3b-40fd-b830-0d2a82fadd9a/console/0.log" Dec 03 13:12:26 crc kubenswrapper[4986]: I1203 13:12:26.275511 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-sk8ll" Dec 03 13:12:26 crc kubenswrapper[4986]: I1203 13:12:26.365227 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a069c6b7-9e3b-40fd-b830-0d2a82fadd9a-trusted-ca-bundle\") pod \"a069c6b7-9e3b-40fd-b830-0d2a82fadd9a\" (UID: \"a069c6b7-9e3b-40fd-b830-0d2a82fadd9a\") " Dec 03 13:12:26 crc kubenswrapper[4986]: I1203 13:12:26.365312 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a069c6b7-9e3b-40fd-b830-0d2a82fadd9a-console-config\") pod \"a069c6b7-9e3b-40fd-b830-0d2a82fadd9a\" (UID: \"a069c6b7-9e3b-40fd-b830-0d2a82fadd9a\") " Dec 03 13:12:26 crc kubenswrapper[4986]: I1203 13:12:26.365374 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-df95s\" (UniqueName: \"kubernetes.io/projected/a069c6b7-9e3b-40fd-b830-0d2a82fadd9a-kube-api-access-df95s\") pod \"a069c6b7-9e3b-40fd-b830-0d2a82fadd9a\" (UID: \"a069c6b7-9e3b-40fd-b830-0d2a82fadd9a\") " Dec 03 13:12:26 crc kubenswrapper[4986]: I1203 13:12:26.365399 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a069c6b7-9e3b-40fd-b830-0d2a82fadd9a-oauth-serving-cert\") pod \"a069c6b7-9e3b-40fd-b830-0d2a82fadd9a\" (UID: \"a069c6b7-9e3b-40fd-b830-0d2a82fadd9a\") " Dec 03 13:12:26 crc kubenswrapper[4986]: I1203 13:12:26.365422 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a069c6b7-9e3b-40fd-b830-0d2a82fadd9a-console-oauth-config\") pod \"a069c6b7-9e3b-40fd-b830-0d2a82fadd9a\" (UID: \"a069c6b7-9e3b-40fd-b830-0d2a82fadd9a\") " Dec 03 13:12:26 crc kubenswrapper[4986]: I1203 13:12:26.365527 4986 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a069c6b7-9e3b-40fd-b830-0d2a82fadd9a-service-ca\") pod \"a069c6b7-9e3b-40fd-b830-0d2a82fadd9a\" (UID: \"a069c6b7-9e3b-40fd-b830-0d2a82fadd9a\") " Dec 03 13:12:26 crc kubenswrapper[4986]: I1203 13:12:26.365566 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a069c6b7-9e3b-40fd-b830-0d2a82fadd9a-console-serving-cert\") pod \"a069c6b7-9e3b-40fd-b830-0d2a82fadd9a\" (UID: \"a069c6b7-9e3b-40fd-b830-0d2a82fadd9a\") " Dec 03 13:12:26 crc kubenswrapper[4986]: I1203 13:12:26.371664 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a069c6b7-9e3b-40fd-b830-0d2a82fadd9a-console-config" (OuterVolumeSpecName: "console-config") pod "a069c6b7-9e3b-40fd-b830-0d2a82fadd9a" (UID: "a069c6b7-9e3b-40fd-b830-0d2a82fadd9a"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:12:26 crc kubenswrapper[4986]: I1203 13:12:26.371723 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a069c6b7-9e3b-40fd-b830-0d2a82fadd9a-service-ca" (OuterVolumeSpecName: "service-ca") pod "a069c6b7-9e3b-40fd-b830-0d2a82fadd9a" (UID: "a069c6b7-9e3b-40fd-b830-0d2a82fadd9a"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:12:26 crc kubenswrapper[4986]: I1203 13:12:26.371772 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a069c6b7-9e3b-40fd-b830-0d2a82fadd9a-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "a069c6b7-9e3b-40fd-b830-0d2a82fadd9a" (UID: "a069c6b7-9e3b-40fd-b830-0d2a82fadd9a"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:12:26 crc kubenswrapper[4986]: I1203 13:12:26.371824 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a069c6b7-9e3b-40fd-b830-0d2a82fadd9a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "a069c6b7-9e3b-40fd-b830-0d2a82fadd9a" (UID: "a069c6b7-9e3b-40fd-b830-0d2a82fadd9a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:12:26 crc kubenswrapper[4986]: I1203 13:12:26.372391 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a069c6b7-9e3b-40fd-b830-0d2a82fadd9a-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "a069c6b7-9e3b-40fd-b830-0d2a82fadd9a" (UID: "a069c6b7-9e3b-40fd-b830-0d2a82fadd9a"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:12:26 crc kubenswrapper[4986]: I1203 13:12:26.374021 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a069c6b7-9e3b-40fd-b830-0d2a82fadd9a-kube-api-access-df95s" (OuterVolumeSpecName: "kube-api-access-df95s") pod "a069c6b7-9e3b-40fd-b830-0d2a82fadd9a" (UID: "a069c6b7-9e3b-40fd-b830-0d2a82fadd9a"). InnerVolumeSpecName "kube-api-access-df95s". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:12:26 crc kubenswrapper[4986]: I1203 13:12:26.374460 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a069c6b7-9e3b-40fd-b830-0d2a82fadd9a-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "a069c6b7-9e3b-40fd-b830-0d2a82fadd9a" (UID: "a069c6b7-9e3b-40fd-b830-0d2a82fadd9a"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:12:26 crc kubenswrapper[4986]: I1203 13:12:26.466399 4986 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a069c6b7-9e3b-40fd-b830-0d2a82fadd9a-console-config\") on node \"crc\" DevicePath \"\"" Dec 03 13:12:26 crc kubenswrapper[4986]: I1203 13:12:26.466430 4986 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a069c6b7-9e3b-40fd-b830-0d2a82fadd9a-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 13:12:26 crc kubenswrapper[4986]: I1203 13:12:26.466439 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-df95s\" (UniqueName: \"kubernetes.io/projected/a069c6b7-9e3b-40fd-b830-0d2a82fadd9a-kube-api-access-df95s\") on node \"crc\" DevicePath \"\"" Dec 03 13:12:26 crc kubenswrapper[4986]: I1203 13:12:26.466450 4986 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a069c6b7-9e3b-40fd-b830-0d2a82fadd9a-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 03 13:12:26 crc kubenswrapper[4986]: I1203 13:12:26.466459 4986 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a069c6b7-9e3b-40fd-b830-0d2a82fadd9a-service-ca\") on node \"crc\" DevicePath \"\"" Dec 03 13:12:26 crc kubenswrapper[4986]: I1203 13:12:26.466467 4986 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a069c6b7-9e3b-40fd-b830-0d2a82fadd9a-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 13:12:26 crc kubenswrapper[4986]: I1203 13:12:26.466474 4986 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a069c6b7-9e3b-40fd-b830-0d2a82fadd9a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 13:12:26 crc 
kubenswrapper[4986]: I1203 13:12:26.940384 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-sk8ll_a069c6b7-9e3b-40fd-b830-0d2a82fadd9a/console/0.log" Dec 03 13:12:26 crc kubenswrapper[4986]: I1203 13:12:26.940647 4986 generic.go:334] "Generic (PLEG): container finished" podID="a069c6b7-9e3b-40fd-b830-0d2a82fadd9a" containerID="252f9f7b4a1580301afe8e1df466608770a731f0677c52e1af77c465dbe25ffe" exitCode=2 Dec 03 13:12:26 crc kubenswrapper[4986]: I1203 13:12:26.940679 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-sk8ll" event={"ID":"a069c6b7-9e3b-40fd-b830-0d2a82fadd9a","Type":"ContainerDied","Data":"252f9f7b4a1580301afe8e1df466608770a731f0677c52e1af77c465dbe25ffe"} Dec 03 13:12:26 crc kubenswrapper[4986]: I1203 13:12:26.940707 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-sk8ll" event={"ID":"a069c6b7-9e3b-40fd-b830-0d2a82fadd9a","Type":"ContainerDied","Data":"98098d86768289e9c18d9532fc91ef320583cf5beb823415bd1b4310f79799f9"} Dec 03 13:12:26 crc kubenswrapper[4986]: I1203 13:12:26.940724 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-sk8ll" Dec 03 13:12:26 crc kubenswrapper[4986]: I1203 13:12:26.940726 4986 scope.go:117] "RemoveContainer" containerID="252f9f7b4a1580301afe8e1df466608770a731f0677c52e1af77c465dbe25ffe" Dec 03 13:12:26 crc kubenswrapper[4986]: I1203 13:12:26.975691 4986 scope.go:117] "RemoveContainer" containerID="252f9f7b4a1580301afe8e1df466608770a731f0677c52e1af77c465dbe25ffe" Dec 03 13:12:26 crc kubenswrapper[4986]: E1203 13:12:26.976229 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"252f9f7b4a1580301afe8e1df466608770a731f0677c52e1af77c465dbe25ffe\": container with ID starting with 252f9f7b4a1580301afe8e1df466608770a731f0677c52e1af77c465dbe25ffe not found: ID does not exist" containerID="252f9f7b4a1580301afe8e1df466608770a731f0677c52e1af77c465dbe25ffe" Dec 03 13:12:26 crc kubenswrapper[4986]: I1203 13:12:26.976302 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"252f9f7b4a1580301afe8e1df466608770a731f0677c52e1af77c465dbe25ffe"} err="failed to get container status \"252f9f7b4a1580301afe8e1df466608770a731f0677c52e1af77c465dbe25ffe\": rpc error: code = NotFound desc = could not find container \"252f9f7b4a1580301afe8e1df466608770a731f0677c52e1af77c465dbe25ffe\": container with ID starting with 252f9f7b4a1580301afe8e1df466608770a731f0677c52e1af77c465dbe25ffe not found: ID does not exist" Dec 03 13:12:26 crc kubenswrapper[4986]: I1203 13:12:26.976246 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-sk8ll"] Dec 03 13:12:26 crc kubenswrapper[4986]: I1203 13:12:26.981743 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-sk8ll"] Dec 03 13:12:28 crc kubenswrapper[4986]: I1203 13:12:28.954479 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a069c6b7-9e3b-40fd-b830-0d2a82fadd9a" 
path="/var/lib/kubelet/pods/a069c6b7-9e3b-40fd-b830-0d2a82fadd9a/volumes" Dec 03 13:12:28 crc kubenswrapper[4986]: I1203 13:12:28.957201 4986 generic.go:334] "Generic (PLEG): container finished" podID="8ec4e0a9-2294-4df2-b849-9c32dba275f9" containerID="cdaa3ca597546a54928416e7f032f5c56173e8788d62654c0dd1fd85261cc188" exitCode=0 Dec 03 13:12:28 crc kubenswrapper[4986]: I1203 13:12:28.957260 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lg6jv" event={"ID":"8ec4e0a9-2294-4df2-b849-9c32dba275f9","Type":"ContainerDied","Data":"cdaa3ca597546a54928416e7f032f5c56173e8788d62654c0dd1fd85261cc188"} Dec 03 13:12:29 crc kubenswrapper[4986]: I1203 13:12:29.967922 4986 generic.go:334] "Generic (PLEG): container finished" podID="8ec4e0a9-2294-4df2-b849-9c32dba275f9" containerID="d9f9e1cf208910e341eb1ca7a465654531fd2960c2663869c5a2da4c81909de8" exitCode=0 Dec 03 13:12:29 crc kubenswrapper[4986]: I1203 13:12:29.967960 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lg6jv" event={"ID":"8ec4e0a9-2294-4df2-b849-9c32dba275f9","Type":"ContainerDied","Data":"d9f9e1cf208910e341eb1ca7a465654531fd2960c2663869c5a2da4c81909de8"} Dec 03 13:12:31 crc kubenswrapper[4986]: I1203 13:12:31.262039 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lg6jv" Dec 03 13:12:31 crc kubenswrapper[4986]: I1203 13:12:31.428871 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8ec4e0a9-2294-4df2-b849-9c32dba275f9-bundle\") pod \"8ec4e0a9-2294-4df2-b849-9c32dba275f9\" (UID: \"8ec4e0a9-2294-4df2-b849-9c32dba275f9\") " Dec 03 13:12:31 crc kubenswrapper[4986]: I1203 13:12:31.428970 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8ec4e0a9-2294-4df2-b849-9c32dba275f9-util\") pod \"8ec4e0a9-2294-4df2-b849-9c32dba275f9\" (UID: \"8ec4e0a9-2294-4df2-b849-9c32dba275f9\") " Dec 03 13:12:31 crc kubenswrapper[4986]: I1203 13:12:31.429004 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fprl\" (UniqueName: \"kubernetes.io/projected/8ec4e0a9-2294-4df2-b849-9c32dba275f9-kube-api-access-4fprl\") pod \"8ec4e0a9-2294-4df2-b849-9c32dba275f9\" (UID: \"8ec4e0a9-2294-4df2-b849-9c32dba275f9\") " Dec 03 13:12:31 crc kubenswrapper[4986]: I1203 13:12:31.431424 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ec4e0a9-2294-4df2-b849-9c32dba275f9-bundle" (OuterVolumeSpecName: "bundle") pod "8ec4e0a9-2294-4df2-b849-9c32dba275f9" (UID: "8ec4e0a9-2294-4df2-b849-9c32dba275f9"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:12:31 crc kubenswrapper[4986]: I1203 13:12:31.436785 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ec4e0a9-2294-4df2-b849-9c32dba275f9-kube-api-access-4fprl" (OuterVolumeSpecName: "kube-api-access-4fprl") pod "8ec4e0a9-2294-4df2-b849-9c32dba275f9" (UID: "8ec4e0a9-2294-4df2-b849-9c32dba275f9"). InnerVolumeSpecName "kube-api-access-4fprl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:12:31 crc kubenswrapper[4986]: I1203 13:12:31.449511 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ec4e0a9-2294-4df2-b849-9c32dba275f9-util" (OuterVolumeSpecName: "util") pod "8ec4e0a9-2294-4df2-b849-9c32dba275f9" (UID: "8ec4e0a9-2294-4df2-b849-9c32dba275f9"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:12:31 crc kubenswrapper[4986]: I1203 13:12:31.530900 4986 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8ec4e0a9-2294-4df2-b849-9c32dba275f9-util\") on node \"crc\" DevicePath \"\"" Dec 03 13:12:31 crc kubenswrapper[4986]: I1203 13:12:31.530954 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fprl\" (UniqueName: \"kubernetes.io/projected/8ec4e0a9-2294-4df2-b849-9c32dba275f9-kube-api-access-4fprl\") on node \"crc\" DevicePath \"\"" Dec 03 13:12:31 crc kubenswrapper[4986]: I1203 13:12:31.530975 4986 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8ec4e0a9-2294-4df2-b849-9c32dba275f9-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 13:12:31 crc kubenswrapper[4986]: I1203 13:12:31.997051 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lg6jv" event={"ID":"8ec4e0a9-2294-4df2-b849-9c32dba275f9","Type":"ContainerDied","Data":"e842a7c59067934a2430266307d5fa3278f2a978b220b97f28dc1e4cb4b46d69"} Dec 03 13:12:31 crc kubenswrapper[4986]: I1203 13:12:31.997110 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e842a7c59067934a2430266307d5fa3278f2a978b220b97f28dc1e4cb4b46d69" Dec 03 13:12:31 crc kubenswrapper[4986]: I1203 13:12:31.997444 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lg6jv" Dec 03 13:12:42 crc kubenswrapper[4986]: I1203 13:12:42.107679 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-5b7fcdf964-xx85j"] Dec 03 13:12:42 crc kubenswrapper[4986]: E1203 13:12:42.108365 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ec4e0a9-2294-4df2-b849-9c32dba275f9" containerName="pull" Dec 03 13:12:42 crc kubenswrapper[4986]: I1203 13:12:42.108376 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ec4e0a9-2294-4df2-b849-9c32dba275f9" containerName="pull" Dec 03 13:12:42 crc kubenswrapper[4986]: E1203 13:12:42.108393 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ec4e0a9-2294-4df2-b849-9c32dba275f9" containerName="util" Dec 03 13:12:42 crc kubenswrapper[4986]: I1203 13:12:42.108399 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ec4e0a9-2294-4df2-b849-9c32dba275f9" containerName="util" Dec 03 13:12:42 crc kubenswrapper[4986]: E1203 13:12:42.108408 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a069c6b7-9e3b-40fd-b830-0d2a82fadd9a" containerName="console" Dec 03 13:12:42 crc kubenswrapper[4986]: I1203 13:12:42.108414 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="a069c6b7-9e3b-40fd-b830-0d2a82fadd9a" containerName="console" Dec 03 13:12:42 crc kubenswrapper[4986]: E1203 13:12:42.108421 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ec4e0a9-2294-4df2-b849-9c32dba275f9" containerName="extract" Dec 03 13:12:42 crc kubenswrapper[4986]: I1203 13:12:42.108427 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ec4e0a9-2294-4df2-b849-9c32dba275f9" containerName="extract" Dec 03 13:12:42 crc kubenswrapper[4986]: I1203 13:12:42.108521 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ec4e0a9-2294-4df2-b849-9c32dba275f9" 
containerName="extract" Dec 03 13:12:42 crc kubenswrapper[4986]: I1203 13:12:42.108533 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="a069c6b7-9e3b-40fd-b830-0d2a82fadd9a" containerName="console" Dec 03 13:12:42 crc kubenswrapper[4986]: I1203 13:12:42.108880 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5b7fcdf964-xx85j" Dec 03 13:12:42 crc kubenswrapper[4986]: I1203 13:12:42.112016 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Dec 03 13:12:42 crc kubenswrapper[4986]: I1203 13:12:42.112475 4986 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Dec 03 13:12:42 crc kubenswrapper[4986]: I1203 13:12:42.112743 4986 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-6g4wd" Dec 03 13:12:42 crc kubenswrapper[4986]: I1203 13:12:42.114974 4986 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Dec 03 13:12:42 crc kubenswrapper[4986]: I1203 13:12:42.115932 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Dec 03 13:12:42 crc kubenswrapper[4986]: I1203 13:12:42.128747 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5b7fcdf964-xx85j"] Dec 03 13:12:42 crc kubenswrapper[4986]: I1203 13:12:42.266249 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c6eca6fc-6cf0-4862-b2a8-caa0c4ec2c8a-webhook-cert\") pod \"metallb-operator-controller-manager-5b7fcdf964-xx85j\" (UID: \"c6eca6fc-6cf0-4862-b2a8-caa0c4ec2c8a\") " pod="metallb-system/metallb-operator-controller-manager-5b7fcdf964-xx85j" Dec 03 
13:12:42 crc kubenswrapper[4986]: I1203 13:12:42.266327 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c6eca6fc-6cf0-4862-b2a8-caa0c4ec2c8a-apiservice-cert\") pod \"metallb-operator-controller-manager-5b7fcdf964-xx85j\" (UID: \"c6eca6fc-6cf0-4862-b2a8-caa0c4ec2c8a\") " pod="metallb-system/metallb-operator-controller-manager-5b7fcdf964-xx85j" Dec 03 13:12:42 crc kubenswrapper[4986]: I1203 13:12:42.266370 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbzvd\" (UniqueName: \"kubernetes.io/projected/c6eca6fc-6cf0-4862-b2a8-caa0c4ec2c8a-kube-api-access-lbzvd\") pod \"metallb-operator-controller-manager-5b7fcdf964-xx85j\" (UID: \"c6eca6fc-6cf0-4862-b2a8-caa0c4ec2c8a\") " pod="metallb-system/metallb-operator-controller-manager-5b7fcdf964-xx85j" Dec 03 13:12:42 crc kubenswrapper[4986]: I1203 13:12:42.367552 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c6eca6fc-6cf0-4862-b2a8-caa0c4ec2c8a-webhook-cert\") pod \"metallb-operator-controller-manager-5b7fcdf964-xx85j\" (UID: \"c6eca6fc-6cf0-4862-b2a8-caa0c4ec2c8a\") " pod="metallb-system/metallb-operator-controller-manager-5b7fcdf964-xx85j" Dec 03 13:12:42 crc kubenswrapper[4986]: I1203 13:12:42.367605 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c6eca6fc-6cf0-4862-b2a8-caa0c4ec2c8a-apiservice-cert\") pod \"metallb-operator-controller-manager-5b7fcdf964-xx85j\" (UID: \"c6eca6fc-6cf0-4862-b2a8-caa0c4ec2c8a\") " pod="metallb-system/metallb-operator-controller-manager-5b7fcdf964-xx85j" Dec 03 13:12:42 crc kubenswrapper[4986]: I1203 13:12:42.367637 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbzvd\" (UniqueName: 
\"kubernetes.io/projected/c6eca6fc-6cf0-4862-b2a8-caa0c4ec2c8a-kube-api-access-lbzvd\") pod \"metallb-operator-controller-manager-5b7fcdf964-xx85j\" (UID: \"c6eca6fc-6cf0-4862-b2a8-caa0c4ec2c8a\") " pod="metallb-system/metallb-operator-controller-manager-5b7fcdf964-xx85j" Dec 03 13:12:42 crc kubenswrapper[4986]: I1203 13:12:42.373172 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c6eca6fc-6cf0-4862-b2a8-caa0c4ec2c8a-webhook-cert\") pod \"metallb-operator-controller-manager-5b7fcdf964-xx85j\" (UID: \"c6eca6fc-6cf0-4862-b2a8-caa0c4ec2c8a\") " pod="metallb-system/metallb-operator-controller-manager-5b7fcdf964-xx85j" Dec 03 13:12:42 crc kubenswrapper[4986]: I1203 13:12:42.383392 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-ddbdbd445-x6ccv"] Dec 03 13:12:42 crc kubenswrapper[4986]: I1203 13:12:42.384313 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-ddbdbd445-x6ccv" Dec 03 13:12:42 crc kubenswrapper[4986]: I1203 13:12:42.386418 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c6eca6fc-6cf0-4862-b2a8-caa0c4ec2c8a-apiservice-cert\") pod \"metallb-operator-controller-manager-5b7fcdf964-xx85j\" (UID: \"c6eca6fc-6cf0-4862-b2a8-caa0c4ec2c8a\") " pod="metallb-system/metallb-operator-controller-manager-5b7fcdf964-xx85j" Dec 03 13:12:42 crc kubenswrapper[4986]: I1203 13:12:42.387008 4986 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-rkclz" Dec 03 13:12:42 crc kubenswrapper[4986]: I1203 13:12:42.387180 4986 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 03 13:12:42 crc kubenswrapper[4986]: I1203 13:12:42.387305 4986 reflector.go:368] Caches populated for *v1.Secret from 
object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Dec 03 13:12:42 crc kubenswrapper[4986]: I1203 13:12:42.388646 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-ddbdbd445-x6ccv"] Dec 03 13:12:42 crc kubenswrapper[4986]: I1203 13:12:42.397207 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbzvd\" (UniqueName: \"kubernetes.io/projected/c6eca6fc-6cf0-4862-b2a8-caa0c4ec2c8a-kube-api-access-lbzvd\") pod \"metallb-operator-controller-manager-5b7fcdf964-xx85j\" (UID: \"c6eca6fc-6cf0-4862-b2a8-caa0c4ec2c8a\") " pod="metallb-system/metallb-operator-controller-manager-5b7fcdf964-xx85j" Dec 03 13:12:42 crc kubenswrapper[4986]: I1203 13:12:42.429620 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5b7fcdf964-xx85j" Dec 03 13:12:42 crc kubenswrapper[4986]: I1203 13:12:42.468367 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c786d2ef-19b1-4e12-a803-3cf1c459f6a7-apiservice-cert\") pod \"metallb-operator-webhook-server-ddbdbd445-x6ccv\" (UID: \"c786d2ef-19b1-4e12-a803-3cf1c459f6a7\") " pod="metallb-system/metallb-operator-webhook-server-ddbdbd445-x6ccv" Dec 03 13:12:42 crc kubenswrapper[4986]: I1203 13:12:42.468681 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q8dg\" (UniqueName: \"kubernetes.io/projected/c786d2ef-19b1-4e12-a803-3cf1c459f6a7-kube-api-access-5q8dg\") pod \"metallb-operator-webhook-server-ddbdbd445-x6ccv\" (UID: \"c786d2ef-19b1-4e12-a803-3cf1c459f6a7\") " pod="metallb-system/metallb-operator-webhook-server-ddbdbd445-x6ccv" Dec 03 13:12:42 crc kubenswrapper[4986]: I1203 13:12:42.468720 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c786d2ef-19b1-4e12-a803-3cf1c459f6a7-webhook-cert\") pod \"metallb-operator-webhook-server-ddbdbd445-x6ccv\" (UID: \"c786d2ef-19b1-4e12-a803-3cf1c459f6a7\") " pod="metallb-system/metallb-operator-webhook-server-ddbdbd445-x6ccv" Dec 03 13:12:42 crc kubenswrapper[4986]: I1203 13:12:42.583391 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c786d2ef-19b1-4e12-a803-3cf1c459f6a7-apiservice-cert\") pod \"metallb-operator-webhook-server-ddbdbd445-x6ccv\" (UID: \"c786d2ef-19b1-4e12-a803-3cf1c459f6a7\") " pod="metallb-system/metallb-operator-webhook-server-ddbdbd445-x6ccv" Dec 03 13:12:42 crc kubenswrapper[4986]: I1203 13:12:42.583489 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q8dg\" (UniqueName: \"kubernetes.io/projected/c786d2ef-19b1-4e12-a803-3cf1c459f6a7-kube-api-access-5q8dg\") pod \"metallb-operator-webhook-server-ddbdbd445-x6ccv\" (UID: \"c786d2ef-19b1-4e12-a803-3cf1c459f6a7\") " pod="metallb-system/metallb-operator-webhook-server-ddbdbd445-x6ccv" Dec 03 13:12:42 crc kubenswrapper[4986]: I1203 13:12:42.583574 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c786d2ef-19b1-4e12-a803-3cf1c459f6a7-webhook-cert\") pod \"metallb-operator-webhook-server-ddbdbd445-x6ccv\" (UID: \"c786d2ef-19b1-4e12-a803-3cf1c459f6a7\") " pod="metallb-system/metallb-operator-webhook-server-ddbdbd445-x6ccv" Dec 03 13:12:42 crc kubenswrapper[4986]: I1203 13:12:42.593554 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c786d2ef-19b1-4e12-a803-3cf1c459f6a7-webhook-cert\") pod \"metallb-operator-webhook-server-ddbdbd445-x6ccv\" (UID: \"c786d2ef-19b1-4e12-a803-3cf1c459f6a7\") " pod="metallb-system/metallb-operator-webhook-server-ddbdbd445-x6ccv" 
Dec 03 13:12:42 crc kubenswrapper[4986]: I1203 13:12:42.598550 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c786d2ef-19b1-4e12-a803-3cf1c459f6a7-apiservice-cert\") pod \"metallb-operator-webhook-server-ddbdbd445-x6ccv\" (UID: \"c786d2ef-19b1-4e12-a803-3cf1c459f6a7\") " pod="metallb-system/metallb-operator-webhook-server-ddbdbd445-x6ccv" Dec 03 13:12:42 crc kubenswrapper[4986]: I1203 13:12:42.616157 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q8dg\" (UniqueName: \"kubernetes.io/projected/c786d2ef-19b1-4e12-a803-3cf1c459f6a7-kube-api-access-5q8dg\") pod \"metallb-operator-webhook-server-ddbdbd445-x6ccv\" (UID: \"c786d2ef-19b1-4e12-a803-3cf1c459f6a7\") " pod="metallb-system/metallb-operator-webhook-server-ddbdbd445-x6ccv" Dec 03 13:12:42 crc kubenswrapper[4986]: I1203 13:12:42.735832 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-ddbdbd445-x6ccv" Dec 03 13:12:42 crc kubenswrapper[4986]: I1203 13:12:42.918442 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5b7fcdf964-xx85j"] Dec 03 13:12:43 crc kubenswrapper[4986]: I1203 13:12:43.058151 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5b7fcdf964-xx85j" event={"ID":"c6eca6fc-6cf0-4862-b2a8-caa0c4ec2c8a","Type":"ContainerStarted","Data":"88ab089b09490cdb57f03f43ed2e024b25d15cd6cf522867d2cd5fa7c8c8923b"} Dec 03 13:12:43 crc kubenswrapper[4986]: I1203 13:12:43.140557 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-ddbdbd445-x6ccv"] Dec 03 13:12:43 crc kubenswrapper[4986]: W1203 13:12:43.146468 4986 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc786d2ef_19b1_4e12_a803_3cf1c459f6a7.slice/crio-5ce88c7e124417d4ae8046c3209fd0ef812e5be216dc4c583b8d18efe9e89e50 WatchSource:0}: Error finding container 5ce88c7e124417d4ae8046c3209fd0ef812e5be216dc4c583b8d18efe9e89e50: Status 404 returned error can't find the container with id 5ce88c7e124417d4ae8046c3209fd0ef812e5be216dc4c583b8d18efe9e89e50 Dec 03 13:12:44 crc kubenswrapper[4986]: I1203 13:12:44.071058 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-ddbdbd445-x6ccv" event={"ID":"c786d2ef-19b1-4e12-a803-3cf1c459f6a7","Type":"ContainerStarted","Data":"5ce88c7e124417d4ae8046c3209fd0ef812e5be216dc4c583b8d18efe9e89e50"} Dec 03 13:12:49 crc kubenswrapper[4986]: I1203 13:12:49.102574 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5b7fcdf964-xx85j" event={"ID":"c6eca6fc-6cf0-4862-b2a8-caa0c4ec2c8a","Type":"ContainerStarted","Data":"a7c042a7de8dbbdb40fccfeed5c36f5cd36fd29ec433469e176dc76c1e54e3a6"} Dec 03 13:12:49 crc kubenswrapper[4986]: I1203 13:12:49.103117 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-5b7fcdf964-xx85j" Dec 03 13:12:50 crc kubenswrapper[4986]: I1203 13:12:50.112024 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-ddbdbd445-x6ccv" event={"ID":"c786d2ef-19b1-4e12-a803-3cf1c459f6a7","Type":"ContainerStarted","Data":"9cf10cf515277d1bd7fcbd9a34f170bcbcfcd248725a59f408e4ee68e2080ff6"} Dec 03 13:12:50 crc kubenswrapper[4986]: I1203 13:12:50.112245 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-ddbdbd445-x6ccv" Dec 03 13:12:50 crc kubenswrapper[4986]: I1203 13:12:50.135166 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="metallb-system/metallb-operator-controller-manager-5b7fcdf964-xx85j" podStartSLOduration=2.140796549 podStartE2EDuration="8.135142977s" podCreationTimestamp="2025-12-03 13:12:42 +0000 UTC" firstStartedPulling="2025-12-03 13:12:42.930776737 +0000 UTC m=+1022.397207928" lastFinishedPulling="2025-12-03 13:12:48.925123165 +0000 UTC m=+1028.391554356" observedRunningTime="2025-12-03 13:12:49.121763268 +0000 UTC m=+1028.588194459" watchObservedRunningTime="2025-12-03 13:12:50.135142977 +0000 UTC m=+1029.601574188" Dec 03 13:12:50 crc kubenswrapper[4986]: I1203 13:12:50.135623 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-ddbdbd445-x6ccv" podStartSLOduration=2.308364063 podStartE2EDuration="8.13561343s" podCreationTimestamp="2025-12-03 13:12:42 +0000 UTC" firstStartedPulling="2025-12-03 13:12:43.149480784 +0000 UTC m=+1022.615911985" lastFinishedPulling="2025-12-03 13:12:48.976730161 +0000 UTC m=+1028.443161352" observedRunningTime="2025-12-03 13:12:50.132984489 +0000 UTC m=+1029.599415690" watchObservedRunningTime="2025-12-03 13:12:50.13561343 +0000 UTC m=+1029.602044651" Dec 03 13:13:02 crc kubenswrapper[4986]: I1203 13:13:02.744705 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-ddbdbd445-x6ccv" Dec 03 13:13:22 crc kubenswrapper[4986]: I1203 13:13:22.432871 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5b7fcdf964-xx85j" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.227812 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-zv59f"] Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.230551 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-zv59f" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.234474 4986 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.236699 4986 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-bhbz9" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.238471 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-bvs5x"] Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.242728 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.245415 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-bvs5x" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.251898 4986 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.268563 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-bvs5x"] Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.316069 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-d7x7x"] Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.317037 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-d7x7x" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.319588 4986 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.319741 4986 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.319883 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.319996 4986 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-cwr2s" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.330952 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-f8648f98b-qshgg"] Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.334346 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-f8648f98b-qshgg" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.336764 4986 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.350668 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-qshgg"] Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.420546 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/2db2fcc5-1b0c-48df-a5e9-321d28b4efb3-frr-conf\") pod \"frr-k8s-zv59f\" (UID: \"2db2fcc5-1b0c-48df-a5e9-321d28b4efb3\") " pod="metallb-system/frr-k8s-zv59f" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.420617 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/2db2fcc5-1b0c-48df-a5e9-321d28b4efb3-frr-startup\") pod \"frr-k8s-zv59f\" (UID: \"2db2fcc5-1b0c-48df-a5e9-321d28b4efb3\") " pod="metallb-system/frr-k8s-zv59f" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.420665 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/2db2fcc5-1b0c-48df-a5e9-321d28b4efb3-reloader\") pod \"frr-k8s-zv59f\" (UID: \"2db2fcc5-1b0c-48df-a5e9-321d28b4efb3\") " pod="metallb-system/frr-k8s-zv59f" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.420704 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/56c900c4-e165-4ada-a70f-3ab4f267441d-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-bvs5x\" (UID: \"56c900c4-e165-4ada-a70f-3ab4f267441d\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-bvs5x" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 
13:13:23.420729 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhwmb\" (UniqueName: \"kubernetes.io/projected/559a743d-b60c-4a89-b256-4842e829043c-kube-api-access-xhwmb\") pod \"speaker-d7x7x\" (UID: \"559a743d-b60c-4a89-b256-4842e829043c\") " pod="metallb-system/speaker-d7x7x" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.420753 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/2db2fcc5-1b0c-48df-a5e9-321d28b4efb3-metrics\") pod \"frr-k8s-zv59f\" (UID: \"2db2fcc5-1b0c-48df-a5e9-321d28b4efb3\") " pod="metallb-system/frr-k8s-zv59f" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.420770 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2db2fcc5-1b0c-48df-a5e9-321d28b4efb3-metrics-certs\") pod \"frr-k8s-zv59f\" (UID: \"2db2fcc5-1b0c-48df-a5e9-321d28b4efb3\") " pod="metallb-system/frr-k8s-zv59f" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.420803 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6t2c\" (UniqueName: \"kubernetes.io/projected/56c900c4-e165-4ada-a70f-3ab4f267441d-kube-api-access-k6t2c\") pod \"frr-k8s-webhook-server-7fcb986d4-bvs5x\" (UID: \"56c900c4-e165-4ada-a70f-3ab4f267441d\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-bvs5x" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.420856 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/559a743d-b60c-4a89-b256-4842e829043c-metrics-certs\") pod \"speaker-d7x7x\" (UID: \"559a743d-b60c-4a89-b256-4842e829043c\") " pod="metallb-system/speaker-d7x7x" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.420873 4986 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/559a743d-b60c-4a89-b256-4842e829043c-memberlist\") pod \"speaker-d7x7x\" (UID: \"559a743d-b60c-4a89-b256-4842e829043c\") " pod="metallb-system/speaker-d7x7x" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.420900 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5rnb\" (UniqueName: \"kubernetes.io/projected/2db2fcc5-1b0c-48df-a5e9-321d28b4efb3-kube-api-access-l5rnb\") pod \"frr-k8s-zv59f\" (UID: \"2db2fcc5-1b0c-48df-a5e9-321d28b4efb3\") " pod="metallb-system/frr-k8s-zv59f" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.420920 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/2db2fcc5-1b0c-48df-a5e9-321d28b4efb3-frr-sockets\") pod \"frr-k8s-zv59f\" (UID: \"2db2fcc5-1b0c-48df-a5e9-321d28b4efb3\") " pod="metallb-system/frr-k8s-zv59f" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.420944 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/559a743d-b60c-4a89-b256-4842e829043c-metallb-excludel2\") pod \"speaker-d7x7x\" (UID: \"559a743d-b60c-4a89-b256-4842e829043c\") " pod="metallb-system/speaker-d7x7x" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.522306 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/2db2fcc5-1b0c-48df-a5e9-321d28b4efb3-reloader\") pod \"frr-k8s-zv59f\" (UID: \"2db2fcc5-1b0c-48df-a5e9-321d28b4efb3\") " pod="metallb-system/frr-k8s-zv59f" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.522399 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/56c900c4-e165-4ada-a70f-3ab4f267441d-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-bvs5x\" (UID: \"56c900c4-e165-4ada-a70f-3ab4f267441d\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-bvs5x" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.522430 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0e60a343-90aa-4c8b-a745-020b111c0b76-cert\") pod \"controller-f8648f98b-qshgg\" (UID: \"0e60a343-90aa-4c8b-a745-020b111c0b76\") " pod="metallb-system/controller-f8648f98b-qshgg" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.522462 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhwmb\" (UniqueName: \"kubernetes.io/projected/559a743d-b60c-4a89-b256-4842e829043c-kube-api-access-xhwmb\") pod \"speaker-d7x7x\" (UID: \"559a743d-b60c-4a89-b256-4842e829043c\") " pod="metallb-system/speaker-d7x7x" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.522511 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/2db2fcc5-1b0c-48df-a5e9-321d28b4efb3-metrics\") pod \"frr-k8s-zv59f\" (UID: \"2db2fcc5-1b0c-48df-a5e9-321d28b4efb3\") " pod="metallb-system/frr-k8s-zv59f" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.522550 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2db2fcc5-1b0c-48df-a5e9-321d28b4efb3-metrics-certs\") pod \"frr-k8s-zv59f\" (UID: \"2db2fcc5-1b0c-48df-a5e9-321d28b4efb3\") " pod="metallb-system/frr-k8s-zv59f" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.522579 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slhzp\" (UniqueName: \"kubernetes.io/projected/0e60a343-90aa-4c8b-a745-020b111c0b76-kube-api-access-slhzp\") pod 
\"controller-f8648f98b-qshgg\" (UID: \"0e60a343-90aa-4c8b-a745-020b111c0b76\") " pod="metallb-system/controller-f8648f98b-qshgg" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.522613 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6t2c\" (UniqueName: \"kubernetes.io/projected/56c900c4-e165-4ada-a70f-3ab4f267441d-kube-api-access-k6t2c\") pod \"frr-k8s-webhook-server-7fcb986d4-bvs5x\" (UID: \"56c900c4-e165-4ada-a70f-3ab4f267441d\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-bvs5x" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.522654 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/559a743d-b60c-4a89-b256-4842e829043c-metrics-certs\") pod \"speaker-d7x7x\" (UID: \"559a743d-b60c-4a89-b256-4842e829043c\") " pod="metallb-system/speaker-d7x7x" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.522672 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/559a743d-b60c-4a89-b256-4842e829043c-memberlist\") pod \"speaker-d7x7x\" (UID: \"559a743d-b60c-4a89-b256-4842e829043c\") " pod="metallb-system/speaker-d7x7x" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.522697 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5rnb\" (UniqueName: \"kubernetes.io/projected/2db2fcc5-1b0c-48df-a5e9-321d28b4efb3-kube-api-access-l5rnb\") pod \"frr-k8s-zv59f\" (UID: \"2db2fcc5-1b0c-48df-a5e9-321d28b4efb3\") " pod="metallb-system/frr-k8s-zv59f" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.522716 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/2db2fcc5-1b0c-48df-a5e9-321d28b4efb3-frr-sockets\") pod \"frr-k8s-zv59f\" (UID: \"2db2fcc5-1b0c-48df-a5e9-321d28b4efb3\") " 
pod="metallb-system/frr-k8s-zv59f" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.522742 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/559a743d-b60c-4a89-b256-4842e829043c-metallb-excludel2\") pod \"speaker-d7x7x\" (UID: \"559a743d-b60c-4a89-b256-4842e829043c\") " pod="metallb-system/speaker-d7x7x" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.522774 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/2db2fcc5-1b0c-48df-a5e9-321d28b4efb3-frr-conf\") pod \"frr-k8s-zv59f\" (UID: \"2db2fcc5-1b0c-48df-a5e9-321d28b4efb3\") " pod="metallb-system/frr-k8s-zv59f" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.522800 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/2db2fcc5-1b0c-48df-a5e9-321d28b4efb3-frr-startup\") pod \"frr-k8s-zv59f\" (UID: \"2db2fcc5-1b0c-48df-a5e9-321d28b4efb3\") " pod="metallb-system/frr-k8s-zv59f" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.522822 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0e60a343-90aa-4c8b-a745-020b111c0b76-metrics-certs\") pod \"controller-f8648f98b-qshgg\" (UID: \"0e60a343-90aa-4c8b-a745-020b111c0b76\") " pod="metallb-system/controller-f8648f98b-qshgg" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.522892 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/2db2fcc5-1b0c-48df-a5e9-321d28b4efb3-reloader\") pod \"frr-k8s-zv59f\" (UID: \"2db2fcc5-1b0c-48df-a5e9-321d28b4efb3\") " pod="metallb-system/frr-k8s-zv59f" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.522918 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"metrics\" (UniqueName: \"kubernetes.io/empty-dir/2db2fcc5-1b0c-48df-a5e9-321d28b4efb3-metrics\") pod \"frr-k8s-zv59f\" (UID: \"2db2fcc5-1b0c-48df-a5e9-321d28b4efb3\") " pod="metallb-system/frr-k8s-zv59f" Dec 03 13:13:23 crc kubenswrapper[4986]: E1203 13:13:23.523134 4986 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 03 13:13:23 crc kubenswrapper[4986]: E1203 13:13:23.523204 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/559a743d-b60c-4a89-b256-4842e829043c-memberlist podName:559a743d-b60c-4a89-b256-4842e829043c nodeName:}" failed. No retries permitted until 2025-12-03 13:13:24.023183651 +0000 UTC m=+1063.489614932 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/559a743d-b60c-4a89-b256-4842e829043c-memberlist") pod "speaker-d7x7x" (UID: "559a743d-b60c-4a89-b256-4842e829043c") : secret "metallb-memberlist" not found Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.523359 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/2db2fcc5-1b0c-48df-a5e9-321d28b4efb3-frr-sockets\") pod \"frr-k8s-zv59f\" (UID: \"2db2fcc5-1b0c-48df-a5e9-321d28b4efb3\") " pod="metallb-system/frr-k8s-zv59f" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.523836 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/2db2fcc5-1b0c-48df-a5e9-321d28b4efb3-frr-conf\") pod \"frr-k8s-zv59f\" (UID: \"2db2fcc5-1b0c-48df-a5e9-321d28b4efb3\") " pod="metallb-system/frr-k8s-zv59f" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.524029 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/559a743d-b60c-4a89-b256-4842e829043c-metallb-excludel2\") pod \"speaker-d7x7x\" (UID: 
\"559a743d-b60c-4a89-b256-4842e829043c\") " pod="metallb-system/speaker-d7x7x" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.524141 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/2db2fcc5-1b0c-48df-a5e9-321d28b4efb3-frr-startup\") pod \"frr-k8s-zv59f\" (UID: \"2db2fcc5-1b0c-48df-a5e9-321d28b4efb3\") " pod="metallb-system/frr-k8s-zv59f" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.528832 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/56c900c4-e165-4ada-a70f-3ab4f267441d-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-bvs5x\" (UID: \"56c900c4-e165-4ada-a70f-3ab4f267441d\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-bvs5x" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.529560 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/559a743d-b60c-4a89-b256-4842e829043c-metrics-certs\") pod \"speaker-d7x7x\" (UID: \"559a743d-b60c-4a89-b256-4842e829043c\") " pod="metallb-system/speaker-d7x7x" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.530092 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2db2fcc5-1b0c-48df-a5e9-321d28b4efb3-metrics-certs\") pod \"frr-k8s-zv59f\" (UID: \"2db2fcc5-1b0c-48df-a5e9-321d28b4efb3\") " pod="metallb-system/frr-k8s-zv59f" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.542696 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhwmb\" (UniqueName: \"kubernetes.io/projected/559a743d-b60c-4a89-b256-4842e829043c-kube-api-access-xhwmb\") pod \"speaker-d7x7x\" (UID: \"559a743d-b60c-4a89-b256-4842e829043c\") " pod="metallb-system/speaker-d7x7x" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.546088 4986 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-l5rnb\" (UniqueName: \"kubernetes.io/projected/2db2fcc5-1b0c-48df-a5e9-321d28b4efb3-kube-api-access-l5rnb\") pod \"frr-k8s-zv59f\" (UID: \"2db2fcc5-1b0c-48df-a5e9-321d28b4efb3\") " pod="metallb-system/frr-k8s-zv59f" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.549924 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6t2c\" (UniqueName: \"kubernetes.io/projected/56c900c4-e165-4ada-a70f-3ab4f267441d-kube-api-access-k6t2c\") pod \"frr-k8s-webhook-server-7fcb986d4-bvs5x\" (UID: \"56c900c4-e165-4ada-a70f-3ab4f267441d\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-bvs5x" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.561749 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-zv59f" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.594005 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-bvs5x" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.624475 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0e60a343-90aa-4c8b-a745-020b111c0b76-metrics-certs\") pod \"controller-f8648f98b-qshgg\" (UID: \"0e60a343-90aa-4c8b-a745-020b111c0b76\") " pod="metallb-system/controller-f8648f98b-qshgg" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.624536 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0e60a343-90aa-4c8b-a745-020b111c0b76-cert\") pod \"controller-f8648f98b-qshgg\" (UID: \"0e60a343-90aa-4c8b-a745-020b111c0b76\") " pod="metallb-system/controller-f8648f98b-qshgg" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.624568 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slhzp\" (UniqueName: 
\"kubernetes.io/projected/0e60a343-90aa-4c8b-a745-020b111c0b76-kube-api-access-slhzp\") pod \"controller-f8648f98b-qshgg\" (UID: \"0e60a343-90aa-4c8b-a745-020b111c0b76\") " pod="metallb-system/controller-f8648f98b-qshgg" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.630113 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0e60a343-90aa-4c8b-a745-020b111c0b76-metrics-certs\") pod \"controller-f8648f98b-qshgg\" (UID: \"0e60a343-90aa-4c8b-a745-020b111c0b76\") " pod="metallb-system/controller-f8648f98b-qshgg" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.633546 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0e60a343-90aa-4c8b-a745-020b111c0b76-cert\") pod \"controller-f8648f98b-qshgg\" (UID: \"0e60a343-90aa-4c8b-a745-020b111c0b76\") " pod="metallb-system/controller-f8648f98b-qshgg" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.639844 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slhzp\" (UniqueName: \"kubernetes.io/projected/0e60a343-90aa-4c8b-a745-020b111c0b76-kube-api-access-slhzp\") pod \"controller-f8648f98b-qshgg\" (UID: \"0e60a343-90aa-4c8b-a745-020b111c0b76\") " pod="metallb-system/controller-f8648f98b-qshgg" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.651975 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-f8648f98b-qshgg" Dec 03 13:13:23 crc kubenswrapper[4986]: I1203 13:13:23.865184 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-qshgg"] Dec 03 13:13:23 crc kubenswrapper[4986]: W1203 13:13:23.866475 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e60a343_90aa_4c8b_a745_020b111c0b76.slice/crio-88aec8f056a843705fa3a2b8e94e67c602c3d577e095ac0404451044110f4249 WatchSource:0}: Error finding container 88aec8f056a843705fa3a2b8e94e67c602c3d577e095ac0404451044110f4249: Status 404 returned error can't find the container with id 88aec8f056a843705fa3a2b8e94e67c602c3d577e095ac0404451044110f4249 Dec 03 13:13:24 crc kubenswrapper[4986]: I1203 13:13:24.008449 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-bvs5x"] Dec 03 13:13:24 crc kubenswrapper[4986]: W1203 13:13:24.014870 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56c900c4_e165_4ada_a70f_3ab4f267441d.slice/crio-b62014228825c07150c5ea122392f49ce1407df0eb859ec757ce5667069bdf70 WatchSource:0}: Error finding container b62014228825c07150c5ea122392f49ce1407df0eb859ec757ce5667069bdf70: Status 404 returned error can't find the container with id b62014228825c07150c5ea122392f49ce1407df0eb859ec757ce5667069bdf70 Dec 03 13:13:24 crc kubenswrapper[4986]: I1203 13:13:24.031233 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/559a743d-b60c-4a89-b256-4842e829043c-memberlist\") pod \"speaker-d7x7x\" (UID: \"559a743d-b60c-4a89-b256-4842e829043c\") " pod="metallb-system/speaker-d7x7x" Dec 03 13:13:24 crc kubenswrapper[4986]: E1203 13:13:24.031475 4986 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret 
"metallb-memberlist" not found Dec 03 13:13:24 crc kubenswrapper[4986]: E1203 13:13:24.031526 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/559a743d-b60c-4a89-b256-4842e829043c-memberlist podName:559a743d-b60c-4a89-b256-4842e829043c nodeName:}" failed. No retries permitted until 2025-12-03 13:13:25.031510429 +0000 UTC m=+1064.497941620 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/559a743d-b60c-4a89-b256-4842e829043c-memberlist") pod "speaker-d7x7x" (UID: "559a743d-b60c-4a89-b256-4842e829043c") : secret "metallb-memberlist" not found Dec 03 13:13:24 crc kubenswrapper[4986]: I1203 13:13:24.309635 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-bvs5x" event={"ID":"56c900c4-e165-4ada-a70f-3ab4f267441d","Type":"ContainerStarted","Data":"b62014228825c07150c5ea122392f49ce1407df0eb859ec757ce5667069bdf70"} Dec 03 13:13:24 crc kubenswrapper[4986]: I1203 13:13:24.311187 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-qshgg" event={"ID":"0e60a343-90aa-4c8b-a745-020b111c0b76","Type":"ContainerStarted","Data":"f6b4c56280d81b4ec4b63f8382d3bcff50ef1d26e7e33d89d6863e6a677267ed"} Dec 03 13:13:24 crc kubenswrapper[4986]: I1203 13:13:24.311211 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-qshgg" event={"ID":"0e60a343-90aa-4c8b-a745-020b111c0b76","Type":"ContainerStarted","Data":"1a02d5b6abb21893f1101c0a289c30f9512ce248f50b08f90c19478e9bdf161e"} Dec 03 13:13:24 crc kubenswrapper[4986]: I1203 13:13:24.311222 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-qshgg" event={"ID":"0e60a343-90aa-4c8b-a745-020b111c0b76","Type":"ContainerStarted","Data":"88aec8f056a843705fa3a2b8e94e67c602c3d577e095ac0404451044110f4249"} Dec 03 13:13:24 crc kubenswrapper[4986]: I1203 13:13:24.311335 
4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-f8648f98b-qshgg" Dec 03 13:13:24 crc kubenswrapper[4986]: I1203 13:13:24.312148 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zv59f" event={"ID":"2db2fcc5-1b0c-48df-a5e9-321d28b4efb3","Type":"ContainerStarted","Data":"a6bfd4e4779e7f744d863f364ca214bfcbd9973bb823dbc0c926beb2fbb537f3"} Dec 03 13:13:24 crc kubenswrapper[4986]: I1203 13:13:24.328616 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-f8648f98b-qshgg" podStartSLOduration=1.328590552 podStartE2EDuration="1.328590552s" podCreationTimestamp="2025-12-03 13:13:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:13:24.324620855 +0000 UTC m=+1063.791052046" watchObservedRunningTime="2025-12-03 13:13:24.328590552 +0000 UTC m=+1063.795021743" Dec 03 13:13:25 crc kubenswrapper[4986]: I1203 13:13:25.043756 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/559a743d-b60c-4a89-b256-4842e829043c-memberlist\") pod \"speaker-d7x7x\" (UID: \"559a743d-b60c-4a89-b256-4842e829043c\") " pod="metallb-system/speaker-d7x7x" Dec 03 13:13:25 crc kubenswrapper[4986]: I1203 13:13:25.052724 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/559a743d-b60c-4a89-b256-4842e829043c-memberlist\") pod \"speaker-d7x7x\" (UID: \"559a743d-b60c-4a89-b256-4842e829043c\") " pod="metallb-system/speaker-d7x7x" Dec 03 13:13:25 crc kubenswrapper[4986]: I1203 13:13:25.137270 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-d7x7x" Dec 03 13:13:25 crc kubenswrapper[4986]: W1203 13:13:25.189903 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod559a743d_b60c_4a89_b256_4842e829043c.slice/crio-4c7c7a457d03605945dc30e132fc9965c546fff7d5b5a404e54c89f907a02020 WatchSource:0}: Error finding container 4c7c7a457d03605945dc30e132fc9965c546fff7d5b5a404e54c89f907a02020: Status 404 returned error can't find the container with id 4c7c7a457d03605945dc30e132fc9965c546fff7d5b5a404e54c89f907a02020 Dec 03 13:13:25 crc kubenswrapper[4986]: I1203 13:13:25.325836 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-d7x7x" event={"ID":"559a743d-b60c-4a89-b256-4842e829043c","Type":"ContainerStarted","Data":"4c7c7a457d03605945dc30e132fc9965c546fff7d5b5a404e54c89f907a02020"} Dec 03 13:13:26 crc kubenswrapper[4986]: I1203 13:13:26.333611 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-d7x7x" event={"ID":"559a743d-b60c-4a89-b256-4842e829043c","Type":"ContainerStarted","Data":"f411b2792ac82bce603ecd5735ccf81e8a86c571def25b720137884f997a8fca"} Dec 03 13:13:26 crc kubenswrapper[4986]: I1203 13:13:26.334029 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-d7x7x" Dec 03 13:13:26 crc kubenswrapper[4986]: I1203 13:13:26.334046 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-d7x7x" event={"ID":"559a743d-b60c-4a89-b256-4842e829043c","Type":"ContainerStarted","Data":"7a534e4c96d17891493cf8c52fe0dc4b9e492fc94f474afd78e203434072410b"} Dec 03 13:13:26 crc kubenswrapper[4986]: I1203 13:13:26.354934 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-d7x7x" podStartSLOduration=3.354909648 podStartE2EDuration="3.354909648s" podCreationTimestamp="2025-12-03 13:13:23 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:13:26.350276034 +0000 UTC m=+1065.816707235" watchObservedRunningTime="2025-12-03 13:13:26.354909648 +0000 UTC m=+1065.821340839" Dec 03 13:13:31 crc kubenswrapper[4986]: I1203 13:13:31.370392 4986 generic.go:334] "Generic (PLEG): container finished" podID="2db2fcc5-1b0c-48df-a5e9-321d28b4efb3" containerID="4d46cb8924a988e45d0c5984a5e3bf13e8db0259eb87cb0841fc38c6a41da70f" exitCode=0 Dec 03 13:13:31 crc kubenswrapper[4986]: I1203 13:13:31.370657 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zv59f" event={"ID":"2db2fcc5-1b0c-48df-a5e9-321d28b4efb3","Type":"ContainerDied","Data":"4d46cb8924a988e45d0c5984a5e3bf13e8db0259eb87cb0841fc38c6a41da70f"} Dec 03 13:13:31 crc kubenswrapper[4986]: I1203 13:13:31.375302 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-bvs5x" event={"ID":"56c900c4-e165-4ada-a70f-3ab4f267441d","Type":"ContainerStarted","Data":"e1312138296c0ecbdd422ec7912038b2048bc5fba359ab17f3e8820b68f1c36c"} Dec 03 13:13:31 crc kubenswrapper[4986]: I1203 13:13:31.376044 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-bvs5x" Dec 03 13:13:31 crc kubenswrapper[4986]: I1203 13:13:31.417871 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-bvs5x" podStartSLOduration=1.370258471 podStartE2EDuration="8.417848219s" podCreationTimestamp="2025-12-03 13:13:23 +0000 UTC" firstStartedPulling="2025-12-03 13:13:24.016382913 +0000 UTC m=+1063.482814104" lastFinishedPulling="2025-12-03 13:13:31.063972661 +0000 UTC m=+1070.530403852" observedRunningTime="2025-12-03 13:13:31.413703358 +0000 UTC m=+1070.880134579" watchObservedRunningTime="2025-12-03 13:13:31.417848219 +0000 UTC m=+1070.884279440" Dec 03 
13:13:32 crc kubenswrapper[4986]: I1203 13:13:32.383560 4986 generic.go:334] "Generic (PLEG): container finished" podID="2db2fcc5-1b0c-48df-a5e9-321d28b4efb3" containerID="684671bc6556c6ce399191c2bcb3c6bd731c54690d6accab70b2f012807aee38" exitCode=0 Dec 03 13:13:32 crc kubenswrapper[4986]: I1203 13:13:32.383646 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zv59f" event={"ID":"2db2fcc5-1b0c-48df-a5e9-321d28b4efb3","Type":"ContainerDied","Data":"684671bc6556c6ce399191c2bcb3c6bd731c54690d6accab70b2f012807aee38"} Dec 03 13:13:33 crc kubenswrapper[4986]: I1203 13:13:33.392924 4986 generic.go:334] "Generic (PLEG): container finished" podID="2db2fcc5-1b0c-48df-a5e9-321d28b4efb3" containerID="514887250a9e6c1e0c5b5f07cec484995f4db2d5946be1840f189624db5d55fd" exitCode=0 Dec 03 13:13:33 crc kubenswrapper[4986]: I1203 13:13:33.392979 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zv59f" event={"ID":"2db2fcc5-1b0c-48df-a5e9-321d28b4efb3","Type":"ContainerDied","Data":"514887250a9e6c1e0c5b5f07cec484995f4db2d5946be1840f189624db5d55fd"} Dec 03 13:13:33 crc kubenswrapper[4986]: I1203 13:13:33.657669 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-f8648f98b-qshgg" Dec 03 13:13:34 crc kubenswrapper[4986]: I1203 13:13:34.404156 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zv59f" event={"ID":"2db2fcc5-1b0c-48df-a5e9-321d28b4efb3","Type":"ContainerStarted","Data":"c66806425be962c4dafb819f63cd432d6db3204e0f50d977f2cb8d5b9730417b"} Dec 03 13:13:34 crc kubenswrapper[4986]: I1203 13:13:34.404435 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zv59f" event={"ID":"2db2fcc5-1b0c-48df-a5e9-321d28b4efb3","Type":"ContainerStarted","Data":"6e0e0fb16c1bfc5e402eed95f77b0d088a34b8eecdfb840f2123c918e511e19e"} Dec 03 13:13:34 crc kubenswrapper[4986]: I1203 13:13:34.404449 4986 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="metallb-system/frr-k8s-zv59f" event={"ID":"2db2fcc5-1b0c-48df-a5e9-321d28b4efb3","Type":"ContainerStarted","Data":"eff6e8636076f91e02f018df875148845f81ac9660fbbf95e056a745408505cb"} Dec 03 13:13:34 crc kubenswrapper[4986]: I1203 13:13:34.404477 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zv59f" event={"ID":"2db2fcc5-1b0c-48df-a5e9-321d28b4efb3","Type":"ContainerStarted","Data":"0cff9930b8f323b8c167dc859a1a6d38477862907586d8e3d7a753b6a8149d9e"} Dec 03 13:13:35 crc kubenswrapper[4986]: I1203 13:13:35.143447 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-d7x7x" Dec 03 13:13:35 crc kubenswrapper[4986]: I1203 13:13:35.417596 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zv59f" event={"ID":"2db2fcc5-1b0c-48df-a5e9-321d28b4efb3","Type":"ContainerStarted","Data":"9b03024bb47d2f3f69b3596da658156124ee2d9542b08422ff5b83017129e992"} Dec 03 13:13:35 crc kubenswrapper[4986]: I1203 13:13:35.417920 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zv59f" event={"ID":"2db2fcc5-1b0c-48df-a5e9-321d28b4efb3","Type":"ContainerStarted","Data":"6f2e92d9fb4e36f7c2fa7f5f76526c9b4cc922dd37496274e497e868aa97c532"} Dec 03 13:13:35 crc kubenswrapper[4986]: I1203 13:13:35.419041 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-zv59f" Dec 03 13:13:35 crc kubenswrapper[4986]: I1203 13:13:35.449920 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-zv59f" podStartSLOduration=5.246812074 podStartE2EDuration="12.449892469s" podCreationTimestamp="2025-12-03 13:13:23 +0000 UTC" firstStartedPulling="2025-12-03 13:13:23.846108048 +0000 UTC m=+1063.312539239" lastFinishedPulling="2025-12-03 13:13:31.049188443 +0000 UTC m=+1070.515619634" observedRunningTime="2025-12-03 13:13:35.442636325 +0000 UTC m=+1074.909067526" 
watchObservedRunningTime="2025-12-03 13:13:35.449892469 +0000 UTC m=+1074.916323700" Dec 03 13:13:38 crc kubenswrapper[4986]: I1203 13:13:38.201779 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-npxg7"] Dec 03 13:13:38 crc kubenswrapper[4986]: I1203 13:13:38.203022 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-npxg7" Dec 03 13:13:38 crc kubenswrapper[4986]: I1203 13:13:38.205732 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-qms6q" Dec 03 13:13:38 crc kubenswrapper[4986]: I1203 13:13:38.206349 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Dec 03 13:13:38 crc kubenswrapper[4986]: I1203 13:13:38.217180 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Dec 03 13:13:38 crc kubenswrapper[4986]: I1203 13:13:38.231873 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-npxg7"] Dec 03 13:13:38 crc kubenswrapper[4986]: I1203 13:13:38.324895 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdsmm\" (UniqueName: \"kubernetes.io/projected/5b0e7546-4217-41b6-acee-6d45803718dc-kube-api-access-gdsmm\") pod \"openstack-operator-index-npxg7\" (UID: \"5b0e7546-4217-41b6-acee-6d45803718dc\") " pod="openstack-operators/openstack-operator-index-npxg7" Dec 03 13:13:38 crc kubenswrapper[4986]: I1203 13:13:38.426667 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdsmm\" (UniqueName: \"kubernetes.io/projected/5b0e7546-4217-41b6-acee-6d45803718dc-kube-api-access-gdsmm\") pod \"openstack-operator-index-npxg7\" (UID: \"5b0e7546-4217-41b6-acee-6d45803718dc\") " 
pod="openstack-operators/openstack-operator-index-npxg7" Dec 03 13:13:38 crc kubenswrapper[4986]: I1203 13:13:38.444378 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdsmm\" (UniqueName: \"kubernetes.io/projected/5b0e7546-4217-41b6-acee-6d45803718dc-kube-api-access-gdsmm\") pod \"openstack-operator-index-npxg7\" (UID: \"5b0e7546-4217-41b6-acee-6d45803718dc\") " pod="openstack-operators/openstack-operator-index-npxg7" Dec 03 13:13:38 crc kubenswrapper[4986]: I1203 13:13:38.532680 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-npxg7" Dec 03 13:13:38 crc kubenswrapper[4986]: I1203 13:13:38.563012 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-zv59f" Dec 03 13:13:38 crc kubenswrapper[4986]: I1203 13:13:38.661791 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-zv59f" Dec 03 13:13:39 crc kubenswrapper[4986]: I1203 13:13:39.013392 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-npxg7"] Dec 03 13:13:39 crc kubenswrapper[4986]: I1203 13:13:39.451147 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-npxg7" event={"ID":"5b0e7546-4217-41b6-acee-6d45803718dc","Type":"ContainerStarted","Data":"bb5155bb18288c319e6d98c3e2981cdec55ec3c52cc00e7201930ef5394d2a8b"} Dec 03 13:13:41 crc kubenswrapper[4986]: I1203 13:13:41.581870 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-npxg7"] Dec 03 13:13:42 crc kubenswrapper[4986]: I1203 13:13:42.198121 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-rxgnt"] Dec 03 13:13:42 crc kubenswrapper[4986]: I1203 13:13:42.198901 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-rxgnt" Dec 03 13:13:42 crc kubenswrapper[4986]: I1203 13:13:42.212626 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-rxgnt"] Dec 03 13:13:42 crc kubenswrapper[4986]: I1203 13:13:42.388245 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rwrr\" (UniqueName: \"kubernetes.io/projected/dbe624e6-2210-4ead-ac45-77704177e0a4-kube-api-access-2rwrr\") pod \"openstack-operator-index-rxgnt\" (UID: \"dbe624e6-2210-4ead-ac45-77704177e0a4\") " pod="openstack-operators/openstack-operator-index-rxgnt" Dec 03 13:13:42 crc kubenswrapper[4986]: I1203 13:13:42.489063 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rwrr\" (UniqueName: \"kubernetes.io/projected/dbe624e6-2210-4ead-ac45-77704177e0a4-kube-api-access-2rwrr\") pod \"openstack-operator-index-rxgnt\" (UID: \"dbe624e6-2210-4ead-ac45-77704177e0a4\") " pod="openstack-operators/openstack-operator-index-rxgnt" Dec 03 13:13:42 crc kubenswrapper[4986]: I1203 13:13:42.509935 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rwrr\" (UniqueName: \"kubernetes.io/projected/dbe624e6-2210-4ead-ac45-77704177e0a4-kube-api-access-2rwrr\") pod \"openstack-operator-index-rxgnt\" (UID: \"dbe624e6-2210-4ead-ac45-77704177e0a4\") " pod="openstack-operators/openstack-operator-index-rxgnt" Dec 03 13:13:42 crc kubenswrapper[4986]: I1203 13:13:42.542636 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-rxgnt" Dec 03 13:13:43 crc kubenswrapper[4986]: I1203 13:13:43.154325 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-rxgnt"] Dec 03 13:13:43 crc kubenswrapper[4986]: I1203 13:13:43.477628 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rxgnt" event={"ID":"dbe624e6-2210-4ead-ac45-77704177e0a4","Type":"ContainerStarted","Data":"2c244416822498862a3d4f3069b47a2b417676d03a3e6f14161f1c6e6d87a25b"} Dec 03 13:13:43 crc kubenswrapper[4986]: I1203 13:13:43.477675 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rxgnt" event={"ID":"dbe624e6-2210-4ead-ac45-77704177e0a4","Type":"ContainerStarted","Data":"f0699aa9a92974c823cee22dd7c5c6a5015c6f7fc11ec2348b9a2636aa8bc2f7"} Dec 03 13:13:43 crc kubenswrapper[4986]: I1203 13:13:43.479318 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-npxg7" event={"ID":"5b0e7546-4217-41b6-acee-6d45803718dc","Type":"ContainerStarted","Data":"10e6450836a42ba11b9d9b9a42f0d5d58253ecda41a1cdb1de41c7a5911bb661"} Dec 03 13:13:43 crc kubenswrapper[4986]: I1203 13:13:43.479423 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-npxg7" podUID="5b0e7546-4217-41b6-acee-6d45803718dc" containerName="registry-server" containerID="cri-o://10e6450836a42ba11b9d9b9a42f0d5d58253ecda41a1cdb1de41c7a5911bb661" gracePeriod=2 Dec 03 13:13:43 crc kubenswrapper[4986]: I1203 13:13:43.496148 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-rxgnt" podStartSLOduration=1.436762597 podStartE2EDuration="1.496121022s" podCreationTimestamp="2025-12-03 13:13:42 +0000 UTC" firstStartedPulling="2025-12-03 13:13:43.163731381 +0000 UTC 
m=+1082.630162622" lastFinishedPulling="2025-12-03 13:13:43.223089856 +0000 UTC m=+1082.689521047" observedRunningTime="2025-12-03 13:13:43.493478531 +0000 UTC m=+1082.959909742" watchObservedRunningTime="2025-12-03 13:13:43.496121022 +0000 UTC m=+1082.962552223" Dec 03 13:13:43 crc kubenswrapper[4986]: I1203 13:13:43.519120 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-npxg7" podStartSLOduration=1.7758634880000002 podStartE2EDuration="5.519099129s" podCreationTimestamp="2025-12-03 13:13:38 +0000 UTC" firstStartedPulling="2025-12-03 13:13:39.030743717 +0000 UTC m=+1078.497174908" lastFinishedPulling="2025-12-03 13:13:42.773979358 +0000 UTC m=+1082.240410549" observedRunningTime="2025-12-03 13:13:43.514211348 +0000 UTC m=+1082.980642549" watchObservedRunningTime="2025-12-03 13:13:43.519099129 +0000 UTC m=+1082.985530320" Dec 03 13:13:43 crc kubenswrapper[4986]: I1203 13:13:43.565774 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-zv59f" Dec 03 13:13:43 crc kubenswrapper[4986]: I1203 13:13:43.601369 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-bvs5x" Dec 03 13:13:43 crc kubenswrapper[4986]: I1203 13:13:43.842452 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-npxg7" Dec 03 13:13:44 crc kubenswrapper[4986]: I1203 13:13:44.007948 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdsmm\" (UniqueName: \"kubernetes.io/projected/5b0e7546-4217-41b6-acee-6d45803718dc-kube-api-access-gdsmm\") pod \"5b0e7546-4217-41b6-acee-6d45803718dc\" (UID: \"5b0e7546-4217-41b6-acee-6d45803718dc\") " Dec 03 13:13:44 crc kubenswrapper[4986]: I1203 13:13:44.021034 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b0e7546-4217-41b6-acee-6d45803718dc-kube-api-access-gdsmm" (OuterVolumeSpecName: "kube-api-access-gdsmm") pod "5b0e7546-4217-41b6-acee-6d45803718dc" (UID: "5b0e7546-4217-41b6-acee-6d45803718dc"). InnerVolumeSpecName "kube-api-access-gdsmm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:13:44 crc kubenswrapper[4986]: I1203 13:13:44.110097 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdsmm\" (UniqueName: \"kubernetes.io/projected/5b0e7546-4217-41b6-acee-6d45803718dc-kube-api-access-gdsmm\") on node \"crc\" DevicePath \"\"" Dec 03 13:13:44 crc kubenswrapper[4986]: I1203 13:13:44.489123 4986 generic.go:334] "Generic (PLEG): container finished" podID="5b0e7546-4217-41b6-acee-6d45803718dc" containerID="10e6450836a42ba11b9d9b9a42f0d5d58253ecda41a1cdb1de41c7a5911bb661" exitCode=0 Dec 03 13:13:44 crc kubenswrapper[4986]: I1203 13:13:44.489182 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-npxg7" event={"ID":"5b0e7546-4217-41b6-acee-6d45803718dc","Type":"ContainerDied","Data":"10e6450836a42ba11b9d9b9a42f0d5d58253ecda41a1cdb1de41c7a5911bb661"} Dec 03 13:13:44 crc kubenswrapper[4986]: I1203 13:13:44.489158 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-npxg7" Dec 03 13:13:44 crc kubenswrapper[4986]: I1203 13:13:44.489231 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-npxg7" event={"ID":"5b0e7546-4217-41b6-acee-6d45803718dc","Type":"ContainerDied","Data":"bb5155bb18288c319e6d98c3e2981cdec55ec3c52cc00e7201930ef5394d2a8b"} Dec 03 13:13:44 crc kubenswrapper[4986]: I1203 13:13:44.489260 4986 scope.go:117] "RemoveContainer" containerID="10e6450836a42ba11b9d9b9a42f0d5d58253ecda41a1cdb1de41c7a5911bb661" Dec 03 13:13:44 crc kubenswrapper[4986]: I1203 13:13:44.530981 4986 scope.go:117] "RemoveContainer" containerID="10e6450836a42ba11b9d9b9a42f0d5d58253ecda41a1cdb1de41c7a5911bb661" Dec 03 13:13:44 crc kubenswrapper[4986]: E1203 13:13:44.532782 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10e6450836a42ba11b9d9b9a42f0d5d58253ecda41a1cdb1de41c7a5911bb661\": container with ID starting with 10e6450836a42ba11b9d9b9a42f0d5d58253ecda41a1cdb1de41c7a5911bb661 not found: ID does not exist" containerID="10e6450836a42ba11b9d9b9a42f0d5d58253ecda41a1cdb1de41c7a5911bb661" Dec 03 13:13:44 crc kubenswrapper[4986]: I1203 13:13:44.532826 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10e6450836a42ba11b9d9b9a42f0d5d58253ecda41a1cdb1de41c7a5911bb661"} err="failed to get container status \"10e6450836a42ba11b9d9b9a42f0d5d58253ecda41a1cdb1de41c7a5911bb661\": rpc error: code = NotFound desc = could not find container \"10e6450836a42ba11b9d9b9a42f0d5d58253ecda41a1cdb1de41c7a5911bb661\": container with ID starting with 10e6450836a42ba11b9d9b9a42f0d5d58253ecda41a1cdb1de41c7a5911bb661 not found: ID does not exist" Dec 03 13:13:44 crc kubenswrapper[4986]: I1203 13:13:44.566371 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-npxg7"] 
Dec 03 13:13:44 crc kubenswrapper[4986]: I1203 13:13:44.597645 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-npxg7"] Dec 03 13:13:44 crc kubenswrapper[4986]: I1203 13:13:44.967199 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b0e7546-4217-41b6-acee-6d45803718dc" path="/var/lib/kubelet/pods/5b0e7546-4217-41b6-acee-6d45803718dc/volumes" Dec 03 13:13:52 crc kubenswrapper[4986]: I1203 13:13:52.542987 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-rxgnt" Dec 03 13:13:52 crc kubenswrapper[4986]: I1203 13:13:52.543451 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-rxgnt" Dec 03 13:13:52 crc kubenswrapper[4986]: I1203 13:13:52.582960 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-rxgnt" Dec 03 13:13:52 crc kubenswrapper[4986]: I1203 13:13:52.625264 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-rxgnt" Dec 03 13:13:54 crc kubenswrapper[4986]: I1203 13:13:54.428734 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/824d2a61b6100fd280a9a09bd63ce89af5b90b0d18a4f901e4852c2491zpwrf"] Dec 03 13:13:54 crc kubenswrapper[4986]: E1203 13:13:54.429382 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b0e7546-4217-41b6-acee-6d45803718dc" containerName="registry-server" Dec 03 13:13:54 crc kubenswrapper[4986]: I1203 13:13:54.429400 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b0e7546-4217-41b6-acee-6d45803718dc" containerName="registry-server" Dec 03 13:13:54 crc kubenswrapper[4986]: I1203 13:13:54.429533 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b0e7546-4217-41b6-acee-6d45803718dc" 
containerName="registry-server" Dec 03 13:13:54 crc kubenswrapper[4986]: I1203 13:13:54.430550 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/824d2a61b6100fd280a9a09bd63ce89af5b90b0d18a4f901e4852c2491zpwrf" Dec 03 13:13:54 crc kubenswrapper[4986]: I1203 13:13:54.432970 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-xhnns" Dec 03 13:13:54 crc kubenswrapper[4986]: I1203 13:13:54.437682 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/824d2a61b6100fd280a9a09bd63ce89af5b90b0d18a4f901e4852c2491zpwrf"] Dec 03 13:13:54 crc kubenswrapper[4986]: I1203 13:13:54.471024 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b7926b50-9c30-44b4-ac3f-058edec517b9-bundle\") pod \"824d2a61b6100fd280a9a09bd63ce89af5b90b0d18a4f901e4852c2491zpwrf\" (UID: \"b7926b50-9c30-44b4-ac3f-058edec517b9\") " pod="openstack-operators/824d2a61b6100fd280a9a09bd63ce89af5b90b0d18a4f901e4852c2491zpwrf" Dec 03 13:13:54 crc kubenswrapper[4986]: I1203 13:13:54.471078 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nsds\" (UniqueName: \"kubernetes.io/projected/b7926b50-9c30-44b4-ac3f-058edec517b9-kube-api-access-4nsds\") pod \"824d2a61b6100fd280a9a09bd63ce89af5b90b0d18a4f901e4852c2491zpwrf\" (UID: \"b7926b50-9c30-44b4-ac3f-058edec517b9\") " pod="openstack-operators/824d2a61b6100fd280a9a09bd63ce89af5b90b0d18a4f901e4852c2491zpwrf" Dec 03 13:13:54 crc kubenswrapper[4986]: I1203 13:13:54.471237 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b7926b50-9c30-44b4-ac3f-058edec517b9-util\") pod \"824d2a61b6100fd280a9a09bd63ce89af5b90b0d18a4f901e4852c2491zpwrf\" (UID: \"b7926b50-9c30-44b4-ac3f-058edec517b9\") " 
pod="openstack-operators/824d2a61b6100fd280a9a09bd63ce89af5b90b0d18a4f901e4852c2491zpwrf" Dec 03 13:13:54 crc kubenswrapper[4986]: I1203 13:13:54.572510 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b7926b50-9c30-44b4-ac3f-058edec517b9-bundle\") pod \"824d2a61b6100fd280a9a09bd63ce89af5b90b0d18a4f901e4852c2491zpwrf\" (UID: \"b7926b50-9c30-44b4-ac3f-058edec517b9\") " pod="openstack-operators/824d2a61b6100fd280a9a09bd63ce89af5b90b0d18a4f901e4852c2491zpwrf" Dec 03 13:13:54 crc kubenswrapper[4986]: I1203 13:13:54.572583 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nsds\" (UniqueName: \"kubernetes.io/projected/b7926b50-9c30-44b4-ac3f-058edec517b9-kube-api-access-4nsds\") pod \"824d2a61b6100fd280a9a09bd63ce89af5b90b0d18a4f901e4852c2491zpwrf\" (UID: \"b7926b50-9c30-44b4-ac3f-058edec517b9\") " pod="openstack-operators/824d2a61b6100fd280a9a09bd63ce89af5b90b0d18a4f901e4852c2491zpwrf" Dec 03 13:13:54 crc kubenswrapper[4986]: I1203 13:13:54.572639 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b7926b50-9c30-44b4-ac3f-058edec517b9-util\") pod \"824d2a61b6100fd280a9a09bd63ce89af5b90b0d18a4f901e4852c2491zpwrf\" (UID: \"b7926b50-9c30-44b4-ac3f-058edec517b9\") " pod="openstack-operators/824d2a61b6100fd280a9a09bd63ce89af5b90b0d18a4f901e4852c2491zpwrf" Dec 03 13:13:54 crc kubenswrapper[4986]: I1203 13:13:54.573229 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b7926b50-9c30-44b4-ac3f-058edec517b9-util\") pod \"824d2a61b6100fd280a9a09bd63ce89af5b90b0d18a4f901e4852c2491zpwrf\" (UID: \"b7926b50-9c30-44b4-ac3f-058edec517b9\") " pod="openstack-operators/824d2a61b6100fd280a9a09bd63ce89af5b90b0d18a4f901e4852c2491zpwrf" Dec 03 13:13:54 crc kubenswrapper[4986]: I1203 13:13:54.573223 4986 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b7926b50-9c30-44b4-ac3f-058edec517b9-bundle\") pod \"824d2a61b6100fd280a9a09bd63ce89af5b90b0d18a4f901e4852c2491zpwrf\" (UID: \"b7926b50-9c30-44b4-ac3f-058edec517b9\") " pod="openstack-operators/824d2a61b6100fd280a9a09bd63ce89af5b90b0d18a4f901e4852c2491zpwrf" Dec 03 13:13:54 crc kubenswrapper[4986]: I1203 13:13:54.591030 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nsds\" (UniqueName: \"kubernetes.io/projected/b7926b50-9c30-44b4-ac3f-058edec517b9-kube-api-access-4nsds\") pod \"824d2a61b6100fd280a9a09bd63ce89af5b90b0d18a4f901e4852c2491zpwrf\" (UID: \"b7926b50-9c30-44b4-ac3f-058edec517b9\") " pod="openstack-operators/824d2a61b6100fd280a9a09bd63ce89af5b90b0d18a4f901e4852c2491zpwrf" Dec 03 13:13:54 crc kubenswrapper[4986]: I1203 13:13:54.754700 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/824d2a61b6100fd280a9a09bd63ce89af5b90b0d18a4f901e4852c2491zpwrf" Dec 03 13:13:55 crc kubenswrapper[4986]: I1203 13:13:55.230979 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/824d2a61b6100fd280a9a09bd63ce89af5b90b0d18a4f901e4852c2491zpwrf"] Dec 03 13:13:55 crc kubenswrapper[4986]: I1203 13:13:55.575565 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/824d2a61b6100fd280a9a09bd63ce89af5b90b0d18a4f901e4852c2491zpwrf" event={"ID":"b7926b50-9c30-44b4-ac3f-058edec517b9","Type":"ContainerStarted","Data":"4eaeccf05006a7f929729b4d06b5a708f7e0139cedbc14c8bc7b9c7aa5a3fde8"} Dec 03 13:13:55 crc kubenswrapper[4986]: I1203 13:13:55.575674 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/824d2a61b6100fd280a9a09bd63ce89af5b90b0d18a4f901e4852c2491zpwrf" 
event={"ID":"b7926b50-9c30-44b4-ac3f-058edec517b9","Type":"ContainerStarted","Data":"aef81a989ced7e6ca66259f2573731bd3e0b8ffe796ddad75b41dbae305e59e0"} Dec 03 13:13:56 crc kubenswrapper[4986]: I1203 13:13:56.587592 4986 generic.go:334] "Generic (PLEG): container finished" podID="b7926b50-9c30-44b4-ac3f-058edec517b9" containerID="4eaeccf05006a7f929729b4d06b5a708f7e0139cedbc14c8bc7b9c7aa5a3fde8" exitCode=0 Dec 03 13:13:56 crc kubenswrapper[4986]: I1203 13:13:56.587705 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/824d2a61b6100fd280a9a09bd63ce89af5b90b0d18a4f901e4852c2491zpwrf" event={"ID":"b7926b50-9c30-44b4-ac3f-058edec517b9","Type":"ContainerDied","Data":"4eaeccf05006a7f929729b4d06b5a708f7e0139cedbc14c8bc7b9c7aa5a3fde8"} Dec 03 13:13:58 crc kubenswrapper[4986]: I1203 13:13:58.604314 4986 generic.go:334] "Generic (PLEG): container finished" podID="b7926b50-9c30-44b4-ac3f-058edec517b9" containerID="19d208e09999b3c3c90a48c7ffe50ee919f12a6b3e8bf35b2c7c5dae9229f3d6" exitCode=0 Dec 03 13:13:58 crc kubenswrapper[4986]: I1203 13:13:58.604419 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/824d2a61b6100fd280a9a09bd63ce89af5b90b0d18a4f901e4852c2491zpwrf" event={"ID":"b7926b50-9c30-44b4-ac3f-058edec517b9","Type":"ContainerDied","Data":"19d208e09999b3c3c90a48c7ffe50ee919f12a6b3e8bf35b2c7c5dae9229f3d6"} Dec 03 13:13:59 crc kubenswrapper[4986]: I1203 13:13:59.616492 4986 generic.go:334] "Generic (PLEG): container finished" podID="b7926b50-9c30-44b4-ac3f-058edec517b9" containerID="02a709b978f4b1a88b560301863de76177959391c76d87ddf37b6b6d1486f9ab" exitCode=0 Dec 03 13:13:59 crc kubenswrapper[4986]: I1203 13:13:59.616557 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/824d2a61b6100fd280a9a09bd63ce89af5b90b0d18a4f901e4852c2491zpwrf" event={"ID":"b7926b50-9c30-44b4-ac3f-058edec517b9","Type":"ContainerDied","Data":"02a709b978f4b1a88b560301863de76177959391c76d87ddf37b6b6d1486f9ab"} Dec 03 
13:14:00 crc kubenswrapper[4986]: I1203 13:14:00.961566 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/824d2a61b6100fd280a9a09bd63ce89af5b90b0d18a4f901e4852c2491zpwrf" Dec 03 13:14:00 crc kubenswrapper[4986]: I1203 13:14:00.984183 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nsds\" (UniqueName: \"kubernetes.io/projected/b7926b50-9c30-44b4-ac3f-058edec517b9-kube-api-access-4nsds\") pod \"b7926b50-9c30-44b4-ac3f-058edec517b9\" (UID: \"b7926b50-9c30-44b4-ac3f-058edec517b9\") " Dec 03 13:14:00 crc kubenswrapper[4986]: I1203 13:14:00.984225 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b7926b50-9c30-44b4-ac3f-058edec517b9-util\") pod \"b7926b50-9c30-44b4-ac3f-058edec517b9\" (UID: \"b7926b50-9c30-44b4-ac3f-058edec517b9\") " Dec 03 13:14:00 crc kubenswrapper[4986]: I1203 13:14:00.984322 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b7926b50-9c30-44b4-ac3f-058edec517b9-bundle\") pod \"b7926b50-9c30-44b4-ac3f-058edec517b9\" (UID: \"b7926b50-9c30-44b4-ac3f-058edec517b9\") " Dec 03 13:14:00 crc kubenswrapper[4986]: I1203 13:14:00.985045 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7926b50-9c30-44b4-ac3f-058edec517b9-bundle" (OuterVolumeSpecName: "bundle") pod "b7926b50-9c30-44b4-ac3f-058edec517b9" (UID: "b7926b50-9c30-44b4-ac3f-058edec517b9"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:14:00 crc kubenswrapper[4986]: I1203 13:14:00.986640 4986 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b7926b50-9c30-44b4-ac3f-058edec517b9-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 13:14:00 crc kubenswrapper[4986]: I1203 13:14:00.994557 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7926b50-9c30-44b4-ac3f-058edec517b9-kube-api-access-4nsds" (OuterVolumeSpecName: "kube-api-access-4nsds") pod "b7926b50-9c30-44b4-ac3f-058edec517b9" (UID: "b7926b50-9c30-44b4-ac3f-058edec517b9"). InnerVolumeSpecName "kube-api-access-4nsds". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:14:01 crc kubenswrapper[4986]: I1203 13:14:01.060883 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7926b50-9c30-44b4-ac3f-058edec517b9-util" (OuterVolumeSpecName: "util") pod "b7926b50-9c30-44b4-ac3f-058edec517b9" (UID: "b7926b50-9c30-44b4-ac3f-058edec517b9"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:14:01 crc kubenswrapper[4986]: I1203 13:14:01.087225 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nsds\" (UniqueName: \"kubernetes.io/projected/b7926b50-9c30-44b4-ac3f-058edec517b9-kube-api-access-4nsds\") on node \"crc\" DevicePath \"\"" Dec 03 13:14:01 crc kubenswrapper[4986]: I1203 13:14:01.087268 4986 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b7926b50-9c30-44b4-ac3f-058edec517b9-util\") on node \"crc\" DevicePath \"\"" Dec 03 13:14:01 crc kubenswrapper[4986]: I1203 13:14:01.630743 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/824d2a61b6100fd280a9a09bd63ce89af5b90b0d18a4f901e4852c2491zpwrf" event={"ID":"b7926b50-9c30-44b4-ac3f-058edec517b9","Type":"ContainerDied","Data":"aef81a989ced7e6ca66259f2573731bd3e0b8ffe796ddad75b41dbae305e59e0"} Dec 03 13:14:01 crc kubenswrapper[4986]: I1203 13:14:01.630782 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aef81a989ced7e6ca66259f2573731bd3e0b8ffe796ddad75b41dbae305e59e0" Dec 03 13:14:01 crc kubenswrapper[4986]: I1203 13:14:01.630837 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/824d2a61b6100fd280a9a09bd63ce89af5b90b0d18a4f901e4852c2491zpwrf" Dec 03 13:14:03 crc kubenswrapper[4986]: I1203 13:14:03.492222 4986 patch_prober.go:28] interesting pod/machine-config-daemon-xggpj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 13:14:03 crc kubenswrapper[4986]: I1203 13:14:03.492630 4986 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 13:14:06 crc kubenswrapper[4986]: I1203 13:14:06.644131 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-64576b8bc7-w9775"] Dec 03 13:14:06 crc kubenswrapper[4986]: E1203 13:14:06.644693 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7926b50-9c30-44b4-ac3f-058edec517b9" containerName="pull" Dec 03 13:14:06 crc kubenswrapper[4986]: I1203 13:14:06.644707 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7926b50-9c30-44b4-ac3f-058edec517b9" containerName="pull" Dec 03 13:14:06 crc kubenswrapper[4986]: E1203 13:14:06.644718 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7926b50-9c30-44b4-ac3f-058edec517b9" containerName="extract" Dec 03 13:14:06 crc kubenswrapper[4986]: I1203 13:14:06.644725 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7926b50-9c30-44b4-ac3f-058edec517b9" containerName="extract" Dec 03 13:14:06 crc kubenswrapper[4986]: E1203 13:14:06.644747 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7926b50-9c30-44b4-ac3f-058edec517b9" 
containerName="util" Dec 03 13:14:06 crc kubenswrapper[4986]: I1203 13:14:06.644754 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7926b50-9c30-44b4-ac3f-058edec517b9" containerName="util" Dec 03 13:14:06 crc kubenswrapper[4986]: I1203 13:14:06.644877 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7926b50-9c30-44b4-ac3f-058edec517b9" containerName="extract" Dec 03 13:14:06 crc kubenswrapper[4986]: I1203 13:14:06.645393 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-64576b8bc7-w9775" Dec 03 13:14:06 crc kubenswrapper[4986]: I1203 13:14:06.647623 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-2c6rp" Dec 03 13:14:06 crc kubenswrapper[4986]: I1203 13:14:06.663020 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7mzc\" (UniqueName: \"kubernetes.io/projected/bbe2108e-e5e6-4482-91b9-148932254640-kube-api-access-k7mzc\") pod \"openstack-operator-controller-operator-64576b8bc7-w9775\" (UID: \"bbe2108e-e5e6-4482-91b9-148932254640\") " pod="openstack-operators/openstack-operator-controller-operator-64576b8bc7-w9775" Dec 03 13:14:06 crc kubenswrapper[4986]: I1203 13:14:06.734810 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-64576b8bc7-w9775"] Dec 03 13:14:06 crc kubenswrapper[4986]: I1203 13:14:06.764798 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7mzc\" (UniqueName: \"kubernetes.io/projected/bbe2108e-e5e6-4482-91b9-148932254640-kube-api-access-k7mzc\") pod \"openstack-operator-controller-operator-64576b8bc7-w9775\" (UID: \"bbe2108e-e5e6-4482-91b9-148932254640\") " pod="openstack-operators/openstack-operator-controller-operator-64576b8bc7-w9775" Dec 03 13:14:06 
crc kubenswrapper[4986]: I1203 13:14:06.807431 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7mzc\" (UniqueName: \"kubernetes.io/projected/bbe2108e-e5e6-4482-91b9-148932254640-kube-api-access-k7mzc\") pod \"openstack-operator-controller-operator-64576b8bc7-w9775\" (UID: \"bbe2108e-e5e6-4482-91b9-148932254640\") " pod="openstack-operators/openstack-operator-controller-operator-64576b8bc7-w9775" Dec 03 13:14:06 crc kubenswrapper[4986]: I1203 13:14:06.963408 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-64576b8bc7-w9775" Dec 03 13:14:07 crc kubenswrapper[4986]: I1203 13:14:07.425886 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-64576b8bc7-w9775"] Dec 03 13:14:07 crc kubenswrapper[4986]: W1203 13:14:07.439810 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbbe2108e_e5e6_4482_91b9_148932254640.slice/crio-30d45acebffa6f08502beba5693e52db6e665dc303d997a091812467b4d800e8 WatchSource:0}: Error finding container 30d45acebffa6f08502beba5693e52db6e665dc303d997a091812467b4d800e8: Status 404 returned error can't find the container with id 30d45acebffa6f08502beba5693e52db6e665dc303d997a091812467b4d800e8 Dec 03 13:14:07 crc kubenswrapper[4986]: I1203 13:14:07.672685 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-64576b8bc7-w9775" event={"ID":"bbe2108e-e5e6-4482-91b9-148932254640","Type":"ContainerStarted","Data":"30d45acebffa6f08502beba5693e52db6e665dc303d997a091812467b4d800e8"} Dec 03 13:14:12 crc kubenswrapper[4986]: I1203 13:14:12.725035 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-64576b8bc7-w9775" 
event={"ID":"bbe2108e-e5e6-4482-91b9-148932254640","Type":"ContainerStarted","Data":"0d1bf58ed5e0f0558952264540225f971114ae9ac4ccf3819f5a8a394e4edc89"} Dec 03 13:14:12 crc kubenswrapper[4986]: I1203 13:14:12.725741 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-64576b8bc7-w9775" Dec 03 13:14:12 crc kubenswrapper[4986]: I1203 13:14:12.762849 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-64576b8bc7-w9775" podStartSLOduration=2.556160959 podStartE2EDuration="6.762824834s" podCreationTimestamp="2025-12-03 13:14:06 +0000 UTC" firstStartedPulling="2025-12-03 13:14:07.442427236 +0000 UTC m=+1106.908858427" lastFinishedPulling="2025-12-03 13:14:11.649091101 +0000 UTC m=+1111.115522302" observedRunningTime="2025-12-03 13:14:12.756889951 +0000 UTC m=+1112.223321202" watchObservedRunningTime="2025-12-03 13:14:12.762824834 +0000 UTC m=+1112.229256035" Dec 03 13:14:16 crc kubenswrapper[4986]: I1203 13:14:16.965731 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-64576b8bc7-w9775" Dec 03 13:14:33 crc kubenswrapper[4986]: I1203 13:14:33.490949 4986 patch_prober.go:28] interesting pod/machine-config-daemon-xggpj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 13:14:33 crc kubenswrapper[4986]: I1203 13:14:33.491517 4986 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 
03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.546272 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-8z7gr"] Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.547828 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-8z7gr" Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.551254 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-svz96" Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.558297 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-l48wq"] Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.559616 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-l48wq" Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.561004 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-8z7gr"] Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.562084 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-gjjx4" Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.568886 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-fw7nw"] Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.570186 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-fw7nw" Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.572388 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-xxzbl" Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.574403 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-l48wq"] Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.580619 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987cd8cd-5vc5p"] Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.581526 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-5vc5p" Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.583024 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-hvzdp" Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.586816 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-fw7nw"] Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.592482 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987cd8cd-5vc5p"] Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.618281 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-nnhsz"] Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.619418 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-nnhsz" Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.625927 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-96d5k" Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.635482 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-h49tf"] Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.636668 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-h49tf" Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.640759 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-rl6g8" Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.642685 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-nnhsz"] Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.668355 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-cgjf2"] Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.669378 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-cgjf2" Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.672440 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-vrwbj" Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.672606 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.683420 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldf46\" (UniqueName: \"kubernetes.io/projected/f2be2f63-f6d7-425a-8ce1-d2bc205e24f0-kube-api-access-ldf46\") pod \"horizon-operator-controller-manager-68c6d99b8f-h49tf\" (UID: \"f2be2f63-f6d7-425a-8ce1-d2bc205e24f0\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-h49tf" Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.683483 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjjbn\" (UniqueName: \"kubernetes.io/projected/8af63121-8727-4c23-b872-554fe679fc2f-kube-api-access-rjjbn\") pod \"glance-operator-controller-manager-77987cd8cd-5vc5p\" (UID: \"8af63121-8727-4c23-b872-554fe679fc2f\") " pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-5vc5p" Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.683512 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b47ead63-1562-466f-887b-54c155983ebf-cert\") pod \"infra-operator-controller-manager-57548d458d-cgjf2\" (UID: \"b47ead63-1562-466f-887b-54c155983ebf\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-cgjf2" Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.683555 4986 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sn4r\" (UniqueName: \"kubernetes.io/projected/f3328b2b-d4e4-4b39-a949-bfd1463596f0-kube-api-access-7sn4r\") pod \"cinder-operator-controller-manager-859b6ccc6-l48wq\" (UID: \"f3328b2b-d4e4-4b39-a949-bfd1463596f0\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-l48wq" Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.683575 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj8jl\" (UniqueName: \"kubernetes.io/projected/924573df-b6fe-4d17-add4-376f76084fab-kube-api-access-kj8jl\") pod \"designate-operator-controller-manager-78b4bc895b-fw7nw\" (UID: \"924573df-b6fe-4d17-add4-376f76084fab\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-fw7nw" Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.683603 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7msw\" (UniqueName: \"kubernetes.io/projected/69fed752-e65d-4007-a731-3faee6335366-kube-api-access-p7msw\") pod \"barbican-operator-controller-manager-7d9dfd778-8z7gr\" (UID: \"69fed752-e65d-4007-a731-3faee6335366\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-8z7gr" Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.683640 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82xw2\" (UniqueName: \"kubernetes.io/projected/f6268841-12af-4fa7-a9ab-54927e3256cf-kube-api-access-82xw2\") pod \"heat-operator-controller-manager-5f64f6f8bb-nnhsz\" (UID: \"f6268841-12af-4fa7-a9ab-54927e3256cf\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-nnhsz" Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.683664 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-bm2nb\" (UniqueName: \"kubernetes.io/projected/b47ead63-1562-466f-887b-54c155983ebf-kube-api-access-bm2nb\") pod \"infra-operator-controller-manager-57548d458d-cgjf2\" (UID: \"b47ead63-1562-466f-887b-54c155983ebf\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-cgjf2" Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.692966 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-h49tf"] Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.707436 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-cgjf2"] Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.725354 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-jgxqz"] Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.726279 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-jgxqz" Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.728488 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-nqxmn" Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.734180 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-jgxqz"] Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.745276 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-bm5lk"] Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.746225 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-bm5lk" Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.753210 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-gqpvx" Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.770055 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-bm5lk"] Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.776601 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-h5sqr"] Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.777890 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-h5sqr" Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.786286 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-p29pm" Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.786675 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldf46\" (UniqueName: \"kubernetes.io/projected/f2be2f63-f6d7-425a-8ce1-d2bc205e24f0-kube-api-access-ldf46\") pod \"horizon-operator-controller-manager-68c6d99b8f-h49tf\" (UID: \"f2be2f63-f6d7-425a-8ce1-d2bc205e24f0\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-h49tf" Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.786712 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjjbn\" (UniqueName: \"kubernetes.io/projected/8af63121-8727-4c23-b872-554fe679fc2f-kube-api-access-rjjbn\") pod \"glance-operator-controller-manager-77987cd8cd-5vc5p\" (UID: \"8af63121-8727-4c23-b872-554fe679fc2f\") " 
pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-5vc5p" Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.786748 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b47ead63-1562-466f-887b-54c155983ebf-cert\") pod \"infra-operator-controller-manager-57548d458d-cgjf2\" (UID: \"b47ead63-1562-466f-887b-54c155983ebf\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-cgjf2" Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.786776 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sn4r\" (UniqueName: \"kubernetes.io/projected/f3328b2b-d4e4-4b39-a949-bfd1463596f0-kube-api-access-7sn4r\") pod \"cinder-operator-controller-manager-859b6ccc6-l48wq\" (UID: \"f3328b2b-d4e4-4b39-a949-bfd1463596f0\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-l48wq" Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.786796 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kj8jl\" (UniqueName: \"kubernetes.io/projected/924573df-b6fe-4d17-add4-376f76084fab-kube-api-access-kj8jl\") pod \"designate-operator-controller-manager-78b4bc895b-fw7nw\" (UID: \"924573df-b6fe-4d17-add4-376f76084fab\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-fw7nw" Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.786817 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7msw\" (UniqueName: \"kubernetes.io/projected/69fed752-e65d-4007-a731-3faee6335366-kube-api-access-p7msw\") pod \"barbican-operator-controller-manager-7d9dfd778-8z7gr\" (UID: \"69fed752-e65d-4007-a731-3faee6335366\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-8z7gr" Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.786841 4986 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-82xw2\" (UniqueName: \"kubernetes.io/projected/f6268841-12af-4fa7-a9ab-54927e3256cf-kube-api-access-82xw2\") pod \"heat-operator-controller-manager-5f64f6f8bb-nnhsz\" (UID: \"f6268841-12af-4fa7-a9ab-54927e3256cf\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-nnhsz" Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.786865 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bm2nb\" (UniqueName: \"kubernetes.io/projected/b47ead63-1562-466f-887b-54c155983ebf-kube-api-access-bm2nb\") pod \"infra-operator-controller-manager-57548d458d-cgjf2\" (UID: \"b47ead63-1562-466f-887b-54c155983ebf\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-cgjf2" Dec 03 13:14:36 crc kubenswrapper[4986]: E1203 13:14:36.787397 4986 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 03 13:14:36 crc kubenswrapper[4986]: E1203 13:14:36.787439 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b47ead63-1562-466f-887b-54c155983ebf-cert podName:b47ead63-1562-466f-887b-54c155983ebf nodeName:}" failed. No retries permitted until 2025-12-03 13:14:37.287423852 +0000 UTC m=+1136.753855043 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b47ead63-1562-466f-887b-54c155983ebf-cert") pod "infra-operator-controller-manager-57548d458d-cgjf2" (UID: "b47ead63-1562-466f-887b-54c155983ebf") : secret "infra-operator-webhook-server-cert" not found Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.788782 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-bkf5w"] Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.790205 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-bkf5w" Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.792997 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-j2x6w" Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.803828 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-h5sqr"] Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.813145 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sn4r\" (UniqueName: \"kubernetes.io/projected/f3328b2b-d4e4-4b39-a949-bfd1463596f0-kube-api-access-7sn4r\") pod \"cinder-operator-controller-manager-859b6ccc6-l48wq\" (UID: \"f3328b2b-d4e4-4b39-a949-bfd1463596f0\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-l48wq" Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.814251 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-bkf5w"] Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.815962 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82xw2\" (UniqueName: \"kubernetes.io/projected/f6268841-12af-4fa7-a9ab-54927e3256cf-kube-api-access-82xw2\") pod \"heat-operator-controller-manager-5f64f6f8bb-nnhsz\" (UID: \"f6268841-12af-4fa7-a9ab-54927e3256cf\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-nnhsz" Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.817067 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjjbn\" (UniqueName: \"kubernetes.io/projected/8af63121-8727-4c23-b872-554fe679fc2f-kube-api-access-rjjbn\") pod \"glance-operator-controller-manager-77987cd8cd-5vc5p\" (UID: \"8af63121-8727-4c23-b872-554fe679fc2f\") " 
pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-5vc5p" Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.817209 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm2nb\" (UniqueName: \"kubernetes.io/projected/b47ead63-1562-466f-887b-54c155983ebf-kube-api-access-bm2nb\") pod \"infra-operator-controller-manager-57548d458d-cgjf2\" (UID: \"b47ead63-1562-466f-887b-54c155983ebf\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-cgjf2" Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.817706 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldf46\" (UniqueName: \"kubernetes.io/projected/f2be2f63-f6d7-425a-8ce1-d2bc205e24f0-kube-api-access-ldf46\") pod \"horizon-operator-controller-manager-68c6d99b8f-h49tf\" (UID: \"f2be2f63-f6d7-425a-8ce1-d2bc205e24f0\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-h49tf" Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.824610 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7msw\" (UniqueName: \"kubernetes.io/projected/69fed752-e65d-4007-a731-3faee6335366-kube-api-access-p7msw\") pod \"barbican-operator-controller-manager-7d9dfd778-8z7gr\" (UID: \"69fed752-e65d-4007-a731-3faee6335366\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-8z7gr" Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.826325 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kj8jl\" (UniqueName: \"kubernetes.io/projected/924573df-b6fe-4d17-add4-376f76084fab-kube-api-access-kj8jl\") pod \"designate-operator-controller-manager-78b4bc895b-fw7nw\" (UID: \"924573df-b6fe-4d17-add4-376f76084fab\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-fw7nw" Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.830936 4986 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-blk7z"] Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.831854 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-blk7z" Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.833533 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-bw9ww" Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.838931 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-blk7z"] Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.922103 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-8z7gr" Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.922528 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-fw7nw" Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.922824 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbbwg\" (UniqueName: \"kubernetes.io/projected/29ac6999-88ff-472f-a03e-0b95f1042d38-kube-api-access-sbbwg\") pod \"keystone-operator-controller-manager-7765d96ddf-bm5lk\" (UID: \"29ac6999-88ff-472f-a03e-0b95f1042d38\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-bm5lk" Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.923208 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-l48wq" Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.923800 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkzbn\" (UniqueName: \"kubernetes.io/projected/ac6af48b-36c2-427c-93ad-090cc34434f7-kube-api-access-rkzbn\") pod \"ironic-operator-controller-manager-6c548fd776-jgxqz\" (UID: \"ac6af48b-36c2-427c-93ad-090cc34434f7\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-jgxqz" Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.923860 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n844t\" (UniqueName: \"kubernetes.io/projected/f01db271-4787-4af7-b37b-5ba6e4e2e5b7-kube-api-access-n844t\") pod \"manila-operator-controller-manager-7c79b5df47-h5sqr\" (UID: \"f01db271-4787-4af7-b37b-5ba6e4e2e5b7\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-h5sqr" Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.930892 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-xcs9s"] Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.931437 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-5vc5p" Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.932711 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-xcs9s" Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.939265 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-29rn8" Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.939848 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-7t7xl"] Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.955757 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-nnhsz" Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.957908 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-7t7xl" Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.963344 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-94h8f" Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.970262 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-7t7xl"] Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.970323 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-xcs9s"] Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.973782 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-h49tf" Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.988098 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-lvpbh"] Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.989654 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lvpbh" Dec 03 13:14:36 crc kubenswrapper[4986]: I1203 13:14:36.991586 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-jjdcn" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.007541 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-lvpbh"] Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.035645 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cqrx\" (UniqueName: \"kubernetes.io/projected/3d88c3a6-1643-4fff-acbe-2327b9878103-kube-api-access-4cqrx\") pod \"mariadb-operator-controller-manager-56bbcc9d85-bkf5w\" (UID: \"3d88c3a6-1643-4fff-acbe-2327b9878103\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-bkf5w" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.036000 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v98c\" (UniqueName: \"kubernetes.io/projected/1ecc0034-a740-410d-a135-6b65d34ce64d-kube-api-access-4v98c\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-blk7z\" (UID: \"1ecc0034-a740-410d-a135-6b65d34ce64d\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-blk7z" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.036365 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-rkzbn\" (UniqueName: \"kubernetes.io/projected/ac6af48b-36c2-427c-93ad-090cc34434f7-kube-api-access-rkzbn\") pod \"ironic-operator-controller-manager-6c548fd776-jgxqz\" (UID: \"ac6af48b-36c2-427c-93ad-090cc34434f7\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-jgxqz" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.036411 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n844t\" (UniqueName: \"kubernetes.io/projected/f01db271-4787-4af7-b37b-5ba6e4e2e5b7-kube-api-access-n844t\") pod \"manila-operator-controller-manager-7c79b5df47-h5sqr\" (UID: \"f01db271-4787-4af7-b37b-5ba6e4e2e5b7\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-h5sqr" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.036532 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbbwg\" (UniqueName: \"kubernetes.io/projected/29ac6999-88ff-472f-a03e-0b95f1042d38-kube-api-access-sbbwg\") pod \"keystone-operator-controller-manager-7765d96ddf-bm5lk\" (UID: \"29ac6999-88ff-472f-a03e-0b95f1042d38\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-bm5lk" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.047750 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-c6wk7"] Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.048931 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-c6wk7" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.052976 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-2qgs8" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.059997 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n844t\" (UniqueName: \"kubernetes.io/projected/f01db271-4787-4af7-b37b-5ba6e4e2e5b7-kube-api-access-n844t\") pod \"manila-operator-controller-manager-7c79b5df47-h5sqr\" (UID: \"f01db271-4787-4af7-b37b-5ba6e4e2e5b7\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-h5sqr" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.062258 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkzbn\" (UniqueName: \"kubernetes.io/projected/ac6af48b-36c2-427c-93ad-090cc34434f7-kube-api-access-rkzbn\") pod \"ironic-operator-controller-manager-6c548fd776-jgxqz\" (UID: \"ac6af48b-36c2-427c-93ad-090cc34434f7\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-jgxqz" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.067958 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fw9mm"] Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.061709 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbbwg\" (UniqueName: \"kubernetes.io/projected/29ac6999-88ff-472f-a03e-0b95f1042d38-kube-api-access-sbbwg\") pod \"keystone-operator-controller-manager-7765d96ddf-bm5lk\" (UID: \"29ac6999-88ff-472f-a03e-0b95f1042d38\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-bm5lk" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.069056 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fw9mm" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.072213 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-zrvs2" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.072230 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.077737 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-bm5lk" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.080466 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-j4bg2"] Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.081711 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-j4bg2" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.108157 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-h5sqr" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.112337 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-c6wk7"] Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.117045 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-nx4xx" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.124871 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-j4bg2"] Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.137976 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvzgl\" (UniqueName: \"kubernetes.io/projected/be530205-b10b-4d4b-9fa3-4d9d0548054c-kube-api-access-dvzgl\") pod \"octavia-operator-controller-manager-998648c74-xcs9s\" (UID: \"be530205-b10b-4d4b-9fa3-4d9d0548054c\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-xcs9s" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.138033 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdhhl\" (UniqueName: \"kubernetes.io/projected/42fe5051-78e1-45ab-9766-dbd119c4e060-kube-api-access-pdhhl\") pod \"ovn-operator-controller-manager-b6456fdb6-lvpbh\" (UID: \"42fe5051-78e1-45ab-9766-dbd119c4e060\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lvpbh" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.138087 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swstz\" (UniqueName: \"kubernetes.io/projected/d601bb24-2bd9-478a-96d1-ed2001bd53b6-kube-api-access-swstz\") pod 
\"nova-operator-controller-manager-697bc559fc-7t7xl\" (UID: \"d601bb24-2bd9-478a-96d1-ed2001bd53b6\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-7t7xl" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.138124 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cqrx\" (UniqueName: \"kubernetes.io/projected/3d88c3a6-1643-4fff-acbe-2327b9878103-kube-api-access-4cqrx\") pod \"mariadb-operator-controller-manager-56bbcc9d85-bkf5w\" (UID: \"3d88c3a6-1643-4fff-acbe-2327b9878103\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-bkf5w" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.138159 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4v98c\" (UniqueName: \"kubernetes.io/projected/1ecc0034-a740-410d-a135-6b65d34ce64d-kube-api-access-4v98c\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-blk7z\" (UID: \"1ecc0034-a740-410d-a135-6b65d34ce64d\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-blk7z" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.152124 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fw9mm"] Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.163094 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4v98c\" (UniqueName: \"kubernetes.io/projected/1ecc0034-a740-410d-a135-6b65d34ce64d-kube-api-access-4v98c\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-blk7z\" (UID: \"1ecc0034-a740-410d-a135-6b65d34ce64d\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-blk7z" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.163452 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cqrx\" (UniqueName: 
\"kubernetes.io/projected/3d88c3a6-1643-4fff-acbe-2327b9878103-kube-api-access-4cqrx\") pod \"mariadb-operator-controller-manager-56bbcc9d85-bkf5w\" (UID: \"3d88c3a6-1643-4fff-acbe-2327b9878103\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-bkf5w" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.172391 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-726wj"] Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.173625 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-726wj" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.175876 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-f7m6m" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.183605 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-bkf5w" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.189866 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-726wj"] Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.212137 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-bg4vv"] Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.214309 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-bg4vv" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.217375 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-bg4vv"] Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.219872 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-mvb92" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.238905 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/400b4a35-c3f1-409e-83fe-019ff145c65a-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4fw9mm\" (UID: \"400b4a35-c3f1-409e-83fe-019ff145c65a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fw9mm" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.238944 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czdjt\" (UniqueName: \"kubernetes.io/projected/c6bc4b54-a3be-4a2c-813e-fa19eea0dbe8-kube-api-access-czdjt\") pod \"placement-operator-controller-manager-78f8948974-j4bg2\" (UID: \"c6bc4b54-a3be-4a2c-813e-fa19eea0dbe8\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-j4bg2" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.238964 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n78gl\" (UniqueName: \"kubernetes.io/projected/a5090dbe-8e6f-4865-92a3-28720422db9f-kube-api-access-n78gl\") pod \"swift-operator-controller-manager-5f8c65bbfc-c6wk7\" (UID: \"a5090dbe-8e6f-4865-92a3-28720422db9f\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-c6wk7" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.238996 4986 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swstz\" (UniqueName: \"kubernetes.io/projected/d601bb24-2bd9-478a-96d1-ed2001bd53b6-kube-api-access-swstz\") pod \"nova-operator-controller-manager-697bc559fc-7t7xl\" (UID: \"d601bb24-2bd9-478a-96d1-ed2001bd53b6\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-7t7xl" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.239056 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7jql\" (UniqueName: \"kubernetes.io/projected/400b4a35-c3f1-409e-83fe-019ff145c65a-kube-api-access-g7jql\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4fw9mm\" (UID: \"400b4a35-c3f1-409e-83fe-019ff145c65a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fw9mm" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.239094 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvzgl\" (UniqueName: \"kubernetes.io/projected/be530205-b10b-4d4b-9fa3-4d9d0548054c-kube-api-access-dvzgl\") pod \"octavia-operator-controller-manager-998648c74-xcs9s\" (UID: \"be530205-b10b-4d4b-9fa3-4d9d0548054c\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-xcs9s" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.239121 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdhhl\" (UniqueName: \"kubernetes.io/projected/42fe5051-78e1-45ab-9766-dbd119c4e060-kube-api-access-pdhhl\") pod \"ovn-operator-controller-manager-b6456fdb6-lvpbh\" (UID: \"42fe5051-78e1-45ab-9766-dbd119c4e060\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lvpbh" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.266604 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-blk7z" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.272576 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swstz\" (UniqueName: \"kubernetes.io/projected/d601bb24-2bd9-478a-96d1-ed2001bd53b6-kube-api-access-swstz\") pod \"nova-operator-controller-manager-697bc559fc-7t7xl\" (UID: \"d601bb24-2bd9-478a-96d1-ed2001bd53b6\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-7t7xl" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.283547 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdhhl\" (UniqueName: \"kubernetes.io/projected/42fe5051-78e1-45ab-9766-dbd119c4e060-kube-api-access-pdhhl\") pod \"ovn-operator-controller-manager-b6456fdb6-lvpbh\" (UID: \"42fe5051-78e1-45ab-9766-dbd119c4e060\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lvpbh" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.287354 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvzgl\" (UniqueName: \"kubernetes.io/projected/be530205-b10b-4d4b-9fa3-4d9d0548054c-kube-api-access-dvzgl\") pod \"octavia-operator-controller-manager-998648c74-xcs9s\" (UID: \"be530205-b10b-4d4b-9fa3-4d9d0548054c\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-xcs9s" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.341355 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7jql\" (UniqueName: \"kubernetes.io/projected/400b4a35-c3f1-409e-83fe-019ff145c65a-kube-api-access-g7jql\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4fw9mm\" (UID: \"400b4a35-c3f1-409e-83fe-019ff145c65a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fw9mm" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.341695 4986 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwkns\" (UniqueName: \"kubernetes.io/projected/a7413edd-cf2a-4756-b6b7-afe4e4e42fe6-kube-api-access-vwkns\") pod \"telemetry-operator-controller-manager-76cc84c6bb-726wj\" (UID: \"a7413edd-cf2a-4756-b6b7-afe4e4e42fe6\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-726wj" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.341736 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/400b4a35-c3f1-409e-83fe-019ff145c65a-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4fw9mm\" (UID: \"400b4a35-c3f1-409e-83fe-019ff145c65a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fw9mm" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.341754 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czdjt\" (UniqueName: \"kubernetes.io/projected/c6bc4b54-a3be-4a2c-813e-fa19eea0dbe8-kube-api-access-czdjt\") pod \"placement-operator-controller-manager-78f8948974-j4bg2\" (UID: \"c6bc4b54-a3be-4a2c-813e-fa19eea0dbe8\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-j4bg2" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.341770 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n78gl\" (UniqueName: \"kubernetes.io/projected/a5090dbe-8e6f-4865-92a3-28720422db9f-kube-api-access-n78gl\") pod \"swift-operator-controller-manager-5f8c65bbfc-c6wk7\" (UID: \"a5090dbe-8e6f-4865-92a3-28720422db9f\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-c6wk7" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.341791 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/b47ead63-1562-466f-887b-54c155983ebf-cert\") pod \"infra-operator-controller-manager-57548d458d-cgjf2\" (UID: \"b47ead63-1562-466f-887b-54c155983ebf\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-cgjf2" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.341829 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjblj\" (UniqueName: \"kubernetes.io/projected/276259ca-95c1-41c2-803f-b82904067552-kube-api-access-jjblj\") pod \"test-operator-controller-manager-5854674fcc-bg4vv\" (UID: \"276259ca-95c1-41c2-803f-b82904067552\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-bg4vv" Dec 03 13:14:37 crc kubenswrapper[4986]: E1203 13:14:37.342177 4986 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 03 13:14:37 crc kubenswrapper[4986]: E1203 13:14:37.342213 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/400b4a35-c3f1-409e-83fe-019ff145c65a-cert podName:400b4a35-c3f1-409e-83fe-019ff145c65a nodeName:}" failed. No retries permitted until 2025-12-03 13:14:37.84220019 +0000 UTC m=+1137.308631371 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/400b4a35-c3f1-409e-83fe-019ff145c65a-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4fw9mm" (UID: "400b4a35-c3f1-409e-83fe-019ff145c65a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.348108 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-ph2qf"] Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.349093 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-ph2qf" Dec 03 13:14:37 crc kubenswrapper[4986]: E1203 13:14:37.349610 4986 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 03 13:14:37 crc kubenswrapper[4986]: E1203 13:14:37.349658 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b47ead63-1562-466f-887b-54c155983ebf-cert podName:b47ead63-1562-466f-887b-54c155983ebf nodeName:}" failed. No retries permitted until 2025-12-03 13:14:38.349643594 +0000 UTC m=+1137.816074785 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b47ead63-1562-466f-887b-54c155983ebf-cert") pod "infra-operator-controller-manager-57548d458d-cgjf2" (UID: "b47ead63-1562-466f-887b-54c155983ebf") : secret "infra-operator-webhook-server-cert" not found Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.352783 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-jgxqz" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.353229 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-xcs9s" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.356827 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-bzqsg" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.378426 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lvpbh" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.378885 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-ph2qf"] Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.379414 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czdjt\" (UniqueName: \"kubernetes.io/projected/c6bc4b54-a3be-4a2c-813e-fa19eea0dbe8-kube-api-access-czdjt\") pod \"placement-operator-controller-manager-78f8948974-j4bg2\" (UID: \"c6bc4b54-a3be-4a2c-813e-fa19eea0dbe8\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-j4bg2" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.381257 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-7t7xl" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.385306 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n78gl\" (UniqueName: \"kubernetes.io/projected/a5090dbe-8e6f-4865-92a3-28720422db9f-kube-api-access-n78gl\") pod \"swift-operator-controller-manager-5f8c65bbfc-c6wk7\" (UID: \"a5090dbe-8e6f-4865-92a3-28720422db9f\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-c6wk7" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.405477 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7jql\" (UniqueName: \"kubernetes.io/projected/400b4a35-c3f1-409e-83fe-019ff145c65a-kube-api-access-g7jql\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4fw9mm\" (UID: \"400b4a35-c3f1-409e-83fe-019ff145c65a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fw9mm" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.416603 4986 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-j4bg2" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.442245 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwkns\" (UniqueName: \"kubernetes.io/projected/a7413edd-cf2a-4756-b6b7-afe4e4e42fe6-kube-api-access-vwkns\") pod \"telemetry-operator-controller-manager-76cc84c6bb-726wj\" (UID: \"a7413edd-cf2a-4756-b6b7-afe4e4e42fe6\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-726wj" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.442349 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjblj\" (UniqueName: \"kubernetes.io/projected/276259ca-95c1-41c2-803f-b82904067552-kube-api-access-jjblj\") pod \"test-operator-controller-manager-5854674fcc-bg4vv\" (UID: \"276259ca-95c1-41c2-803f-b82904067552\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-bg4vv" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.442376 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7lfd\" (UniqueName: \"kubernetes.io/projected/2d441496-72d9-462b-aea6-e2588499fbf0-kube-api-access-z7lfd\") pod \"watcher-operator-controller-manager-769dc69bc-ph2qf\" (UID: \"2d441496-72d9-462b-aea6-e2588499fbf0\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-ph2qf" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.483229 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwkns\" (UniqueName: \"kubernetes.io/projected/a7413edd-cf2a-4756-b6b7-afe4e4e42fe6-kube-api-access-vwkns\") pod \"telemetry-operator-controller-manager-76cc84c6bb-726wj\" (UID: \"a7413edd-cf2a-4756-b6b7-afe4e4e42fe6\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-726wj" Dec 03 13:14:37 crc 
kubenswrapper[4986]: I1203 13:14:37.483849 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjblj\" (UniqueName: \"kubernetes.io/projected/276259ca-95c1-41c2-803f-b82904067552-kube-api-access-jjblj\") pod \"test-operator-controller-manager-5854674fcc-bg4vv\" (UID: \"276259ca-95c1-41c2-803f-b82904067552\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-bg4vv" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.547829 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7lfd\" (UniqueName: \"kubernetes.io/projected/2d441496-72d9-462b-aea6-e2588499fbf0-kube-api-access-z7lfd\") pod \"watcher-operator-controller-manager-769dc69bc-ph2qf\" (UID: \"2d441496-72d9-462b-aea6-e2588499fbf0\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-ph2qf" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.559723 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-726wj" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.568124 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5cd9cc65cb-tjrw7"] Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.570710 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5cd9cc65cb-tjrw7" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.574709 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.574735 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.575448 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-qvht6" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.589149 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-bg4vv" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.612196 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5cd9cc65cb-tjrw7"] Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.616333 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7lfd\" (UniqueName: \"kubernetes.io/projected/2d441496-72d9-462b-aea6-e2588499fbf0-kube-api-access-z7lfd\") pod \"watcher-operator-controller-manager-769dc69bc-ph2qf\" (UID: \"2d441496-72d9-462b-aea6-e2588499fbf0\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-ph2qf" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.619404 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9ghmd"] Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.620771 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9ghmd" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.623275 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-bq2kx" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.631211 4986 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.638423 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9ghmd"] Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.669496 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-fw7nw"] Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.682308 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-c6wk7" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.744039 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-ph2qf" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.755300 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cd923320-06e2-4933-bc26-a4c947ab732b-webhook-certs\") pod \"openstack-operator-controller-manager-5cd9cc65cb-tjrw7\" (UID: \"cd923320-06e2-4933-bc26-a4c947ab732b\") " pod="openstack-operators/openstack-operator-controller-manager-5cd9cc65cb-tjrw7" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.755365 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tndzx\" (UniqueName: \"kubernetes.io/projected/cd923320-06e2-4933-bc26-a4c947ab732b-kube-api-access-tndzx\") pod \"openstack-operator-controller-manager-5cd9cc65cb-tjrw7\" (UID: \"cd923320-06e2-4933-bc26-a4c947ab732b\") " pod="openstack-operators/openstack-operator-controller-manager-5cd9cc65cb-tjrw7" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.755423 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzwdt\" (UniqueName: \"kubernetes.io/projected/b9df4fcc-97aa-4a32-acbf-25f42addf8cc-kube-api-access-zzwdt\") pod \"rabbitmq-cluster-operator-manager-668c99d594-9ghmd\" (UID: \"b9df4fcc-97aa-4a32-acbf-25f42addf8cc\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9ghmd" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.755493 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cd923320-06e2-4933-bc26-a4c947ab732b-metrics-certs\") pod \"openstack-operator-controller-manager-5cd9cc65cb-tjrw7\" (UID: \"cd923320-06e2-4933-bc26-a4c947ab732b\") " pod="openstack-operators/openstack-operator-controller-manager-5cd9cc65cb-tjrw7" Dec 03 
13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.859980 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzwdt\" (UniqueName: \"kubernetes.io/projected/b9df4fcc-97aa-4a32-acbf-25f42addf8cc-kube-api-access-zzwdt\") pod \"rabbitmq-cluster-operator-manager-668c99d594-9ghmd\" (UID: \"b9df4fcc-97aa-4a32-acbf-25f42addf8cc\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9ghmd" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.860051 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cd923320-06e2-4933-bc26-a4c947ab732b-metrics-certs\") pod \"openstack-operator-controller-manager-5cd9cc65cb-tjrw7\" (UID: \"cd923320-06e2-4933-bc26-a4c947ab732b\") " pod="openstack-operators/openstack-operator-controller-manager-5cd9cc65cb-tjrw7" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.860142 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cd923320-06e2-4933-bc26-a4c947ab732b-webhook-certs\") pod \"openstack-operator-controller-manager-5cd9cc65cb-tjrw7\" (UID: \"cd923320-06e2-4933-bc26-a4c947ab732b\") " pod="openstack-operators/openstack-operator-controller-manager-5cd9cc65cb-tjrw7" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.860167 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tndzx\" (UniqueName: \"kubernetes.io/projected/cd923320-06e2-4933-bc26-a4c947ab732b-kube-api-access-tndzx\") pod \"openstack-operator-controller-manager-5cd9cc65cb-tjrw7\" (UID: \"cd923320-06e2-4933-bc26-a4c947ab732b\") " pod="openstack-operators/openstack-operator-controller-manager-5cd9cc65cb-tjrw7" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.860200 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/400b4a35-c3f1-409e-83fe-019ff145c65a-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4fw9mm\" (UID: \"400b4a35-c3f1-409e-83fe-019ff145c65a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fw9mm" Dec 03 13:14:37 crc kubenswrapper[4986]: E1203 13:14:37.860242 4986 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 03 13:14:37 crc kubenswrapper[4986]: E1203 13:14:37.860323 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cd923320-06e2-4933-bc26-a4c947ab732b-metrics-certs podName:cd923320-06e2-4933-bc26-a4c947ab732b nodeName:}" failed. No retries permitted until 2025-12-03 13:14:38.360304837 +0000 UTC m=+1137.826736028 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cd923320-06e2-4933-bc26-a4c947ab732b-metrics-certs") pod "openstack-operator-controller-manager-5cd9cc65cb-tjrw7" (UID: "cd923320-06e2-4933-bc26-a4c947ab732b") : secret "metrics-server-cert" not found Dec 03 13:14:37 crc kubenswrapper[4986]: E1203 13:14:37.860388 4986 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 03 13:14:37 crc kubenswrapper[4986]: E1203 13:14:37.860445 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/400b4a35-c3f1-409e-83fe-019ff145c65a-cert podName:400b4a35-c3f1-409e-83fe-019ff145c65a nodeName:}" failed. No retries permitted until 2025-12-03 13:14:38.860425642 +0000 UTC m=+1138.326856883 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/400b4a35-c3f1-409e-83fe-019ff145c65a-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4fw9mm" (UID: "400b4a35-c3f1-409e-83fe-019ff145c65a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 03 13:14:37 crc kubenswrapper[4986]: E1203 13:14:37.860478 4986 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 03 13:14:37 crc kubenswrapper[4986]: E1203 13:14:37.860528 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cd923320-06e2-4933-bc26-a4c947ab732b-webhook-certs podName:cd923320-06e2-4933-bc26-a4c947ab732b nodeName:}" failed. No retries permitted until 2025-12-03 13:14:38.360512084 +0000 UTC m=+1137.826943275 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/cd923320-06e2-4933-bc26-a4c947ab732b-webhook-certs") pod "openstack-operator-controller-manager-5cd9cc65cb-tjrw7" (UID: "cd923320-06e2-4933-bc26-a4c947ab732b") : secret "webhook-server-cert" not found Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.893726 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzwdt\" (UniqueName: \"kubernetes.io/projected/b9df4fcc-97aa-4a32-acbf-25f42addf8cc-kube-api-access-zzwdt\") pod \"rabbitmq-cluster-operator-manager-668c99d594-9ghmd\" (UID: \"b9df4fcc-97aa-4a32-acbf-25f42addf8cc\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9ghmd" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.894939 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tndzx\" (UniqueName: \"kubernetes.io/projected/cd923320-06e2-4933-bc26-a4c947ab732b-kube-api-access-tndzx\") pod \"openstack-operator-controller-manager-5cd9cc65cb-tjrw7\" (UID: \"cd923320-06e2-4933-bc26-a4c947ab732b\") " 
pod="openstack-operators/openstack-operator-controller-manager-5cd9cc65cb-tjrw7" Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.920797 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-fw7nw" event={"ID":"924573df-b6fe-4d17-add4-376f76084fab","Type":"ContainerStarted","Data":"6a36cb905e93dedba5c00cc5a46008d28f25ace447b7c40385bd36ea376eaace"} Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.923234 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-l48wq"] Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.938985 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-8z7gr"] Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.964873 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987cd8cd-5vc5p"] Dec 03 13:14:37 crc kubenswrapper[4986]: I1203 13:14:37.967209 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9ghmd" Dec 03 13:14:38 crc kubenswrapper[4986]: W1203 13:14:38.018028 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8af63121_8727_4c23_b872_554fe679fc2f.slice/crio-6a3881502e993769b22b9e332b48f4ecd0a7aaf9bc4b80d3bf0ff10dbef30564 WatchSource:0}: Error finding container 6a3881502e993769b22b9e332b48f4ecd0a7aaf9bc4b80d3bf0ff10dbef30564: Status 404 returned error can't find the container with id 6a3881502e993769b22b9e332b48f4ecd0a7aaf9bc4b80d3bf0ff10dbef30564 Dec 03 13:14:38 crc kubenswrapper[4986]: I1203 13:14:38.047088 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-bm5lk"] Dec 03 13:14:38 crc kubenswrapper[4986]: I1203 13:14:38.065069 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-nnhsz"] Dec 03 13:14:38 crc kubenswrapper[4986]: I1203 13:14:38.073280 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-h5sqr"] Dec 03 13:14:38 crc kubenswrapper[4986]: W1203 13:14:38.073521 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29ac6999_88ff_472f_a03e_0b95f1042d38.slice/crio-1363af7f3b11b715933232d5d745858e52b2dba1854ff740827fd7928361b4c9 WatchSource:0}: Error finding container 1363af7f3b11b715933232d5d745858e52b2dba1854ff740827fd7928361b4c9: Status 404 returned error can't find the container with id 1363af7f3b11b715933232d5d745858e52b2dba1854ff740827fd7928361b4c9 Dec 03 13:14:38 crc kubenswrapper[4986]: W1203 13:14:38.076697 4986 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6268841_12af_4fa7_a9ab_54927e3256cf.slice/crio-51a5179b2ff5801e0b03eba32a53474b2068b0cc20ddefef522e7e37ec64be05 WatchSource:0}: Error finding container 51a5179b2ff5801e0b03eba32a53474b2068b0cc20ddefef522e7e37ec64be05: Status 404 returned error can't find the container with id 51a5179b2ff5801e0b03eba32a53474b2068b0cc20ddefef522e7e37ec64be05 Dec 03 13:14:38 crc kubenswrapper[4986]: W1203 13:14:38.082924 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf01db271_4787_4af7_b37b_5ba6e4e2e5b7.slice/crio-3d8cd6fe9f0ca338dc36dc93c0c08f4617fb899b42341b8acda5c534c787b7ef WatchSource:0}: Error finding container 3d8cd6fe9f0ca338dc36dc93c0c08f4617fb899b42341b8acda5c534c787b7ef: Status 404 returned error can't find the container with id 3d8cd6fe9f0ca338dc36dc93c0c08f4617fb899b42341b8acda5c534c787b7ef Dec 03 13:14:38 crc kubenswrapper[4986]: W1203 13:14:38.241639 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2be2f63_f6d7_425a_8ce1_d2bc205e24f0.slice/crio-50908cde29e65ea39652357de449190a7c0302284e6cd07a03f8c9f721df79a9 WatchSource:0}: Error finding container 50908cde29e65ea39652357de449190a7c0302284e6cd07a03f8c9f721df79a9: Status 404 returned error can't find the container with id 50908cde29e65ea39652357de449190a7c0302284e6cd07a03f8c9f721df79a9 Dec 03 13:14:38 crc kubenswrapper[4986]: I1203 13:14:38.242240 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-h49tf"] Dec 03 13:14:38 crc kubenswrapper[4986]: I1203 13:14:38.258398 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-bkf5w"] Dec 03 13:14:38 crc kubenswrapper[4986]: I1203 13:14:38.263136 4986 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-blk7z"] Dec 03 13:14:38 crc kubenswrapper[4986]: I1203 13:14:38.367399 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b47ead63-1562-466f-887b-54c155983ebf-cert\") pod \"infra-operator-controller-manager-57548d458d-cgjf2\" (UID: \"b47ead63-1562-466f-887b-54c155983ebf\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-cgjf2" Dec 03 13:14:38 crc kubenswrapper[4986]: I1203 13:14:38.367477 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cd923320-06e2-4933-bc26-a4c947ab732b-metrics-certs\") pod \"openstack-operator-controller-manager-5cd9cc65cb-tjrw7\" (UID: \"cd923320-06e2-4933-bc26-a4c947ab732b\") " pod="openstack-operators/openstack-operator-controller-manager-5cd9cc65cb-tjrw7" Dec 03 13:14:38 crc kubenswrapper[4986]: I1203 13:14:38.367536 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cd923320-06e2-4933-bc26-a4c947ab732b-webhook-certs\") pod \"openstack-operator-controller-manager-5cd9cc65cb-tjrw7\" (UID: \"cd923320-06e2-4933-bc26-a4c947ab732b\") " pod="openstack-operators/openstack-operator-controller-manager-5cd9cc65cb-tjrw7" Dec 03 13:14:38 crc kubenswrapper[4986]: E1203 13:14:38.367598 4986 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 03 13:14:38 crc kubenswrapper[4986]: E1203 13:14:38.367663 4986 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 03 13:14:38 crc kubenswrapper[4986]: E1203 13:14:38.367693 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b47ead63-1562-466f-887b-54c155983ebf-cert 
podName:b47ead63-1562-466f-887b-54c155983ebf nodeName:}" failed. No retries permitted until 2025-12-03 13:14:40.367666212 +0000 UTC m=+1139.834097463 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b47ead63-1562-466f-887b-54c155983ebf-cert") pod "infra-operator-controller-manager-57548d458d-cgjf2" (UID: "b47ead63-1562-466f-887b-54c155983ebf") : secret "infra-operator-webhook-server-cert" not found Dec 03 13:14:38 crc kubenswrapper[4986]: E1203 13:14:38.367719 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cd923320-06e2-4933-bc26-a4c947ab732b-webhook-certs podName:cd923320-06e2-4933-bc26-a4c947ab732b nodeName:}" failed. No retries permitted until 2025-12-03 13:14:39.367708853 +0000 UTC m=+1138.834140154 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/cd923320-06e2-4933-bc26-a4c947ab732b-webhook-certs") pod "openstack-operator-controller-manager-5cd9cc65cb-tjrw7" (UID: "cd923320-06e2-4933-bc26-a4c947ab732b") : secret "webhook-server-cert" not found Dec 03 13:14:38 crc kubenswrapper[4986]: E1203 13:14:38.367717 4986 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 03 13:14:38 crc kubenswrapper[4986]: E1203 13:14:38.367799 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cd923320-06e2-4933-bc26-a4c947ab732b-metrics-certs podName:cd923320-06e2-4933-bc26-a4c947ab732b nodeName:}" failed. No retries permitted until 2025-12-03 13:14:39.367779875 +0000 UTC m=+1138.834211056 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cd923320-06e2-4933-bc26-a4c947ab732b-metrics-certs") pod "openstack-operator-controller-manager-5cd9cc65cb-tjrw7" (UID: "cd923320-06e2-4933-bc26-a4c947ab732b") : secret "metrics-server-cert" not found Dec 03 13:14:38 crc kubenswrapper[4986]: I1203 13:14:38.404433 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-bg4vv"] Dec 03 13:14:38 crc kubenswrapper[4986]: W1203 13:14:38.406213 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42fe5051_78e1_45ab_9766_dbd119c4e060.slice/crio-9022cc46ca46c6f54ff84fb3a036b1392cb80ff59943ffc4c23356263352ea18 WatchSource:0}: Error finding container 9022cc46ca46c6f54ff84fb3a036b1392cb80ff59943ffc4c23356263352ea18: Status 404 returned error can't find the container with id 9022cc46ca46c6f54ff84fb3a036b1392cb80ff59943ffc4c23356263352ea18 Dec 03 13:14:38 crc kubenswrapper[4986]: W1203 13:14:38.407501 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod276259ca_95c1_41c2_803f_b82904067552.slice/crio-dc1f88706a258162b638ba6ef343fb431d026e8b56649c24705f6e7291084513 WatchSource:0}: Error finding container dc1f88706a258162b638ba6ef343fb431d026e8b56649c24705f6e7291084513: Status 404 returned error can't find the container with id dc1f88706a258162b638ba6ef343fb431d026e8b56649c24705f6e7291084513 Dec 03 13:14:38 crc kubenswrapper[4986]: I1203 13:14:38.410939 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-lvpbh"] Dec 03 13:14:38 crc kubenswrapper[4986]: W1203 13:14:38.412585 4986 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7413edd_cf2a_4756_b6b7_afe4e4e42fe6.slice/crio-414fd07a2b4247cfa711b5f654a2f25087411bcb7664cd4dd8dbd8792e6f71fb WatchSource:0}: Error finding container 414fd07a2b4247cfa711b5f654a2f25087411bcb7664cd4dd8dbd8792e6f71fb: Status 404 returned error can't find the container with id 414fd07a2b4247cfa711b5f654a2f25087411bcb7664cd4dd8dbd8792e6f71fb Dec 03 13:14:38 crc kubenswrapper[4986]: I1203 13:14:38.416153 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-726wj"] Dec 03 13:14:38 crc kubenswrapper[4986]: E1203 13:14:38.416982 4986 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vwkns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76cc84c6bb-726wj_openstack-operators(a7413edd-cf2a-4756-b6b7-afe4e4e42fe6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 03 13:14:38 crc kubenswrapper[4986]: E1203 13:14:38.419211 4986 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vwkns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76cc84c6bb-726wj_openstack-operators(a7413edd-cf2a-4756-b6b7-afe4e4e42fe6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 03 13:14:38 crc kubenswrapper[4986]: E1203 13:14:38.420951 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-726wj" podUID="a7413edd-cf2a-4756-b6b7-afe4e4e42fe6" Dec 03 13:14:38 crc kubenswrapper[4986]: I1203 13:14:38.611062 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-7t7xl"] Dec 03 13:14:38 crc kubenswrapper[4986]: W1203 13:14:38.616168 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6bc4b54_a3be_4a2c_813e_fa19eea0dbe8.slice/crio-cb9782e73dc6d88e416a318efd93b9cce8a67864394e4582b4b807258df88b69 
WatchSource:0}: Error finding container cb9782e73dc6d88e416a318efd93b9cce8a67864394e4582b4b807258df88b69: Status 404 returned error can't find the container with id cb9782e73dc6d88e416a318efd93b9cce8a67864394e4582b4b807258df88b69 Dec 03 13:14:38 crc kubenswrapper[4986]: I1203 13:14:38.617484 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-xcs9s"] Dec 03 13:14:38 crc kubenswrapper[4986]: I1203 13:14:38.627557 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-jgxqz"] Dec 03 13:14:38 crc kubenswrapper[4986]: E1203 13:14:38.628540 4986 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-swstz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-7t7xl_openstack-operators(d601bb24-2bd9-478a-96d1-ed2001bd53b6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 03 13:14:38 crc kubenswrapper[4986]: E1203 13:14:38.632770 4986 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-swstz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-7t7xl_openstack-operators(d601bb24-2bd9-478a-96d1-ed2001bd53b6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 03 13:14:38 crc kubenswrapper[4986]: E1203 13:14:38.633440 4986 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:0f523b7e2fa9e86fef986acf07d0c42d5658c475d565f11eaea926ebffcb6530,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rkzbn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-6c548fd776-jgxqz_openstack-operators(ac6af48b-36c2-427c-93ad-090cc34434f7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 03 13:14:38 crc kubenswrapper[4986]: E1203 13:14:38.633490 4986 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dvzgl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-xcs9s_openstack-operators(be530205-b10b-4d4b-9fa3-4d9d0548054c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 03 13:14:38 crc kubenswrapper[4986]: E1203 13:14:38.633693 4986 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n78gl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-5f8c65bbfc-c6wk7_openstack-operators(a5090dbe-8e6f-4865-92a3-28720422db9f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 03 13:14:38 crc kubenswrapper[4986]: E1203 13:14:38.634203 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" 
pod="openstack-operators/nova-operator-controller-manager-697bc559fc-7t7xl" podUID="d601bb24-2bd9-478a-96d1-ed2001bd53b6" Dec 03 13:14:38 crc kubenswrapper[4986]: I1203 13:14:38.634762 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-j4bg2"] Dec 03 13:14:38 crc kubenswrapper[4986]: E1203 13:14:38.636220 4986 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n78gl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-5f8c65bbfc-c6wk7_openstack-operators(a5090dbe-8e6f-4865-92a3-28720422db9f): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 03 13:14:38 crc kubenswrapper[4986]: E1203 13:14:38.636224 4986 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rkzbn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-6c548fd776-jgxqz_openstack-operators(ac6af48b-36c2-427c-93ad-090cc34434f7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 03 13:14:38 crc kubenswrapper[4986]: E1203 13:14:38.636379 4986 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dvzgl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-xcs9s_openstack-operators(be530205-b10b-4d4b-9fa3-4d9d0548054c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 03 13:14:38 crc kubenswrapper[4986]: E1203 13:14:38.637356 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-jgxqz" 
podUID="ac6af48b-36c2-427c-93ad-090cc34434f7" Dec 03 13:14:38 crc kubenswrapper[4986]: E1203 13:14:38.637396 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-c6wk7" podUID="a5090dbe-8e6f-4865-92a3-28720422db9f" Dec 03 13:14:38 crc kubenswrapper[4986]: E1203 13:14:38.637425 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/octavia-operator-controller-manager-998648c74-xcs9s" podUID="be530205-b10b-4d4b-9fa3-4d9d0548054c" Dec 03 13:14:38 crc kubenswrapper[4986]: I1203 13:14:38.648093 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-c6wk7"] Dec 03 13:14:38 crc kubenswrapper[4986]: W1203 13:14:38.648450 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d441496_72d9_462b_aea6_e2588499fbf0.slice/crio-4010f3b3f9f01359db8b332c32d6fa3428fbc0f9f065e735d0b11405043fb4de WatchSource:0}: Error finding container 4010f3b3f9f01359db8b332c32d6fa3428fbc0f9f065e735d0b11405043fb4de: Status 404 returned error can't find the container with id 4010f3b3f9f01359db8b332c32d6fa3428fbc0f9f065e735d0b11405043fb4de Dec 03 13:14:38 crc kubenswrapper[4986]: E1203 13:14:38.651474 4986 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z7lfd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-769dc69bc-ph2qf_openstack-operators(2d441496-72d9-462b-aea6-e2588499fbf0): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 03 13:14:38 crc kubenswrapper[4986]: E1203 13:14:38.654025 4986 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z7lfd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-769dc69bc-ph2qf_openstack-operators(2d441496-72d9-462b-aea6-e2588499fbf0): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 03 13:14:38 crc kubenswrapper[4986]: I1203 13:14:38.654857 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-ph2qf"] Dec 03 13:14:38 crc kubenswrapper[4986]: E1203 13:14:38.655340 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-ph2qf" podUID="2d441496-72d9-462b-aea6-e2588499fbf0" Dec 03 13:14:38 crc kubenswrapper[4986]: I1203 13:14:38.661516 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9ghmd"] Dec 03 13:14:38 crc kubenswrapper[4986]: W1203 13:14:38.666921 4986 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9df4fcc_97aa_4a32_acbf_25f42addf8cc.slice/crio-e8ad164442eb6df7b393e8ace3be3d14ba28b8f152c1bdecc5fc834c9f730bde WatchSource:0}: Error finding container e8ad164442eb6df7b393e8ace3be3d14ba28b8f152c1bdecc5fc834c9f730bde: Status 404 returned error can't find the container with id e8ad164442eb6df7b393e8ace3be3d14ba28b8f152c1bdecc5fc834c9f730bde Dec 03 13:14:38 crc kubenswrapper[4986]: I1203 13:14:38.873339 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/400b4a35-c3f1-409e-83fe-019ff145c65a-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4fw9mm\" (UID: \"400b4a35-c3f1-409e-83fe-019ff145c65a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fw9mm" Dec 03 13:14:38 crc kubenswrapper[4986]: E1203 13:14:38.873508 4986 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 03 13:14:38 crc kubenswrapper[4986]: E1203 13:14:38.873587 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/400b4a35-c3f1-409e-83fe-019ff145c65a-cert podName:400b4a35-c3f1-409e-83fe-019ff145c65a nodeName:}" failed. No retries permitted until 2025-12-03 13:14:40.873567675 +0000 UTC m=+1140.339998866 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/400b4a35-c3f1-409e-83fe-019ff145c65a-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4fw9mm" (UID: "400b4a35-c3f1-409e-83fe-019ff145c65a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 03 13:14:38 crc kubenswrapper[4986]: I1203 13:14:38.933517 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-5vc5p" event={"ID":"8af63121-8727-4c23-b872-554fe679fc2f","Type":"ContainerStarted","Data":"6a3881502e993769b22b9e332b48f4ecd0a7aaf9bc4b80d3bf0ff10dbef30564"} Dec 03 13:14:38 crc kubenswrapper[4986]: I1203 13:14:38.934647 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9ghmd" event={"ID":"b9df4fcc-97aa-4a32-acbf-25f42addf8cc","Type":"ContainerStarted","Data":"e8ad164442eb6df7b393e8ace3be3d14ba28b8f152c1bdecc5fc834c9f730bde"} Dec 03 13:14:38 crc kubenswrapper[4986]: I1203 13:14:38.935573 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-bg4vv" event={"ID":"276259ca-95c1-41c2-803f-b82904067552","Type":"ContainerStarted","Data":"dc1f88706a258162b638ba6ef343fb431d026e8b56649c24705f6e7291084513"} Dec 03 13:14:38 crc kubenswrapper[4986]: I1203 13:14:38.937694 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-jgxqz" event={"ID":"ac6af48b-36c2-427c-93ad-090cc34434f7","Type":"ContainerStarted","Data":"c0f9d91cba6574fade98e7973531b0279a68640a2715c548f95f794777f2dbe9"} Dec 03 13:14:38 crc kubenswrapper[4986]: E1203 13:14:38.942195 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:0f523b7e2fa9e86fef986acf07d0c42d5658c475d565f11eaea926ebffcb6530\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-jgxqz" podUID="ac6af48b-36c2-427c-93ad-090cc34434f7" Dec 03 13:14:38 crc kubenswrapper[4986]: I1203 13:14:38.942714 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-bm5lk" event={"ID":"29ac6999-88ff-472f-a03e-0b95f1042d38","Type":"ContainerStarted","Data":"1363af7f3b11b715933232d5d745858e52b2dba1854ff740827fd7928361b4c9"} Dec 03 13:14:38 crc kubenswrapper[4986]: E1203 13:14:38.947139 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/octavia-operator-controller-manager-998648c74-xcs9s" podUID="be530205-b10b-4d4b-9fa3-4d9d0548054c" Dec 03 13:14:38 crc kubenswrapper[4986]: I1203 13:14:38.966365 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-xcs9s" event={"ID":"be530205-b10b-4d4b-9fa3-4d9d0548054c","Type":"ContainerStarted","Data":"faabcadd044c6185be623c3a20c01dfe65b76f0a4a5e44f2e9e78f161275f26a"} Dec 03 13:14:38 crc kubenswrapper[4986]: I1203 13:14:38.966423 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-l48wq" 
event={"ID":"f3328b2b-d4e4-4b39-a949-bfd1463596f0","Type":"ContainerStarted","Data":"1911c1cc4f4363a5baed445ead6be50be487b00697b61fe08717f2ccb3da55a6"} Dec 03 13:14:38 crc kubenswrapper[4986]: I1203 13:14:38.966439 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-blk7z" event={"ID":"1ecc0034-a740-410d-a135-6b65d34ce64d","Type":"ContainerStarted","Data":"3a8812e10e5601aed5d783747ba313142194d977414e1f26d0d0d5d45e2095b3"} Dec 03 13:14:38 crc kubenswrapper[4986]: I1203 13:14:38.966455 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-ph2qf" event={"ID":"2d441496-72d9-462b-aea6-e2588499fbf0","Type":"ContainerStarted","Data":"4010f3b3f9f01359db8b332c32d6fa3428fbc0f9f065e735d0b11405043fb4de"} Dec 03 13:14:38 crc kubenswrapper[4986]: E1203 13:14:38.968606 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-ph2qf" podUID="2d441496-72d9-462b-aea6-e2588499fbf0" Dec 03 13:14:38 crc kubenswrapper[4986]: I1203 13:14:38.983479 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-nnhsz" event={"ID":"f6268841-12af-4fa7-a9ab-54927e3256cf","Type":"ContainerStarted","Data":"51a5179b2ff5801e0b03eba32a53474b2068b0cc20ddefef522e7e37ec64be05"} Dec 03 13:14:38 crc kubenswrapper[4986]: I1203 13:14:38.992327 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-c6wk7" event={"ID":"a5090dbe-8e6f-4865-92a3-28720422db9f","Type":"ContainerStarted","Data":"37ebdbb1d8ce8523f1ad76aa1a2eba28d264b4c5a81d75f20c0762cd60cb28bb"} Dec 03 13:14:38 crc kubenswrapper[4986]: I1203 13:14:38.994497 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lvpbh" event={"ID":"42fe5051-78e1-45ab-9766-dbd119c4e060","Type":"ContainerStarted","Data":"9022cc46ca46c6f54ff84fb3a036b1392cb80ff59943ffc4c23356263352ea18"} Dec 03 13:14:38 crc kubenswrapper[4986]: I1203 13:14:38.995515 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-h49tf" event={"ID":"f2be2f63-f6d7-425a-8ce1-d2bc205e24f0","Type":"ContainerStarted","Data":"50908cde29e65ea39652357de449190a7c0302284e6cd07a03f8c9f721df79a9"} Dec 03 13:14:39 crc kubenswrapper[4986]: E1203 13:14:39.005543 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-c6wk7" podUID="a5090dbe-8e6f-4865-92a3-28720422db9f" Dec 03 13:14:39 crc kubenswrapper[4986]: I1203 13:14:39.006401 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-726wj" event={"ID":"a7413edd-cf2a-4756-b6b7-afe4e4e42fe6","Type":"ContainerStarted","Data":"414fd07a2b4247cfa711b5f654a2f25087411bcb7664cd4dd8dbd8792e6f71fb"} Dec 03 13:14:39 crc kubenswrapper[4986]: I1203 13:14:39.010438 4986 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-8z7gr" event={"ID":"69fed752-e65d-4007-a731-3faee6335366","Type":"ContainerStarted","Data":"a7309e7e6aa8dc3ae10b79d8257ec01a097c979f1880cfe3dc8a1cb5b9174aa3"} Dec 03 13:14:39 crc kubenswrapper[4986]: E1203 13:14:39.010451 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-726wj" podUID="a7413edd-cf2a-4756-b6b7-afe4e4e42fe6" Dec 03 13:14:39 crc kubenswrapper[4986]: I1203 13:14:39.013593 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-7t7xl" event={"ID":"d601bb24-2bd9-478a-96d1-ed2001bd53b6","Type":"ContainerStarted","Data":"15b1ecadfae63ec27eada8186c1bd829a0cbb0582bd9e64208d85942d385b4ac"} Dec 03 13:14:39 crc kubenswrapper[4986]: I1203 13:14:39.015793 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-j4bg2" event={"ID":"c6bc4b54-a3be-4a2c-813e-fa19eea0dbe8","Type":"ContainerStarted","Data":"cb9782e73dc6d88e416a318efd93b9cce8a67864394e4582b4b807258df88b69"} Dec 03 13:14:39 crc kubenswrapper[4986]: E1203 13:14:39.015859 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670\\\"\", failed to \"StartContainer\" for 
\"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-7t7xl" podUID="d601bb24-2bd9-478a-96d1-ed2001bd53b6" Dec 03 13:14:39 crc kubenswrapper[4986]: I1203 13:14:39.020140 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-bkf5w" event={"ID":"3d88c3a6-1643-4fff-acbe-2327b9878103","Type":"ContainerStarted","Data":"468d067935e9f03e62693fc534e557e4912b80f42fb168ca86310ee70409d5d1"} Dec 03 13:14:39 crc kubenswrapper[4986]: I1203 13:14:39.023357 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-h5sqr" event={"ID":"f01db271-4787-4af7-b37b-5ba6e4e2e5b7","Type":"ContainerStarted","Data":"3d8cd6fe9f0ca338dc36dc93c0c08f4617fb899b42341b8acda5c534c787b7ef"} Dec 03 13:14:39 crc kubenswrapper[4986]: I1203 13:14:39.380965 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cd923320-06e2-4933-bc26-a4c947ab732b-webhook-certs\") pod \"openstack-operator-controller-manager-5cd9cc65cb-tjrw7\" (UID: \"cd923320-06e2-4933-bc26-a4c947ab732b\") " pod="openstack-operators/openstack-operator-controller-manager-5cd9cc65cb-tjrw7" Dec 03 13:14:39 crc kubenswrapper[4986]: I1203 13:14:39.381093 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cd923320-06e2-4933-bc26-a4c947ab732b-metrics-certs\") pod \"openstack-operator-controller-manager-5cd9cc65cb-tjrw7\" (UID: \"cd923320-06e2-4933-bc26-a4c947ab732b\") " pod="openstack-operators/openstack-operator-controller-manager-5cd9cc65cb-tjrw7" Dec 03 13:14:39 crc kubenswrapper[4986]: E1203 13:14:39.381189 4986 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret 
"webhook-server-cert" not found Dec 03 13:14:39 crc kubenswrapper[4986]: E1203 13:14:39.381228 4986 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 03 13:14:39 crc kubenswrapper[4986]: E1203 13:14:39.381301 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cd923320-06e2-4933-bc26-a4c947ab732b-webhook-certs podName:cd923320-06e2-4933-bc26-a4c947ab732b nodeName:}" failed. No retries permitted until 2025-12-03 13:14:41.381260898 +0000 UTC m=+1140.847692159 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/cd923320-06e2-4933-bc26-a4c947ab732b-webhook-certs") pod "openstack-operator-controller-manager-5cd9cc65cb-tjrw7" (UID: "cd923320-06e2-4933-bc26-a4c947ab732b") : secret "webhook-server-cert" not found Dec 03 13:14:39 crc kubenswrapper[4986]: E1203 13:14:39.381325 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cd923320-06e2-4933-bc26-a4c947ab732b-metrics-certs podName:cd923320-06e2-4933-bc26-a4c947ab732b nodeName:}" failed. No retries permitted until 2025-12-03 13:14:41.381317299 +0000 UTC m=+1140.847748570 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cd923320-06e2-4933-bc26-a4c947ab732b-metrics-certs") pod "openstack-operator-controller-manager-5cd9cc65cb-tjrw7" (UID: "cd923320-06e2-4933-bc26-a4c947ab732b") : secret "metrics-server-cert" not found Dec 03 13:14:40 crc kubenswrapper[4986]: E1203 13:14:40.033438 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-7t7xl" podUID="d601bb24-2bd9-478a-96d1-ed2001bd53b6" Dec 03 13:14:40 crc kubenswrapper[4986]: E1203 13:14:40.033557 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:0f523b7e2fa9e86fef986acf07d0c42d5658c475d565f11eaea926ebffcb6530\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-jgxqz" podUID="ac6af48b-36c2-427c-93ad-090cc34434f7" Dec 03 13:14:40 crc kubenswrapper[4986]: E1203 13:14:40.033710 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-c6wk7" podUID="a5090dbe-8e6f-4865-92a3-28720422db9f" Dec 03 13:14:40 crc kubenswrapper[4986]: E1203 13:14:40.034354 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-ph2qf" podUID="2d441496-72d9-462b-aea6-e2588499fbf0" Dec 03 13:14:40 crc kubenswrapper[4986]: E1203 13:14:40.034390 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-726wj" podUID="a7413edd-cf2a-4756-b6b7-afe4e4e42fe6" Dec 03 13:14:40 crc kubenswrapper[4986]: E1203 13:14:40.037195 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/octavia-operator-controller-manager-998648c74-xcs9s" podUID="be530205-b10b-4d4b-9fa3-4d9d0548054c" Dec 03 13:14:40 crc kubenswrapper[4986]: I1203 13:14:40.402250 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b47ead63-1562-466f-887b-54c155983ebf-cert\") pod \"infra-operator-controller-manager-57548d458d-cgjf2\" (UID: \"b47ead63-1562-466f-887b-54c155983ebf\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-cgjf2" Dec 03 13:14:40 crc kubenswrapper[4986]: E1203 13:14:40.402425 4986 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 03 13:14:40 crc kubenswrapper[4986]: E1203 13:14:40.402495 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b47ead63-1562-466f-887b-54c155983ebf-cert podName:b47ead63-1562-466f-887b-54c155983ebf nodeName:}" failed. No retries permitted until 2025-12-03 13:14:44.402477712 +0000 UTC m=+1143.868908893 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b47ead63-1562-466f-887b-54c155983ebf-cert") pod "infra-operator-controller-manager-57548d458d-cgjf2" (UID: "b47ead63-1562-466f-887b-54c155983ebf") : secret "infra-operator-webhook-server-cert" not found Dec 03 13:14:40 crc kubenswrapper[4986]: I1203 13:14:40.910176 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/400b4a35-c3f1-409e-83fe-019ff145c65a-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4fw9mm\" (UID: \"400b4a35-c3f1-409e-83fe-019ff145c65a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fw9mm" Dec 03 13:14:40 crc kubenswrapper[4986]: E1203 13:14:40.910402 4986 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 03 13:14:40 crc kubenswrapper[4986]: E1203 13:14:40.910460 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/400b4a35-c3f1-409e-83fe-019ff145c65a-cert podName:400b4a35-c3f1-409e-83fe-019ff145c65a nodeName:}" failed. No retries permitted until 2025-12-03 13:14:44.910444122 +0000 UTC m=+1144.376875313 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/400b4a35-c3f1-409e-83fe-019ff145c65a-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4fw9mm" (UID: "400b4a35-c3f1-409e-83fe-019ff145c65a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 03 13:14:41 crc kubenswrapper[4986]: I1203 13:14:41.418329 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cd923320-06e2-4933-bc26-a4c947ab732b-metrics-certs\") pod \"openstack-operator-controller-manager-5cd9cc65cb-tjrw7\" (UID: \"cd923320-06e2-4933-bc26-a4c947ab732b\") " pod="openstack-operators/openstack-operator-controller-manager-5cd9cc65cb-tjrw7" Dec 03 13:14:41 crc kubenswrapper[4986]: I1203 13:14:41.418673 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cd923320-06e2-4933-bc26-a4c947ab732b-webhook-certs\") pod \"openstack-operator-controller-manager-5cd9cc65cb-tjrw7\" (UID: \"cd923320-06e2-4933-bc26-a4c947ab732b\") " pod="openstack-operators/openstack-operator-controller-manager-5cd9cc65cb-tjrw7" Dec 03 13:14:41 crc kubenswrapper[4986]: E1203 13:14:41.418554 4986 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 03 13:14:41 crc kubenswrapper[4986]: E1203 13:14:41.418855 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cd923320-06e2-4933-bc26-a4c947ab732b-metrics-certs podName:cd923320-06e2-4933-bc26-a4c947ab732b nodeName:}" failed. No retries permitted until 2025-12-03 13:14:45.418839774 +0000 UTC m=+1144.885270965 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cd923320-06e2-4933-bc26-a4c947ab732b-metrics-certs") pod "openstack-operator-controller-manager-5cd9cc65cb-tjrw7" (UID: "cd923320-06e2-4933-bc26-a4c947ab732b") : secret "metrics-server-cert" not found Dec 03 13:14:41 crc kubenswrapper[4986]: E1203 13:14:41.418807 4986 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 03 13:14:41 crc kubenswrapper[4986]: E1203 13:14:41.419207 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cd923320-06e2-4933-bc26-a4c947ab732b-webhook-certs podName:cd923320-06e2-4933-bc26-a4c947ab732b nodeName:}" failed. No retries permitted until 2025-12-03 13:14:45.419197994 +0000 UTC m=+1144.885629185 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/cd923320-06e2-4933-bc26-a4c947ab732b-webhook-certs") pod "openstack-operator-controller-manager-5cd9cc65cb-tjrw7" (UID: "cd923320-06e2-4933-bc26-a4c947ab732b") : secret "webhook-server-cert" not found Dec 03 13:14:44 crc kubenswrapper[4986]: I1203 13:14:44.463036 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b47ead63-1562-466f-887b-54c155983ebf-cert\") pod \"infra-operator-controller-manager-57548d458d-cgjf2\" (UID: \"b47ead63-1562-466f-887b-54c155983ebf\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-cgjf2" Dec 03 13:14:44 crc kubenswrapper[4986]: E1203 13:14:44.463498 4986 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 03 13:14:44 crc kubenswrapper[4986]: E1203 13:14:44.463555 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b47ead63-1562-466f-887b-54c155983ebf-cert 
podName:b47ead63-1562-466f-887b-54c155983ebf nodeName:}" failed. No retries permitted until 2025-12-03 13:14:52.46353613 +0000 UTC m=+1151.929967321 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b47ead63-1562-466f-887b-54c155983ebf-cert") pod "infra-operator-controller-manager-57548d458d-cgjf2" (UID: "b47ead63-1562-466f-887b-54c155983ebf") : secret "infra-operator-webhook-server-cert" not found Dec 03 13:14:44 crc kubenswrapper[4986]: I1203 13:14:44.970201 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/400b4a35-c3f1-409e-83fe-019ff145c65a-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4fw9mm\" (UID: \"400b4a35-c3f1-409e-83fe-019ff145c65a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fw9mm" Dec 03 13:14:44 crc kubenswrapper[4986]: E1203 13:14:44.970902 4986 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 03 13:14:44 crc kubenswrapper[4986]: E1203 13:14:44.970954 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/400b4a35-c3f1-409e-83fe-019ff145c65a-cert podName:400b4a35-c3f1-409e-83fe-019ff145c65a nodeName:}" failed. No retries permitted until 2025-12-03 13:14:52.970938764 +0000 UTC m=+1152.437369955 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/400b4a35-c3f1-409e-83fe-019ff145c65a-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4fw9mm" (UID: "400b4a35-c3f1-409e-83fe-019ff145c65a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 03 13:14:45 crc kubenswrapper[4986]: I1203 13:14:45.475237 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cd923320-06e2-4933-bc26-a4c947ab732b-metrics-certs\") pod \"openstack-operator-controller-manager-5cd9cc65cb-tjrw7\" (UID: \"cd923320-06e2-4933-bc26-a4c947ab732b\") " pod="openstack-operators/openstack-operator-controller-manager-5cd9cc65cb-tjrw7" Dec 03 13:14:45 crc kubenswrapper[4986]: I1203 13:14:45.475383 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cd923320-06e2-4933-bc26-a4c947ab732b-webhook-certs\") pod \"openstack-operator-controller-manager-5cd9cc65cb-tjrw7\" (UID: \"cd923320-06e2-4933-bc26-a4c947ab732b\") " pod="openstack-operators/openstack-operator-controller-manager-5cd9cc65cb-tjrw7" Dec 03 13:14:45 crc kubenswrapper[4986]: E1203 13:14:45.475524 4986 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 03 13:14:45 crc kubenswrapper[4986]: E1203 13:14:45.475547 4986 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 03 13:14:45 crc kubenswrapper[4986]: E1203 13:14:45.475599 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cd923320-06e2-4933-bc26-a4c947ab732b-webhook-certs podName:cd923320-06e2-4933-bc26-a4c947ab732b nodeName:}" failed. No retries permitted until 2025-12-03 13:14:53.475585543 +0000 UTC m=+1152.942016734 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/cd923320-06e2-4933-bc26-a4c947ab732b-webhook-certs") pod "openstack-operator-controller-manager-5cd9cc65cb-tjrw7" (UID: "cd923320-06e2-4933-bc26-a4c947ab732b") : secret "webhook-server-cert" not found Dec 03 13:14:45 crc kubenswrapper[4986]: E1203 13:14:45.475627 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cd923320-06e2-4933-bc26-a4c947ab732b-metrics-certs podName:cd923320-06e2-4933-bc26-a4c947ab732b nodeName:}" failed. No retries permitted until 2025-12-03 13:14:53.475609744 +0000 UTC m=+1152.942040935 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cd923320-06e2-4933-bc26-a4c947ab732b-metrics-certs") pod "openstack-operator-controller-manager-5cd9cc65cb-tjrw7" (UID: "cd923320-06e2-4933-bc26-a4c947ab732b") : secret "metrics-server-cert" not found Dec 03 13:14:51 crc kubenswrapper[4986]: E1203 13:14:51.122015 4986 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f" Dec 03 13:14:51 crc kubenswrapper[4986]: E1203 13:14:51.122822 4986 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-czdjt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-j4bg2_openstack-operators(c6bc4b54-a3be-4a2c-813e-fa19eea0dbe8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 13:14:51 crc kubenswrapper[4986]: E1203 13:14:51.578595 4986 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:1d60701214b39cdb0fa70bbe5710f9b131139a9f4b482c2db4058a04daefb801" Dec 03 13:14:51 crc kubenswrapper[4986]: E1203 13:14:51.578818 4986 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:1d60701214b39cdb0fa70bbe5710f9b131139a9f4b482c2db4058a04daefb801,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7sn4r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-859b6ccc6-l48wq_openstack-operators(f3328b2b-d4e4-4b39-a949-bfd1463596f0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 13:14:52 crc kubenswrapper[4986]: E1203 13:14:52.085953 4986 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:abdb733b01e92ac17f565762f30f1d075b44c16421bd06e557f6bb3c319e1809" Dec 03 13:14:52 crc kubenswrapper[4986]: E1203 13:14:52.086167 4986 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:abdb733b01e92ac17f565762f30f1d075b44c16421bd06e557f6bb3c319e1809,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rjjbn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-77987cd8cd-5vc5p_openstack-operators(8af63121-8727-4c23-b872-554fe679fc2f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 13:14:52 crc kubenswrapper[4986]: E1203 13:14:52.473836 4986 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:600ca007e493d3af0fcc2ebac92e8da5efd2afe812b62d7d3d4dd0115bdf05d7" Dec 03 13:14:52 crc kubenswrapper[4986]: E1203 13:14:52.474055 4986 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:600ca007e493d3af0fcc2ebac92e8da5efd2afe812b62d7d3d4dd0115bdf05d7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4cqrx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-56bbcc9d85-bkf5w_openstack-operators(3d88c3a6-1643-4fff-acbe-2327b9878103): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 13:14:52 crc kubenswrapper[4986]: I1203 13:14:52.477309 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b47ead63-1562-466f-887b-54c155983ebf-cert\") pod \"infra-operator-controller-manager-57548d458d-cgjf2\" (UID: \"b47ead63-1562-466f-887b-54c155983ebf\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-cgjf2" Dec 03 13:14:52 crc kubenswrapper[4986]: I1203 13:14:52.487886 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b47ead63-1562-466f-887b-54c155983ebf-cert\") pod \"infra-operator-controller-manager-57548d458d-cgjf2\" (UID: \"b47ead63-1562-466f-887b-54c155983ebf\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-cgjf2" Dec 03 13:14:52 crc kubenswrapper[4986]: I1203 13:14:52.591703 4986 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-vrwbj" Dec 03 13:14:52 crc kubenswrapper[4986]: I1203 13:14:52.600922 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-cgjf2" Dec 03 13:14:52 crc kubenswrapper[4986]: I1203 13:14:52.983352 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/400b4a35-c3f1-409e-83fe-019ff145c65a-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4fw9mm\" (UID: \"400b4a35-c3f1-409e-83fe-019ff145c65a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fw9mm" Dec 03 13:14:52 crc kubenswrapper[4986]: E1203 13:14:52.986155 4986 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59" Dec 03 13:14:52 crc kubenswrapper[4986]: E1203 13:14:52.986361 4986 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pdhhl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-lvpbh_openstack-operators(42fe5051-78e1-45ab-9766-dbd119c4e060): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 13:14:52 crc kubenswrapper[4986]: I1203 13:14:52.998918 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/400b4a35-c3f1-409e-83fe-019ff145c65a-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4fw9mm\" (UID: 
\"400b4a35-c3f1-409e-83fe-019ff145c65a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fw9mm" Dec 03 13:14:53 crc kubenswrapper[4986]: I1203 13:14:53.300940 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-zrvs2" Dec 03 13:14:53 crc kubenswrapper[4986]: I1203 13:14:53.308853 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fw9mm" Dec 03 13:14:53 crc kubenswrapper[4986]: I1203 13:14:53.490140 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cd923320-06e2-4933-bc26-a4c947ab732b-webhook-certs\") pod \"openstack-operator-controller-manager-5cd9cc65cb-tjrw7\" (UID: \"cd923320-06e2-4933-bc26-a4c947ab732b\") " pod="openstack-operators/openstack-operator-controller-manager-5cd9cc65cb-tjrw7" Dec 03 13:14:53 crc kubenswrapper[4986]: I1203 13:14:53.490218 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cd923320-06e2-4933-bc26-a4c947ab732b-metrics-certs\") pod \"openstack-operator-controller-manager-5cd9cc65cb-tjrw7\" (UID: \"cd923320-06e2-4933-bc26-a4c947ab732b\") " pod="openstack-operators/openstack-operator-controller-manager-5cd9cc65cb-tjrw7" Dec 03 13:14:53 crc kubenswrapper[4986]: E1203 13:14:53.490363 4986 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 03 13:14:53 crc kubenswrapper[4986]: E1203 13:14:53.490432 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cd923320-06e2-4933-bc26-a4c947ab732b-webhook-certs podName:cd923320-06e2-4933-bc26-a4c947ab732b nodeName:}" failed. 
No retries permitted until 2025-12-03 13:15:09.490412917 +0000 UTC m=+1168.956844108 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/cd923320-06e2-4933-bc26-a4c947ab732b-webhook-certs") pod "openstack-operator-controller-manager-5cd9cc65cb-tjrw7" (UID: "cd923320-06e2-4933-bc26-a4c947ab732b") : secret "webhook-server-cert" not found Dec 03 13:14:53 crc kubenswrapper[4986]: I1203 13:14:53.507981 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cd923320-06e2-4933-bc26-a4c947ab732b-metrics-certs\") pod \"openstack-operator-controller-manager-5cd9cc65cb-tjrw7\" (UID: \"cd923320-06e2-4933-bc26-a4c947ab732b\") " pod="openstack-operators/openstack-operator-controller-manager-5cd9cc65cb-tjrw7" Dec 03 13:14:54 crc kubenswrapper[4986]: I1203 13:14:54.929752 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-cgjf2"] Dec 03 13:14:55 crc kubenswrapper[4986]: W1203 13:14:55.689396 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb47ead63_1562_466f_887b_54c155983ebf.slice/crio-2942af9000ac9b669f2533c17146525417f78c2b025844d1e74984df5e06b730 WatchSource:0}: Error finding container 2942af9000ac9b669f2533c17146525417f78c2b025844d1e74984df5e06b730: Status 404 returned error can't find the container with id 2942af9000ac9b669f2533c17146525417f78c2b025844d1e74984df5e06b730 Dec 03 13:14:56 crc kubenswrapper[4986]: I1203 13:14:56.110020 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fw9mm"] Dec 03 13:14:56 crc kubenswrapper[4986]: I1203 13:14:56.161018 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-cgjf2" 
event={"ID":"b47ead63-1562-466f-887b-54c155983ebf","Type":"ContainerStarted","Data":"2942af9000ac9b669f2533c17146525417f78c2b025844d1e74984df5e06b730"} Dec 03 13:15:00 crc kubenswrapper[4986]: I1203 13:15:00.143959 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412795-l86x6"] Dec 03 13:15:00 crc kubenswrapper[4986]: I1203 13:15:00.145655 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412795-l86x6" Dec 03 13:15:00 crc kubenswrapper[4986]: I1203 13:15:00.148923 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 03 13:15:00 crc kubenswrapper[4986]: I1203 13:15:00.149130 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 03 13:15:00 crc kubenswrapper[4986]: I1203 13:15:00.152510 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412795-l86x6"] Dec 03 13:15:00 crc kubenswrapper[4986]: I1203 13:15:00.200017 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fw9mm" event={"ID":"400b4a35-c3f1-409e-83fe-019ff145c65a","Type":"ContainerStarted","Data":"e872df66057501d500e5d6e036f59c4d3929c74050fd9cde6655a6c32928bfb5"} Dec 03 13:15:00 crc kubenswrapper[4986]: I1203 13:15:00.286599 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrqjz\" (UniqueName: \"kubernetes.io/projected/dd86e79f-37fe-4393-8bd8-28a14d6d5537-kube-api-access-jrqjz\") pod \"collect-profiles-29412795-l86x6\" (UID: \"dd86e79f-37fe-4393-8bd8-28a14d6d5537\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412795-l86x6" Dec 03 13:15:00 crc 
kubenswrapper[4986]: I1203 13:15:00.286654 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dd86e79f-37fe-4393-8bd8-28a14d6d5537-secret-volume\") pod \"collect-profiles-29412795-l86x6\" (UID: \"dd86e79f-37fe-4393-8bd8-28a14d6d5537\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412795-l86x6" Dec 03 13:15:00 crc kubenswrapper[4986]: I1203 13:15:00.286684 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dd86e79f-37fe-4393-8bd8-28a14d6d5537-config-volume\") pod \"collect-profiles-29412795-l86x6\" (UID: \"dd86e79f-37fe-4393-8bd8-28a14d6d5537\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412795-l86x6" Dec 03 13:15:00 crc kubenswrapper[4986]: I1203 13:15:00.388355 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dd86e79f-37fe-4393-8bd8-28a14d6d5537-secret-volume\") pod \"collect-profiles-29412795-l86x6\" (UID: \"dd86e79f-37fe-4393-8bd8-28a14d6d5537\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412795-l86x6" Dec 03 13:15:00 crc kubenswrapper[4986]: I1203 13:15:00.388463 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dd86e79f-37fe-4393-8bd8-28a14d6d5537-config-volume\") pod \"collect-profiles-29412795-l86x6\" (UID: \"dd86e79f-37fe-4393-8bd8-28a14d6d5537\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412795-l86x6" Dec 03 13:15:00 crc kubenswrapper[4986]: I1203 13:15:00.388650 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrqjz\" (UniqueName: \"kubernetes.io/projected/dd86e79f-37fe-4393-8bd8-28a14d6d5537-kube-api-access-jrqjz\") pod 
\"collect-profiles-29412795-l86x6\" (UID: \"dd86e79f-37fe-4393-8bd8-28a14d6d5537\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412795-l86x6" Dec 03 13:15:00 crc kubenswrapper[4986]: I1203 13:15:00.389350 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dd86e79f-37fe-4393-8bd8-28a14d6d5537-config-volume\") pod \"collect-profiles-29412795-l86x6\" (UID: \"dd86e79f-37fe-4393-8bd8-28a14d6d5537\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412795-l86x6" Dec 03 13:15:00 crc kubenswrapper[4986]: I1203 13:15:00.396825 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dd86e79f-37fe-4393-8bd8-28a14d6d5537-secret-volume\") pod \"collect-profiles-29412795-l86x6\" (UID: \"dd86e79f-37fe-4393-8bd8-28a14d6d5537\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412795-l86x6" Dec 03 13:15:00 crc kubenswrapper[4986]: I1203 13:15:00.403024 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrqjz\" (UniqueName: \"kubernetes.io/projected/dd86e79f-37fe-4393-8bd8-28a14d6d5537-kube-api-access-jrqjz\") pod \"collect-profiles-29412795-l86x6\" (UID: \"dd86e79f-37fe-4393-8bd8-28a14d6d5537\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412795-l86x6" Dec 03 13:15:00 crc kubenswrapper[4986]: I1203 13:15:00.469398 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412795-l86x6" Dec 03 13:15:01 crc kubenswrapper[4986]: I1203 13:15:01.212994 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-blk7z" event={"ID":"1ecc0034-a740-410d-a135-6b65d34ce64d","Type":"ContainerStarted","Data":"f4ea5f29de190b9d766c49e06ca26930c21f72a7388cc82e781475def34650c4"} Dec 03 13:15:01 crc kubenswrapper[4986]: I1203 13:15:01.216948 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-nnhsz" event={"ID":"f6268841-12af-4fa7-a9ab-54927e3256cf","Type":"ContainerStarted","Data":"e6f6f23a1374d9d48917dd374acea4efe0279b29f8d783ab1c71f5527b0809a3"} Dec 03 13:15:01 crc kubenswrapper[4986]: I1203 13:15:01.218719 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-fw7nw" event={"ID":"924573df-b6fe-4d17-add4-376f76084fab","Type":"ContainerStarted","Data":"60b8bd48b5ca8e596d4eb40f3694588f8c8ef34165f7201d75ecce52ed536571"} Dec 03 13:15:01 crc kubenswrapper[4986]: I1203 13:15:01.220088 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-h49tf" event={"ID":"f2be2f63-f6d7-425a-8ce1-d2bc205e24f0","Type":"ContainerStarted","Data":"a64d52eb0cf34dc141794237b7a9f60b9579127a8259cf6ff7a2ddf5f5912d8c"} Dec 03 13:15:01 crc kubenswrapper[4986]: I1203 13:15:01.221645 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-bm5lk" event={"ID":"29ac6999-88ff-472f-a03e-0b95f1042d38","Type":"ContainerStarted","Data":"15a2c90e066e71c3238e3d36c738de3b05360c3d149fe112649e83709db7199c"} Dec 03 13:15:01 crc kubenswrapper[4986]: I1203 13:15:01.223433 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-8z7gr" event={"ID":"69fed752-e65d-4007-a731-3faee6335366","Type":"ContainerStarted","Data":"308982b08db9607cb063272e15c06b73f060f1657b85b8459e279a8f36516bde"} Dec 03 13:15:01 crc kubenswrapper[4986]: I1203 13:15:01.714323 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412795-l86x6"] Dec 03 13:15:03 crc kubenswrapper[4986]: I1203 13:15:03.491467 4986 patch_prober.go:28] interesting pod/machine-config-daemon-xggpj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 13:15:03 crc kubenswrapper[4986]: I1203 13:15:03.491543 4986 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 13:15:03 crc kubenswrapper[4986]: I1203 13:15:03.491666 4986 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" Dec 03 13:15:03 crc kubenswrapper[4986]: I1203 13:15:03.492272 4986 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ab3578e12fb223968075bdef0a7259f6a8a78f54ad544fd0e2d4be111538db67"} pod="openshift-machine-config-operator/machine-config-daemon-xggpj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 13:15:03 crc kubenswrapper[4986]: I1203 13:15:03.492355 4986 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" containerID="cri-o://ab3578e12fb223968075bdef0a7259f6a8a78f54ad544fd0e2d4be111538db67" gracePeriod=600 Dec 03 13:15:09 crc kubenswrapper[4986]: I1203 13:15:09.526858 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cd923320-06e2-4933-bc26-a4c947ab732b-webhook-certs\") pod \"openstack-operator-controller-manager-5cd9cc65cb-tjrw7\" (UID: \"cd923320-06e2-4933-bc26-a4c947ab732b\") " pod="openstack-operators/openstack-operator-controller-manager-5cd9cc65cb-tjrw7" Dec 03 13:15:09 crc kubenswrapper[4986]: I1203 13:15:09.535207 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cd923320-06e2-4933-bc26-a4c947ab732b-webhook-certs\") pod \"openstack-operator-controller-manager-5cd9cc65cb-tjrw7\" (UID: \"cd923320-06e2-4933-bc26-a4c947ab732b\") " pod="openstack-operators/openstack-operator-controller-manager-5cd9cc65cb-tjrw7" Dec 03 13:15:09 crc kubenswrapper[4986]: I1203 13:15:09.719231 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-qvht6" Dec 03 13:15:09 crc kubenswrapper[4986]: I1203 13:15:09.728474 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5cd9cc65cb-tjrw7" Dec 03 13:15:11 crc kubenswrapper[4986]: I1203 13:15:11.296661 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-h5sqr" event={"ID":"f01db271-4787-4af7-b37b-5ba6e4e2e5b7","Type":"ContainerStarted","Data":"b62b87dbe30fa829a29faf16ceab0844e56b789a6843fd357957164ee961b53e"} Dec 03 13:15:11 crc kubenswrapper[4986]: I1203 13:15:11.298582 4986 generic.go:334] "Generic (PLEG): container finished" podID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerID="ab3578e12fb223968075bdef0a7259f6a8a78f54ad544fd0e2d4be111538db67" exitCode=0 Dec 03 13:15:11 crc kubenswrapper[4986]: I1203 13:15:11.298617 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" event={"ID":"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c","Type":"ContainerDied","Data":"ab3578e12fb223968075bdef0a7259f6a8a78f54ad544fd0e2d4be111538db67"} Dec 03 13:15:11 crc kubenswrapper[4986]: I1203 13:15:11.298645 4986 scope.go:117] "RemoveContainer" containerID="8955c8a516ca8c4bf5f23613b3c8c76be6f843b93d44621c8aa9c9ffd7fe8443" Dec 03 13:15:13 crc kubenswrapper[4986]: I1203 13:15:13.323310 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412795-l86x6" event={"ID":"dd86e79f-37fe-4393-8bd8-28a14d6d5537","Type":"ContainerStarted","Data":"4e0010e3aeb155ad21c6b4dad796016acc2022fbdb61d451028605b5000cc06c"} Dec 03 13:15:14 crc kubenswrapper[4986]: E1203 13:15:14.719502 4986 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168" Dec 03 13:15:14 crc kubenswrapper[4986]: E1203 13:15:14.719724 4986 kuberuntime_manager.go:1274] "Unhandled 
Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dvzgl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-xcs9s_openstack-operators(be530205-b10b-4d4b-9fa3-4d9d0548054c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 13:15:15 crc kubenswrapper[4986]: E1203 13:15:15.216271 4986 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670" Dec 03 13:15:15 crc kubenswrapper[4986]: E1203 13:15:15.216470 4986 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-swstz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-7t7xl_openstack-operators(d601bb24-2bd9-478a-96d1-ed2001bd53b6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 13:15:15 crc kubenswrapper[4986]: E1203 13:15:15.813636 4986 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:14cfad6ea2e7f7ecc4cb2aafceb9c61514b3d04b66668832d1e4ac3b19f1ab81" Dec 03 13:15:15 crc kubenswrapper[4986]: E1203 13:15:15.814125 4986 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:14cfad6ea2e7f7ecc4cb2aafceb9c61514b3d04b66668832d1e4ac3b19f1ab81,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-evaluator:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-notifier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT,Value:registry.redhat.io/ubi9/httpd-24:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOME
TER_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/mysqld-exporter:v0.15.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/sg-core:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-backup:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-volume:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_API_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_PROC_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-processor:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-backend-bind9:current-podified,ValueFrom
:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-mdns:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-producer:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-unbound:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-frr:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-iscsid:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT,Value:quay.io/sustainable_computing_io/kepler:release-0.7.12,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cron:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-multipathd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack
-neutron-ovn-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/node-exporter:v1.5.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-bgp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/navidys/prometheus-podman-exporter:v1.10.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api-cfn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-redis:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-conductor:current-podified,ValueFrom:nil,},EnvVa
r{Name:RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PXE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/ironic-python-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT,Value:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-share:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MARIADB_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-netutils:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_COMPUT
E_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-novncproxy:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-health-manager:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-housekeeping:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rsyslog:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-must-gather:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/edpm-hardened-uefi:current-podified,ValueFrom:nil,}
,EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-account:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-container:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-object:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-api:current-podified,ValueF
rom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-applier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-decision-engine:current-podified,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g7jql,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-baremetal-operator-controller-manager-64bc77cfd4fw9mm_openstack-operators(400b4a35-c3f1-409e-83fe-019ff145c65a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 13:15:16 crc kubenswrapper[4986]: E1203 13:15:16.494642 4986 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/infra-operator@sha256:09a6d0613ee2d3c1c809fc36c22678458ac271e0da87c970aec0a5339f5423f7" Dec 03 13:15:16 crc kubenswrapper[4986]: E1203 13:15:16.495175 4986 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/infra-operator@sha256:09a6d0613ee2d3c1c809fc36c22678458ac271e0da87c970aec0a5339f5423f7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{600 -3} {} 600m DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{536870912 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bm2nb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infra-operator-controller-manager-57548d458d-cgjf2_openstack-operators(b47ead63-1562-466f-887b-54c155983ebf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 13:15:17 crc kubenswrapper[4986]: E1203 13:15:17.115454 4986 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = writing blob: storing blob to file \"/var/tmp/container_images_storage2477888404/5\": happened during read: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 03 13:15:17 crc kubenswrapper[4986]: E1203 13:15:17.115924 4986 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m 
DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7sn4r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-859b6ccc6-l48wq_openstack-operators(f3328b2b-d4e4-4b39-a949-bfd1463596f0): ErrImagePull: rpc error: code = Canceled desc = writing blob: storing blob to file \"/var/tmp/container_images_storage2477888404/5\": happened during read: context canceled" logger="UnhandledError" Dec 03 13:15:17 crc kubenswrapper[4986]: E1203 13:15:17.117213 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = writing blob: storing blob to file \\\"/var/tmp/container_images_storage2477888404/5\\\": happened during read: context canceled\"]" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-l48wq" podUID="f3328b2b-d4e4-4b39-a949-bfd1463596f0" Dec 03 13:15:17 crc kubenswrapper[4986]: E1203 13:15:17.230978 4986 log.go:32] "PullImage from image service failed" err="rpc error: code = 
Canceled desc = copying layer: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 03 13:15:17 crc kubenswrapper[4986]: E1203 13:15:17.232225 4986 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4v98c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-5fdfd5b6b5-blk7z_openstack-operators(1ecc0034-a740-410d-a135-6b65d34ce64d): ErrImagePull: rpc error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError" Dec 03 13:15:17 crc kubenswrapper[4986]: E1203 13:15:17.233657 4986 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-blk7z" podUID="1ecc0034-a740-410d-a135-6b65d34ce64d" Dec 03 13:15:17 crc kubenswrapper[4986]: I1203 13:15:17.295702 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5cd9cc65cb-tjrw7"] Dec 03 13:15:17 crc kubenswrapper[4986]: I1203 13:15:17.361435 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-726wj" event={"ID":"a7413edd-cf2a-4756-b6b7-afe4e4e42fe6","Type":"ContainerStarted","Data":"7b5233a813c82ff7cc45414eb11a3e6d3bdd2baa75f36da64875dd5371dc090f"} Dec 03 13:15:17 crc kubenswrapper[4986]: I1203 13:15:17.364014 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-ph2qf" event={"ID":"2d441496-72d9-462b-aea6-e2588499fbf0","Type":"ContainerStarted","Data":"2509041e6f9bf58ceb9339eb7c8528e83049e70da04376c1ad06b8d818a9bb9e"} Dec 03 13:15:17 crc kubenswrapper[4986]: I1203 13:15:17.364871 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9ghmd" event={"ID":"b9df4fcc-97aa-4a32-acbf-25f42addf8cc","Type":"ContainerStarted","Data":"34d62fc22f7e25469c8455bcb329b2eec970270113efd8d076d1aa7726a56b17"} Dec 03 13:15:17 crc kubenswrapper[4986]: I1203 13:15:17.369041 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-c6wk7" event={"ID":"a5090dbe-8e6f-4865-92a3-28720422db9f","Type":"ContainerStarted","Data":"eeac3ca01afaca4faf933bd5a9f1776b10019ce3abd6daeae3d693bb6d090c45"} Dec 03 13:15:17 crc kubenswrapper[4986]: I1203 13:15:17.375422 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/test-operator-controller-manager-5854674fcc-bg4vv" event={"ID":"276259ca-95c1-41c2-803f-b82904067552","Type":"ContainerStarted","Data":"8fa97f3e1422763347e78a9c89b649bf867519f16c3df35eb6d0e65a274b470b"} Dec 03 13:15:17 crc kubenswrapper[4986]: I1203 13:15:17.381383 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-jgxqz" event={"ID":"ac6af48b-36c2-427c-93ad-090cc34434f7","Type":"ContainerStarted","Data":"576c72155e4fa0ab1766996ed055f22696ebfc223042428c8d13ebba70a15b8d"} Dec 03 13:15:17 crc kubenswrapper[4986]: I1203 13:15:17.385966 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-blk7z" Dec 03 13:15:17 crc kubenswrapper[4986]: E1203 13:15:17.390335 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-blk7z" podUID="1ecc0034-a740-410d-a135-6b65d34ce64d" Dec 03 13:15:17 crc kubenswrapper[4986]: I1203 13:15:17.401912 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-blk7z" Dec 03 13:15:17 crc kubenswrapper[4986]: I1203 13:15:17.404578 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9ghmd" podStartSLOduration=18.886198963 podStartE2EDuration="40.404556906s" podCreationTimestamp="2025-12-03 13:14:37 +0000 UTC" firstStartedPulling="2025-12-03 13:14:38.669670304 +0000 UTC m=+1138.136101495" lastFinishedPulling="2025-12-03 13:15:00.188028237 +0000 UTC m=+1159.654459438" observedRunningTime="2025-12-03 13:15:17.390904873 +0000 UTC m=+1176.857336064" 
watchObservedRunningTime="2025-12-03 13:15:17.404556906 +0000 UTC m=+1176.870988107" Dec 03 13:15:17 crc kubenswrapper[4986]: I1203 13:15:17.508444 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29412795-l86x6" podStartSLOduration=17.508424395 podStartE2EDuration="17.508424395s" podCreationTimestamp="2025-12-03 13:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:15:17.50755523 +0000 UTC m=+1176.973986431" watchObservedRunningTime="2025-12-03 13:15:17.508424395 +0000 UTC m=+1176.974855586" Dec 03 13:15:18 crc kubenswrapper[4986]: E1203 13:15:18.318695 4986 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 03 13:15:18 crc kubenswrapper[4986]: E1203 13:15:18.319059 4986 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-82xw2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-5f64f6f8bb-nnhsz_openstack-operators(f6268841-12af-4fa7-a9ab-54927e3256cf): ErrImagePull: rpc error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError" Dec 03 13:15:18 crc kubenswrapper[4986]: E1203 13:15:18.322619 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-nnhsz" podUID="f6268841-12af-4fa7-a9ab-54927e3256cf" Dec 03 13:15:18 crc kubenswrapper[4986]: W1203 13:15:18.322999 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd923320_06e2_4933_bc26_a4c947ab732b.slice/crio-efa8f94ba0eaa9be684e7c0566f067c7ceb956eadbfd2479638c84884367d214 WatchSource:0}: Error finding container efa8f94ba0eaa9be684e7c0566f067c7ceb956eadbfd2479638c84884367d214: Status 404 returned error can't find the container with id 
efa8f94ba0eaa9be684e7c0566f067c7ceb956eadbfd2479638c84884367d214 Dec 03 13:15:18 crc kubenswrapper[4986]: E1203 13:15:18.323241 4986 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 03 13:15:18 crc kubenswrapper[4986]: E1203 13:15:18.323402 4986 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p7msw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-7d9dfd778-8z7gr_openstack-operators(69fed752-e65d-4007-a731-3faee6335366): ErrImagePull: rpc 
error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError" Dec 03 13:15:18 crc kubenswrapper[4986]: E1203 13:15:18.324645 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-8z7gr" podUID="69fed752-e65d-4007-a731-3faee6335366" Dec 03 13:15:18 crc kubenswrapper[4986]: E1203 13:15:18.326046 4986 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 03 13:15:18 crc kubenswrapper[4986]: E1203 13:15:18.326169 4986 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sbbwg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7765d96ddf-bm5lk_openstack-operators(29ac6999-88ff-472f-a03e-0b95f1042d38): ErrImagePull: rpc error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError" Dec 03 13:15:18 crc kubenswrapper[4986]: E1203 13:15:18.327358 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-bm5lk" podUID="29ac6999-88ff-472f-a03e-0b95f1042d38" Dec 03 13:15:18 crc kubenswrapper[4986]: E1203 13:15:18.332613 4986 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 03 13:15:18 crc kubenswrapper[4986]: E1203 13:15:18.332740 4986 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-czdjt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-j4bg2_openstack-operators(c6bc4b54-a3be-4a2c-813e-fa19eea0dbe8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 13:15:18 crc kubenswrapper[4986]: E1203 13:15:18.333927 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = 
Canceled desc = copying config: context canceled\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-j4bg2" podUID="c6bc4b54-a3be-4a2c-813e-fa19eea0dbe8" Dec 03 13:15:18 crc kubenswrapper[4986]: I1203 13:15:18.395874 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5cd9cc65cb-tjrw7" event={"ID":"cd923320-06e2-4933-bc26-a4c947ab732b","Type":"ContainerStarted","Data":"efa8f94ba0eaa9be684e7c0566f067c7ceb956eadbfd2479638c84884367d214"} Dec 03 13:15:18 crc kubenswrapper[4986]: I1203 13:15:18.397109 4986 generic.go:334] "Generic (PLEG): container finished" podID="dd86e79f-37fe-4393-8bd8-28a14d6d5537" containerID="0ece90572c8bcfc6111a92ea297f6581fb5268fbe4e96f263b4af737c4687b04" exitCode=0 Dec 03 13:15:18 crc kubenswrapper[4986]: I1203 13:15:18.397804 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412795-l86x6" event={"ID":"dd86e79f-37fe-4393-8bd8-28a14d6d5537","Type":"ContainerDied","Data":"0ece90572c8bcfc6111a92ea297f6581fb5268fbe4e96f263b4af737c4687b04"} Dec 03 13:15:18 crc kubenswrapper[4986]: E1203 13:15:18.398418 4986 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 03 13:15:18 crc kubenswrapper[4986]: E1203 13:15:18.398614 4986 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: 
{{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rjjbn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-77987cd8cd-5vc5p_openstack-operators(8af63121-8727-4c23-b872-554fe679fc2f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 13:15:18 crc kubenswrapper[4986]: I1203 13:15:18.398893 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-nnhsz" Dec 03 13:15:18 crc kubenswrapper[4986]: I1203 13:15:18.398921 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-8z7gr" Dec 03 13:15:18 crc kubenswrapper[4986]: I1203 13:15:18.399144 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-bm5lk" Dec 03 13:15:18 crc kubenswrapper[4986]: I1203 13:15:18.401245 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-8z7gr" 
Dec 03 13:15:18 crc kubenswrapper[4986]: E1203 13:15:18.401614 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-5vc5p" podUID="8af63121-8727-4c23-b872-554fe679fc2f" Dec 03 13:15:18 crc kubenswrapper[4986]: I1203 13:15:18.401872 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-nnhsz" Dec 03 13:15:18 crc kubenswrapper[4986]: I1203 13:15:18.402249 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-bm5lk" Dec 03 13:15:18 crc kubenswrapper[4986]: E1203 13:15:18.515516 4986 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 03 13:15:18 crc kubenswrapper[4986]: E1203 13:15:18.515661 4986 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pdhhl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-lvpbh_openstack-operators(42fe5051-78e1-45ab-9766-dbd119c4e060): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 13:15:18 crc kubenswrapper[4986]: E1203 13:15:18.519524 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lvpbh" podUID="42fe5051-78e1-45ab-9766-dbd119c4e060" Dec 03 13:15:19 crc kubenswrapper[4986]: E1203 13:15:19.265989 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/infra-operator-controller-manager-57548d458d-cgjf2" 
podUID="b47ead63-1562-466f-887b-54c155983ebf" Dec 03 13:15:19 crc kubenswrapper[4986]: E1203 13:15:19.395453 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-bkf5w" podUID="3d88c3a6-1643-4fff-acbe-2327b9878103" Dec 03 13:15:19 crc kubenswrapper[4986]: I1203 13:15:19.411057 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-cgjf2" event={"ID":"b47ead63-1562-466f-887b-54c155983ebf","Type":"ContainerStarted","Data":"402b981dde44f18b33311b92c7330c5c67f1acc2bccad9916d608b7a6e73e4c8"} Dec 03 13:15:19 crc kubenswrapper[4986]: E1203 13:15:19.414403 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:09a6d0613ee2d3c1c809fc36c22678458ac271e0da87c970aec0a5339f5423f7\\\"\"" pod="openstack-operators/infra-operator-controller-manager-57548d458d-cgjf2" podUID="b47ead63-1562-466f-887b-54c155983ebf" Dec 03 13:15:19 crc kubenswrapper[4986]: E1203 13:15:19.418916 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fw9mm" podUID="400b4a35-c3f1-409e-83fe-019ff145c65a" Dec 03 13:15:19 crc kubenswrapper[4986]: I1203 13:15:19.419164 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-bm5lk" event={"ID":"29ac6999-88ff-472f-a03e-0b95f1042d38","Type":"ContainerStarted","Data":"568eebf8346c353839e5a47a008ea182bf915def5e6e52e2260c114ce6011e93"} Dec 
03 13:15:19 crc kubenswrapper[4986]: I1203 13:15:19.422857 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-l48wq" event={"ID":"f3328b2b-d4e4-4b39-a949-bfd1463596f0","Type":"ContainerStarted","Data":"6f477b1952c795f2453291602073e6fba5caa3f3558c5f89306ca7d00814805c"} Dec 03 13:15:19 crc kubenswrapper[4986]: E1203 13:15:19.426293 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-998648c74-xcs9s" podUID="be530205-b10b-4d4b-9fa3-4d9d0548054c" Dec 03 13:15:19 crc kubenswrapper[4986]: I1203 13:15:19.426728 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-8z7gr" event={"ID":"69fed752-e65d-4007-a731-3faee6335366","Type":"ContainerStarted","Data":"d2890d92786f7acc268ebb6beb1f9ecb18d6424eeb679d70167747d8a6b19014"} Dec 03 13:15:19 crc kubenswrapper[4986]: I1203 13:15:19.428625 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5cd9cc65cb-tjrw7" event={"ID":"cd923320-06e2-4933-bc26-a4c947ab732b","Type":"ContainerStarted","Data":"006f4a69407fe1131a13feb11a29daed896fae882663cca15d7bcc48384c1149"} Dec 03 13:15:19 crc kubenswrapper[4986]: I1203 13:15:19.428764 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5cd9cc65cb-tjrw7" Dec 03 13:15:19 crc kubenswrapper[4986]: I1203 13:15:19.430953 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-nnhsz" event={"ID":"f6268841-12af-4fa7-a9ab-54927e3256cf","Type":"ContainerStarted","Data":"985e89bd06c44f6ba1399e67e3787424005f6e0adc625e2212e5063385384730"} Dec 03 
13:15:19 crc kubenswrapper[4986]: I1203 13:15:19.432789 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-bg4vv" event={"ID":"276259ca-95c1-41c2-803f-b82904067552","Type":"ContainerStarted","Data":"794195a3431818a20f366fc5f606546186b1a4993d68a0ff586b2107184c1adb"} Dec 03 13:15:19 crc kubenswrapper[4986]: I1203 13:15:19.434488 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5854674fcc-bg4vv" Dec 03 13:15:19 crc kubenswrapper[4986]: I1203 13:15:19.460968 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-fw7nw" event={"ID":"924573df-b6fe-4d17-add4-376f76084fab","Type":"ContainerStarted","Data":"a803f6c3a5bbca0bd5e24256f4b0b4dbf9634a54d48dccf095f08ba0479420d4"} Dec 03 13:15:19 crc kubenswrapper[4986]: I1203 13:15:19.461029 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-fw7nw" Dec 03 13:15:19 crc kubenswrapper[4986]: I1203 13:15:19.470025 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-fw7nw" Dec 03 13:15:19 crc kubenswrapper[4986]: I1203 13:15:19.485453 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-5cd9cc65cb-tjrw7" podStartSLOduration=42.485436826 podStartE2EDuration="42.485436826s" podCreationTimestamp="2025-12-03 13:14:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:15:19.482856765 +0000 UTC m=+1178.949287956" watchObservedRunningTime="2025-12-03 13:15:19.485436826 +0000 UTC m=+1178.951868007" Dec 03 13:15:19 crc kubenswrapper[4986]: I1203 13:15:19.502060 
4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-blk7z" event={"ID":"1ecc0034-a740-410d-a135-6b65d34ce64d","Type":"ContainerStarted","Data":"3305ee3e8d33b987d588d02d59a084a5c2c682a98b1b6e28c5e593cf0c0b3b9f"} Dec 03 13:15:19 crc kubenswrapper[4986]: I1203 13:15:19.510374 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-8z7gr" podStartSLOduration=27.09683437 podStartE2EDuration="43.510359596s" podCreationTimestamp="2025-12-03 13:14:36 +0000 UTC" firstStartedPulling="2025-12-03 13:14:38.06312914 +0000 UTC m=+1137.529560331" lastFinishedPulling="2025-12-03 13:14:54.476654356 +0000 UTC m=+1153.943085557" observedRunningTime="2025-12-03 13:15:19.508943297 +0000 UTC m=+1178.975374498" watchObservedRunningTime="2025-12-03 13:15:19.510359596 +0000 UTC m=+1178.976790787" Dec 03 13:15:19 crc kubenswrapper[4986]: I1203 13:15:19.547974 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" event={"ID":"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c","Type":"ContainerStarted","Data":"c5f89c3f3b886e65cc9822e8ac97f262b26f801c62b7d2e9ded96fcd903af71d"} Dec 03 13:15:19 crc kubenswrapper[4986]: I1203 13:15:19.548414 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-nnhsz" podStartSLOduration=25.982658667 podStartE2EDuration="43.548397016s" podCreationTimestamp="2025-12-03 13:14:36 +0000 UTC" firstStartedPulling="2025-12-03 13:14:38.079622811 +0000 UTC m=+1137.546054012" lastFinishedPulling="2025-12-03 13:14:55.64536117 +0000 UTC m=+1155.111792361" observedRunningTime="2025-12-03 13:15:19.544359526 +0000 UTC m=+1179.010790717" watchObservedRunningTime="2025-12-03 13:15:19.548397016 +0000 UTC m=+1179.014828207" Dec 03 13:15:19 crc kubenswrapper[4986]: E1203 
13:15:19.599410 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:14cfad6ea2e7f7ecc4cb2aafceb9c61514b3d04b66668832d1e4ac3b19f1ab81\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fw9mm" podUID="400b4a35-c3f1-409e-83fe-019ff145c65a" Dec 03 13:15:19 crc kubenswrapper[4986]: I1203 13:15:19.621383 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-bm5lk" podStartSLOduration=27.220018497 podStartE2EDuration="43.62136623s" podCreationTimestamp="2025-12-03 13:14:36 +0000 UTC" firstStartedPulling="2025-12-03 13:14:38.075547979 +0000 UTC m=+1137.541979170" lastFinishedPulling="2025-12-03 13:14:54.476895712 +0000 UTC m=+1153.943326903" observedRunningTime="2025-12-03 13:15:19.577432859 +0000 UTC m=+1179.043864050" watchObservedRunningTime="2025-12-03 13:15:19.62136623 +0000 UTC m=+1179.087797421" Dec 03 13:15:19 crc kubenswrapper[4986]: I1203 13:15:19.636878 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-h49tf" event={"ID":"f2be2f63-f6d7-425a-8ce1-d2bc205e24f0","Type":"ContainerStarted","Data":"1d4483b35df79ee9b5ce42308680a162b9e75996eb3bbfc2f6198afc311a4fab"} Dec 03 13:15:19 crc kubenswrapper[4986]: I1203 13:15:19.637640 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-h49tf" Dec 03 13:15:19 crc kubenswrapper[4986]: I1203 13:15:19.640655 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5854674fcc-bg4vv" podStartSLOduration=2.556476485 podStartE2EDuration="42.640636366s" podCreationTimestamp="2025-12-03 13:14:37 +0000 
UTC" firstStartedPulling="2025-12-03 13:14:38.411984202 +0000 UTC m=+1137.878415393" lastFinishedPulling="2025-12-03 13:15:18.496144083 +0000 UTC m=+1177.962575274" observedRunningTime="2025-12-03 13:15:19.620771663 +0000 UTC m=+1179.087202854" watchObservedRunningTime="2025-12-03 13:15:19.640636366 +0000 UTC m=+1179.107067557" Dec 03 13:15:19 crc kubenswrapper[4986]: I1203 13:15:19.646923 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-h49tf" Dec 03 13:15:19 crc kubenswrapper[4986]: I1203 13:15:19.679656 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-jgxqz" event={"ID":"ac6af48b-36c2-427c-93ad-090cc34434f7","Type":"ContainerStarted","Data":"89cdc4401b63d47df81c2aad0f4f88bb4e749f5af49ee8442782291775443bf8"} Dec 03 13:15:19 crc kubenswrapper[4986]: I1203 13:15:19.680387 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-jgxqz" Dec 03 13:15:19 crc kubenswrapper[4986]: I1203 13:15:19.734585 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-h5sqr" event={"ID":"f01db271-4787-4af7-b37b-5ba6e4e2e5b7","Type":"ContainerStarted","Data":"d8286ed1fee345fd9aa1e319fee0651d89e091059a31fdd803fec48b1809fe0e"} Dec 03 13:15:19 crc kubenswrapper[4986]: I1203 13:15:19.745208 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-h5sqr" Dec 03 13:15:19 crc kubenswrapper[4986]: I1203 13:15:19.747411 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-726wj" 
event={"ID":"a7413edd-cf2a-4756-b6b7-afe4e4e42fe6","Type":"ContainerStarted","Data":"5e034b057ddac136b704f68616d8628d945d306bf6dd11fd7cbc7f3c12e6be05"} Dec 03 13:15:19 crc kubenswrapper[4986]: I1203 13:15:19.747969 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-726wj" Dec 03 13:15:19 crc kubenswrapper[4986]: I1203 13:15:19.749439 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-ph2qf" event={"ID":"2d441496-72d9-462b-aea6-e2588499fbf0","Type":"ContainerStarted","Data":"fb23621081b09af17af18d6e4e5a5822c3ba1be37db34eccfadf1a9c9863fab1"} Dec 03 13:15:19 crc kubenswrapper[4986]: I1203 13:15:19.749776 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-ph2qf" Dec 03 13:15:19 crc kubenswrapper[4986]: I1203 13:15:19.765015 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-h5sqr" Dec 03 13:15:19 crc kubenswrapper[4986]: I1203 13:15:19.766184 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-bkf5w" event={"ID":"3d88c3a6-1643-4fff-acbe-2327b9878103","Type":"ContainerStarted","Data":"b51b18909b78e64633dc341fed9033d82704a6fb73b3372f90686843332af399"} Dec 03 13:15:19 crc kubenswrapper[4986]: I1203 13:15:19.770599 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-blk7z" podStartSLOduration=27.565846857 podStartE2EDuration="43.770587517s" podCreationTimestamp="2025-12-03 13:14:36 +0000 UTC" firstStartedPulling="2025-12-03 13:14:38.272156372 +0000 UTC m=+1137.738587563" lastFinishedPulling="2025-12-03 13:14:54.476897032 +0000 UTC m=+1153.943328223" 
observedRunningTime="2025-12-03 13:15:19.744858384 +0000 UTC m=+1179.211289575" watchObservedRunningTime="2025-12-03 13:15:19.770587517 +0000 UTC m=+1179.237018698" Dec 03 13:15:19 crc kubenswrapper[4986]: I1203 13:15:19.771272 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-fw7nw" podStartSLOduration=2.9051447379999997 podStartE2EDuration="43.771265586s" podCreationTimestamp="2025-12-03 13:14:36 +0000 UTC" firstStartedPulling="2025-12-03 13:14:37.630947221 +0000 UTC m=+1137.097378412" lastFinishedPulling="2025-12-03 13:15:18.497068069 +0000 UTC m=+1177.963499260" observedRunningTime="2025-12-03 13:15:19.70044041 +0000 UTC m=+1179.166871601" watchObservedRunningTime="2025-12-03 13:15:19.771265586 +0000 UTC m=+1179.237696777" Dec 03 13:15:19 crc kubenswrapper[4986]: I1203 13:15:19.839596 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-c6wk7" event={"ID":"a5090dbe-8e6f-4865-92a3-28720422db9f","Type":"ContainerStarted","Data":"a9edcb14c1a923d4a064e0a9c134ed0b38441111ce3b7358b3eab24941bf4428"} Dec 03 13:15:19 crc kubenswrapper[4986]: I1203 13:15:19.839953 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-c6wk7" Dec 03 13:15:19 crc kubenswrapper[4986]: I1203 13:15:19.889390 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-h49tf" podStartSLOduration=3.6371809710000003 podStartE2EDuration="43.889373483s" podCreationTimestamp="2025-12-03 13:14:36 +0000 UTC" firstStartedPulling="2025-12-03 13:14:38.243950391 +0000 UTC m=+1137.710381582" lastFinishedPulling="2025-12-03 13:15:18.496142903 +0000 UTC m=+1177.962574094" observedRunningTime="2025-12-03 13:15:19.875943466 +0000 UTC m=+1179.342374657" 
watchObservedRunningTime="2025-12-03 13:15:19.889373483 +0000 UTC m=+1179.355804674" Dec 03 13:15:20 crc kubenswrapper[4986]: I1203 13:15:19.992318 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-ph2qf" podStartSLOduration=3.120594181 podStartE2EDuration="42.992302456s" podCreationTimestamp="2025-12-03 13:14:37 +0000 UTC" firstStartedPulling="2025-12-03 13:14:38.651347633 +0000 UTC m=+1138.117778834" lastFinishedPulling="2025-12-03 13:15:18.523055918 +0000 UTC m=+1177.989487109" observedRunningTime="2025-12-03 13:15:19.936441009 +0000 UTC m=+1179.402872200" watchObservedRunningTime="2025-12-03 13:15:19.992302456 +0000 UTC m=+1179.458733647" Dec 03 13:15:20 crc kubenswrapper[4986]: I1203 13:15:20.111497 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-726wj" podStartSLOduration=4.01814562 podStartE2EDuration="44.111483092s" podCreationTimestamp="2025-12-03 13:14:36 +0000 UTC" firstStartedPulling="2025-12-03 13:14:38.416840465 +0000 UTC m=+1137.883271656" lastFinishedPulling="2025-12-03 13:15:18.510177947 +0000 UTC m=+1177.976609128" observedRunningTime="2025-12-03 13:15:20.109688342 +0000 UTC m=+1179.576119533" watchObservedRunningTime="2025-12-03 13:15:20.111483092 +0000 UTC m=+1179.577914283" Dec 03 13:15:20 crc kubenswrapper[4986]: I1203 13:15:20.134296 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-c6wk7" podStartSLOduration=4.214937238 podStartE2EDuration="44.134267485s" podCreationTimestamp="2025-12-03 13:14:36 +0000 UTC" firstStartedPulling="2025-12-03 13:14:38.633622179 +0000 UTC m=+1138.100053370" lastFinishedPulling="2025-12-03 13:15:18.552952426 +0000 UTC m=+1178.019383617" observedRunningTime="2025-12-03 13:15:20.129944016 +0000 UTC m=+1179.596375207" 
watchObservedRunningTime="2025-12-03 13:15:20.134267485 +0000 UTC m=+1179.600698676" Dec 03 13:15:20 crc kubenswrapper[4986]: E1203 13:15:20.139329 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-7t7xl" podUID="d601bb24-2bd9-478a-96d1-ed2001bd53b6" Dec 03 13:15:20 crc kubenswrapper[4986]: I1203 13:15:20.151682 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-jgxqz" podStartSLOduration=4.171654784 podStartE2EDuration="44.151664329s" podCreationTimestamp="2025-12-03 13:14:36 +0000 UTC" firstStartedPulling="2025-12-03 13:14:38.633336071 +0000 UTC m=+1138.099767262" lastFinishedPulling="2025-12-03 13:15:18.613345616 +0000 UTC m=+1178.079776807" observedRunningTime="2025-12-03 13:15:20.150136788 +0000 UTC m=+1179.616567979" watchObservedRunningTime="2025-12-03 13:15:20.151664329 +0000 UTC m=+1179.618095520" Dec 03 13:15:20 crc kubenswrapper[4986]: I1203 13:15:20.182048 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-h5sqr" podStartSLOduration=3.74875793 podStartE2EDuration="44.18203102s" podCreationTimestamp="2025-12-03 13:14:36 +0000 UTC" firstStartedPulling="2025-12-03 13:14:38.089397008 +0000 UTC m=+1137.555828199" lastFinishedPulling="2025-12-03 13:15:18.522670098 +0000 UTC m=+1177.989101289" observedRunningTime="2025-12-03 13:15:20.180554479 +0000 UTC m=+1179.646985670" watchObservedRunningTime="2025-12-03 13:15:20.18203102 +0000 UTC m=+1179.648462211" Dec 03 13:15:20 crc kubenswrapper[4986]: I1203 13:15:20.506228 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412795-l86x6" Dec 03 13:15:20 crc kubenswrapper[4986]: I1203 13:15:20.641685 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dd86e79f-37fe-4393-8bd8-28a14d6d5537-secret-volume\") pod \"dd86e79f-37fe-4393-8bd8-28a14d6d5537\" (UID: \"dd86e79f-37fe-4393-8bd8-28a14d6d5537\") " Dec 03 13:15:20 crc kubenswrapper[4986]: I1203 13:15:20.641942 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dd86e79f-37fe-4393-8bd8-28a14d6d5537-config-volume\") pod \"dd86e79f-37fe-4393-8bd8-28a14d6d5537\" (UID: \"dd86e79f-37fe-4393-8bd8-28a14d6d5537\") " Dec 03 13:15:20 crc kubenswrapper[4986]: I1203 13:15:20.641973 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrqjz\" (UniqueName: \"kubernetes.io/projected/dd86e79f-37fe-4393-8bd8-28a14d6d5537-kube-api-access-jrqjz\") pod \"dd86e79f-37fe-4393-8bd8-28a14d6d5537\" (UID: \"dd86e79f-37fe-4393-8bd8-28a14d6d5537\") " Dec 03 13:15:20 crc kubenswrapper[4986]: I1203 13:15:20.643095 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd86e79f-37fe-4393-8bd8-28a14d6d5537-config-volume" (OuterVolumeSpecName: "config-volume") pod "dd86e79f-37fe-4393-8bd8-28a14d6d5537" (UID: "dd86e79f-37fe-4393-8bd8-28a14d6d5537"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:15:20 crc kubenswrapper[4986]: I1203 13:15:20.654369 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd86e79f-37fe-4393-8bd8-28a14d6d5537-kube-api-access-jrqjz" (OuterVolumeSpecName: "kube-api-access-jrqjz") pod "dd86e79f-37fe-4393-8bd8-28a14d6d5537" (UID: "dd86e79f-37fe-4393-8bd8-28a14d6d5537"). 
InnerVolumeSpecName "kube-api-access-jrqjz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:15:20 crc kubenswrapper[4986]: I1203 13:15:20.654430 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd86e79f-37fe-4393-8bd8-28a14d6d5537-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "dd86e79f-37fe-4393-8bd8-28a14d6d5537" (UID: "dd86e79f-37fe-4393-8bd8-28a14d6d5537"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:15:20 crc kubenswrapper[4986]: I1203 13:15:20.743168 4986 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dd86e79f-37fe-4393-8bd8-28a14d6d5537-config-volume\") on node \"crc\" DevicePath \"\"" Dec 03 13:15:20 crc kubenswrapper[4986]: I1203 13:15:20.743202 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrqjz\" (UniqueName: \"kubernetes.io/projected/dd86e79f-37fe-4393-8bd8-28a14d6d5537-kube-api-access-jrqjz\") on node \"crc\" DevicePath \"\"" Dec 03 13:15:20 crc kubenswrapper[4986]: I1203 13:15:20.743216 4986 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dd86e79f-37fe-4393-8bd8-28a14d6d5537-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 03 13:15:20 crc kubenswrapper[4986]: I1203 13:15:20.845770 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-7t7xl" event={"ID":"d601bb24-2bd9-478a-96d1-ed2001bd53b6","Type":"ContainerStarted","Data":"7495fcca88bbe761828524ba741fc61aa1628a7745d773352686d4b9acf41e80"} Dec 03 13:15:20 crc kubenswrapper[4986]: E1203 13:15:20.847362 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670\\\"\"" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-7t7xl" podUID="d601bb24-2bd9-478a-96d1-ed2001bd53b6" Dec 03 13:15:20 crc kubenswrapper[4986]: I1203 13:15:20.847766 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-xcs9s" event={"ID":"be530205-b10b-4d4b-9fa3-4d9d0548054c","Type":"ContainerStarted","Data":"f529cc0da561acfd06263fd9e665a06f414bc47d6df96d8ecd693cdcb84f5db3"} Dec 03 13:15:20 crc kubenswrapper[4986]: E1203 13:15:20.849156 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-998648c74-xcs9s" podUID="be530205-b10b-4d4b-9fa3-4d9d0548054c" Dec 03 13:15:20 crc kubenswrapper[4986]: I1203 13:15:20.849586 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-j4bg2" event={"ID":"c6bc4b54-a3be-4a2c-813e-fa19eea0dbe8","Type":"ContainerStarted","Data":"09e2457185b2c30b27e932983e7cff1669b5b26f96ecc0ef5ab6794ede6eea6d"} Dec 03 13:15:20 crc kubenswrapper[4986]: I1203 13:15:20.849621 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-j4bg2" event={"ID":"c6bc4b54-a3be-4a2c-813e-fa19eea0dbe8","Type":"ContainerStarted","Data":"8869d3cff9e944fc6e9f81c5709260164291f86116ab0f7907964c502b9cd910"} Dec 03 13:15:20 crc kubenswrapper[4986]: I1203 13:15:20.849782 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-78f8948974-j4bg2" Dec 03 
13:15:20 crc kubenswrapper[4986]: I1203 13:15:20.851801 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412795-l86x6" event={"ID":"dd86e79f-37fe-4393-8bd8-28a14d6d5537","Type":"ContainerDied","Data":"4e0010e3aeb155ad21c6b4dad796016acc2022fbdb61d451028605b5000cc06c"} Dec 03 13:15:20 crc kubenswrapper[4986]: I1203 13:15:20.851832 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e0010e3aeb155ad21c6b4dad796016acc2022fbdb61d451028605b5000cc06c" Dec 03 13:15:20 crc kubenswrapper[4986]: I1203 13:15:20.851848 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412795-l86x6" Dec 03 13:15:20 crc kubenswrapper[4986]: I1203 13:15:20.864021 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-l48wq" event={"ID":"f3328b2b-d4e4-4b39-a949-bfd1463596f0","Type":"ContainerStarted","Data":"42f4b10a5d1e5b6ec78261f4a379f0b22912d9d27d31bdeacccef4ba4b24cfa6"} Dec 03 13:15:20 crc kubenswrapper[4986]: I1203 13:15:20.864779 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-l48wq" Dec 03 13:15:20 crc kubenswrapper[4986]: I1203 13:15:20.867337 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-5vc5p" event={"ID":"8af63121-8727-4c23-b872-554fe679fc2f","Type":"ContainerStarted","Data":"d421053e3ea68fdabe30b0103325f3698e71200c188221ad29f7910c5f4d4815"} Dec 03 13:15:20 crc kubenswrapper[4986]: I1203 13:15:20.868848 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-bkf5w" 
event={"ID":"3d88c3a6-1643-4fff-acbe-2327b9878103","Type":"ContainerStarted","Data":"d6c25bed5c94959ca74aa2107fd1a86a451b93d2b1ba489b61c25f612b056272"} Dec 03 13:15:20 crc kubenswrapper[4986]: I1203 13:15:20.869018 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-bkf5w" Dec 03 13:15:20 crc kubenswrapper[4986]: I1203 13:15:20.870646 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lvpbh" event={"ID":"42fe5051-78e1-45ab-9766-dbd119c4e060","Type":"ContainerStarted","Data":"993c2e0f4bacc3629e29338a7f3653146e46fde29d633e9ce1a6000d6e447f40"} Dec 03 13:15:20 crc kubenswrapper[4986]: I1203 13:15:20.870671 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lvpbh" event={"ID":"42fe5051-78e1-45ab-9766-dbd119c4e060","Type":"ContainerStarted","Data":"e30bc75b94b7fc949bdcfcd08d7769b2dea123963e909eeed2123d00e3a2b997"} Dec 03 13:15:20 crc kubenswrapper[4986]: I1203 13:15:20.870850 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lvpbh" Dec 03 13:15:20 crc kubenswrapper[4986]: I1203 13:15:20.872580 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fw9mm" event={"ID":"400b4a35-c3f1-409e-83fe-019ff145c65a","Type":"ContainerStarted","Data":"5da92cfadc2722abeeb9ab142484abbdc398a9b152db93fbf4956d884799c686"} Dec 03 13:15:20 crc kubenswrapper[4986]: E1203 13:15:20.875059 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:09a6d0613ee2d3c1c809fc36c22678458ac271e0da87c970aec0a5339f5423f7\\\"\"" 
pod="openstack-operators/infra-operator-controller-manager-57548d458d-cgjf2" podUID="b47ead63-1562-466f-887b-54c155983ebf" Dec 03 13:15:20 crc kubenswrapper[4986]: E1203 13:15:20.875834 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:14cfad6ea2e7f7ecc4cb2aafceb9c61514b3d04b66668832d1e4ac3b19f1ab81\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fw9mm" podUID="400b4a35-c3f1-409e-83fe-019ff145c65a" Dec 03 13:15:20 crc kubenswrapper[4986]: I1203 13:15:20.892996 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-l48wq" podStartSLOduration=4.3424147699999995 podStartE2EDuration="44.892980176s" podCreationTimestamp="2025-12-03 13:14:36 +0000 UTC" firstStartedPulling="2025-12-03 13:14:38.064561029 +0000 UTC m=+1137.530992220" lastFinishedPulling="2025-12-03 13:15:18.615126435 +0000 UTC m=+1178.081557626" observedRunningTime="2025-12-03 13:15:20.892948385 +0000 UTC m=+1180.359379586" watchObservedRunningTime="2025-12-03 13:15:20.892980176 +0000 UTC m=+1180.359411377" Dec 03 13:15:20 crc kubenswrapper[4986]: I1203 13:15:20.988369 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-bkf5w" podStartSLOduration=2.799375209 podStartE2EDuration="44.98130543s" podCreationTimestamp="2025-12-03 13:14:36 +0000 UTC" firstStartedPulling="2025-12-03 13:14:38.260870424 +0000 UTC m=+1137.727301615" lastFinishedPulling="2025-12-03 13:15:20.442800645 +0000 UTC m=+1179.909231836" observedRunningTime="2025-12-03 13:15:20.944651578 +0000 UTC m=+1180.411082769" watchObservedRunningTime="2025-12-03 13:15:20.98130543 +0000 UTC m=+1180.447736621" Dec 03 13:15:20 crc kubenswrapper[4986]: I1203 
13:15:20.991272 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lvpbh" podStartSLOduration=2.9525021320000002 podStartE2EDuration="44.991254151s" podCreationTimestamp="2025-12-03 13:14:36 +0000 UTC" firstStartedPulling="2025-12-03 13:14:38.407884991 +0000 UTC m=+1137.874316182" lastFinishedPulling="2025-12-03 13:15:20.44663701 +0000 UTC m=+1179.913068201" observedRunningTime="2025-12-03 13:15:20.990518182 +0000 UTC m=+1180.456949393" watchObservedRunningTime="2025-12-03 13:15:20.991254151 +0000 UTC m=+1180.457685332" Dec 03 13:15:21 crc kubenswrapper[4986]: I1203 13:15:21.062514 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-78f8948974-j4bg2" podStartSLOduration=4.708871014 podStartE2EDuration="45.062501138s" podCreationTimestamp="2025-12-03 13:14:36 +0000 UTC" firstStartedPulling="2025-12-03 13:14:38.623818401 +0000 UTC m=+1138.090249592" lastFinishedPulling="2025-12-03 13:15:18.977448535 +0000 UTC m=+1178.443879716" observedRunningTime="2025-12-03 13:15:21.058645963 +0000 UTC m=+1180.525077154" watchObservedRunningTime="2025-12-03 13:15:21.062501138 +0000 UTC m=+1180.528932319" Dec 03 13:15:21 crc kubenswrapper[4986]: I1203 13:15:21.882149 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-5vc5p" event={"ID":"8af63121-8727-4c23-b872-554fe679fc2f","Type":"ContainerStarted","Data":"aec934ae03cecf4c39a0a0dba46edc0c86997bfe0220f102d8a9740d126858b8"} Dec 03 13:15:21 crc kubenswrapper[4986]: I1203 13:15:21.882537 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-5vc5p" Dec 03 13:15:21 crc kubenswrapper[4986]: I1203 13:15:21.885688 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-jgxqz" Dec 03 13:15:21 crc kubenswrapper[4986]: I1203 13:15:21.904826 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-5vc5p" podStartSLOduration=3.462476199 podStartE2EDuration="45.904805785s" podCreationTimestamp="2025-12-03 13:14:36 +0000 UTC" firstStartedPulling="2025-12-03 13:14:38.064855798 +0000 UTC m=+1137.531286989" lastFinishedPulling="2025-12-03 13:15:20.507185384 +0000 UTC m=+1179.973616575" observedRunningTime="2025-12-03 13:15:21.903446647 +0000 UTC m=+1181.369877868" watchObservedRunningTime="2025-12-03 13:15:21.904805785 +0000 UTC m=+1181.371236976" Dec 03 13:15:26 crc kubenswrapper[4986]: I1203 13:15:26.927153 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-l48wq" Dec 03 13:15:26 crc kubenswrapper[4986]: I1203 13:15:26.936847 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-5vc5p" Dec 03 13:15:27 crc kubenswrapper[4986]: I1203 13:15:27.187667 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-bkf5w" Dec 03 13:15:27 crc kubenswrapper[4986]: I1203 13:15:27.382742 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lvpbh" Dec 03 13:15:27 crc kubenswrapper[4986]: I1203 13:15:27.421882 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-78f8948974-j4bg2" Dec 03 13:15:27 crc kubenswrapper[4986]: I1203 13:15:27.564858 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-726wj" Dec 03 13:15:27 crc kubenswrapper[4986]: I1203 13:15:27.600309 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5854674fcc-bg4vv" Dec 03 13:15:27 crc kubenswrapper[4986]: I1203 13:15:27.688004 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-c6wk7" Dec 03 13:15:27 crc kubenswrapper[4986]: I1203 13:15:27.748149 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-ph2qf" Dec 03 13:15:29 crc kubenswrapper[4986]: I1203 13:15:29.743786 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5cd9cc65cb-tjrw7" Dec 03 13:15:33 crc kubenswrapper[4986]: E1203 13:15:33.943848 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670\\\"\"" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-7t7xl" podUID="d601bb24-2bd9-478a-96d1-ed2001bd53b6" Dec 03 13:15:33 crc kubenswrapper[4986]: I1203 13:15:33.970568 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fw9mm" event={"ID":"400b4a35-c3f1-409e-83fe-019ff145c65a","Type":"ContainerStarted","Data":"f4f6e1714997b6ec864cc8a0c48fb310459e1c09bcf28b0fd21628bd4c79581a"} Dec 03 13:15:33 crc kubenswrapper[4986]: I1203 13:15:33.970830 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fw9mm" Dec 03 
13:15:33 crc kubenswrapper[4986]: I1203 13:15:33.995677 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fw9mm" podStartSLOduration=24.85528489 podStartE2EDuration="57.995652463s" podCreationTimestamp="2025-12-03 13:14:36 +0000 UTC" firstStartedPulling="2025-12-03 13:14:59.264523993 +0000 UTC m=+1158.730955184" lastFinishedPulling="2025-12-03 13:15:32.404891566 +0000 UTC m=+1191.871322757" observedRunningTime="2025-12-03 13:15:33.991017557 +0000 UTC m=+1193.457448758" watchObservedRunningTime="2025-12-03 13:15:33.995652463 +0000 UTC m=+1193.462083654" Dec 03 13:15:35 crc kubenswrapper[4986]: I1203 13:15:35.986018 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-cgjf2" event={"ID":"b47ead63-1562-466f-887b-54c155983ebf","Type":"ContainerStarted","Data":"6ac0db10aa81cc88d939bddda80dc1a5f41734cafb45246f5a6502b2a58d1bc0"} Dec 03 13:15:35 crc kubenswrapper[4986]: I1203 13:15:35.986630 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-57548d458d-cgjf2" Dec 03 13:15:36 crc kubenswrapper[4986]: I1203 13:15:36.011040 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-57548d458d-cgjf2" podStartSLOduration=20.72900419 podStartE2EDuration="1m0.011015282s" podCreationTimestamp="2025-12-03 13:14:36 +0000 UTC" firstStartedPulling="2025-12-03 13:14:55.694321448 +0000 UTC m=+1155.160752639" lastFinishedPulling="2025-12-03 13:15:34.97633253 +0000 UTC m=+1194.442763731" observedRunningTime="2025-12-03 13:15:36.001139512 +0000 UTC m=+1195.467570713" watchObservedRunningTime="2025-12-03 13:15:36.011015282 +0000 UTC m=+1195.477446503" Dec 03 13:15:38 crc kubenswrapper[4986]: I1203 13:15:38.003749 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/octavia-operator-controller-manager-998648c74-xcs9s" event={"ID":"be530205-b10b-4d4b-9fa3-4d9d0548054c","Type":"ContainerStarted","Data":"03e1ec3814313bccfca39ecd413f428bba7a7cdf5cf59b0c59622273a50bc201"}
Dec 03 13:15:38 crc kubenswrapper[4986]: I1203 13:15:38.004364 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-998648c74-xcs9s"
Dec 03 13:15:38 crc kubenswrapper[4986]: I1203 13:15:38.026952 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-998648c74-xcs9s" podStartSLOduration=4.175987116 podStartE2EDuration="1m2.026936728s" podCreationTimestamp="2025-12-03 13:14:36 +0000 UTC" firstStartedPulling="2025-12-03 13:14:38.633231439 +0000 UTC m=+1138.099662640" lastFinishedPulling="2025-12-03 13:15:36.484181061 +0000 UTC m=+1195.950612252" observedRunningTime="2025-12-03 13:15:38.021394866 +0000 UTC m=+1197.487826057" watchObservedRunningTime="2025-12-03 13:15:38.026936728 +0000 UTC m=+1197.493367919"
Dec 03 13:15:42 crc kubenswrapper[4986]: I1203 13:15:42.610906 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-57548d458d-cgjf2"
Dec 03 13:15:43 crc kubenswrapper[4986]: I1203 13:15:43.316136 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fw9mm"
Dec 03 13:15:47 crc kubenswrapper[4986]: I1203 13:15:47.357248 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-998648c74-xcs9s"
Dec 03 13:15:52 crc kubenswrapper[4986]: I1203 13:15:52.151449 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-7t7xl" event={"ID":"d601bb24-2bd9-478a-96d1-ed2001bd53b6","Type":"ContainerStarted","Data":"465c6ddea1958661f37cf70248613dc0ff7426536dae66d53bf8eb95c5a4a35e"}
Dec 03 13:15:52 crc kubenswrapper[4986]: I1203 13:15:52.152254 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-7t7xl"
Dec 03 13:15:52 crc kubenswrapper[4986]: I1203 13:15:52.179095 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-7t7xl" podStartSLOduration=2.923338675 podStartE2EDuration="1m16.179064431s" podCreationTimestamp="2025-12-03 13:14:36 +0000 UTC" firstStartedPulling="2025-12-03 13:14:38.627903623 +0000 UTC m=+1138.094334814" lastFinishedPulling="2025-12-03 13:15:51.883629369 +0000 UTC m=+1211.350060570" observedRunningTime="2025-12-03 13:15:52.178408363 +0000 UTC m=+1211.644839584" watchObservedRunningTime="2025-12-03 13:15:52.179064431 +0000 UTC m=+1211.645495662"
Dec 03 13:15:57 crc kubenswrapper[4986]: I1203 13:15:57.389914 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-7t7xl"
Dec 03 13:16:13 crc kubenswrapper[4986]: I1203 13:16:13.678065 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-xznmr"]
Dec 03 13:16:13 crc kubenswrapper[4986]: E1203 13:16:13.679500 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd86e79f-37fe-4393-8bd8-28a14d6d5537" containerName="collect-profiles"
Dec 03 13:16:13 crc kubenswrapper[4986]: I1203 13:16:13.679520 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd86e79f-37fe-4393-8bd8-28a14d6d5537" containerName="collect-profiles"
Dec 03 13:16:13 crc kubenswrapper[4986]: I1203 13:16:13.679707 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd86e79f-37fe-4393-8bd8-28a14d6d5537" containerName="collect-profiles"
Dec 03 13:16:13 crc kubenswrapper[4986]: I1203 13:16:13.680583 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-xznmr"
Dec 03 13:16:13 crc kubenswrapper[4986]: I1203 13:16:13.686863 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Dec 03 13:16:13 crc kubenswrapper[4986]: I1203 13:16:13.697269 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-bqn2j"
Dec 03 13:16:13 crc kubenswrapper[4986]: I1203 13:16:13.697497 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Dec 03 13:16:13 crc kubenswrapper[4986]: I1203 13:16:13.697497 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Dec 03 13:16:13 crc kubenswrapper[4986]: I1203 13:16:13.717426 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-xznmr"]
Dec 03 13:16:13 crc kubenswrapper[4986]: I1203 13:16:13.766244 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-sbp97"]
Dec 03 13:16:13 crc kubenswrapper[4986]: I1203 13:16:13.772055 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-sbp97"
Dec 03 13:16:13 crc kubenswrapper[4986]: I1203 13:16:13.774234 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Dec 03 13:16:13 crc kubenswrapper[4986]: I1203 13:16:13.788894 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-sbp97"]
Dec 03 13:16:13 crc kubenswrapper[4986]: I1203 13:16:13.832824 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dcfdd67-10b7-42e4-902e-4d3c65b287dd-config\") pod \"dnsmasq-dns-675f4bcbfc-xznmr\" (UID: \"6dcfdd67-10b7-42e4-902e-4d3c65b287dd\") " pod="openstack/dnsmasq-dns-675f4bcbfc-xznmr"
Dec 03 13:16:13 crc kubenswrapper[4986]: I1203 13:16:13.833001 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c27f4\" (UniqueName: \"kubernetes.io/projected/6dcfdd67-10b7-42e4-902e-4d3c65b287dd-kube-api-access-c27f4\") pod \"dnsmasq-dns-675f4bcbfc-xznmr\" (UID: \"6dcfdd67-10b7-42e4-902e-4d3c65b287dd\") " pod="openstack/dnsmasq-dns-675f4bcbfc-xznmr"
Dec 03 13:16:13 crc kubenswrapper[4986]: I1203 13:16:13.934367 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4bb76c2-97be-4087-aa72-70fd50a5731c-config\") pod \"dnsmasq-dns-78dd6ddcc-sbp97\" (UID: \"a4bb76c2-97be-4087-aa72-70fd50a5731c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-sbp97"
Dec 03 13:16:13 crc kubenswrapper[4986]: I1203 13:16:13.934425 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm4xd\" (UniqueName: \"kubernetes.io/projected/a4bb76c2-97be-4087-aa72-70fd50a5731c-kube-api-access-nm4xd\") pod \"dnsmasq-dns-78dd6ddcc-sbp97\" (UID: \"a4bb76c2-97be-4087-aa72-70fd50a5731c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-sbp97"
Dec 03 13:16:13 crc kubenswrapper[4986]: I1203 13:16:13.934449 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4bb76c2-97be-4087-aa72-70fd50a5731c-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-sbp97\" (UID: \"a4bb76c2-97be-4087-aa72-70fd50a5731c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-sbp97"
Dec 03 13:16:13 crc kubenswrapper[4986]: I1203 13:16:13.934606 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c27f4\" (UniqueName: \"kubernetes.io/projected/6dcfdd67-10b7-42e4-902e-4d3c65b287dd-kube-api-access-c27f4\") pod \"dnsmasq-dns-675f4bcbfc-xznmr\" (UID: \"6dcfdd67-10b7-42e4-902e-4d3c65b287dd\") " pod="openstack/dnsmasq-dns-675f4bcbfc-xznmr"
Dec 03 13:16:13 crc kubenswrapper[4986]: I1203 13:16:13.934808 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dcfdd67-10b7-42e4-902e-4d3c65b287dd-config\") pod \"dnsmasq-dns-675f4bcbfc-xznmr\" (UID: \"6dcfdd67-10b7-42e4-902e-4d3c65b287dd\") " pod="openstack/dnsmasq-dns-675f4bcbfc-xznmr"
Dec 03 13:16:13 crc kubenswrapper[4986]: I1203 13:16:13.935753 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dcfdd67-10b7-42e4-902e-4d3c65b287dd-config\") pod \"dnsmasq-dns-675f4bcbfc-xznmr\" (UID: \"6dcfdd67-10b7-42e4-902e-4d3c65b287dd\") " pod="openstack/dnsmasq-dns-675f4bcbfc-xznmr"
Dec 03 13:16:13 crc kubenswrapper[4986]: I1203 13:16:13.959538 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c27f4\" (UniqueName: \"kubernetes.io/projected/6dcfdd67-10b7-42e4-902e-4d3c65b287dd-kube-api-access-c27f4\") pod \"dnsmasq-dns-675f4bcbfc-xznmr\" (UID: \"6dcfdd67-10b7-42e4-902e-4d3c65b287dd\") " pod="openstack/dnsmasq-dns-675f4bcbfc-xznmr"
Dec 03 13:16:13 crc kubenswrapper[4986]: I1203 13:16:13.998297 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-xznmr"
Dec 03 13:16:14 crc kubenswrapper[4986]: I1203 13:16:14.035791 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nm4xd\" (UniqueName: \"kubernetes.io/projected/a4bb76c2-97be-4087-aa72-70fd50a5731c-kube-api-access-nm4xd\") pod \"dnsmasq-dns-78dd6ddcc-sbp97\" (UID: \"a4bb76c2-97be-4087-aa72-70fd50a5731c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-sbp97"
Dec 03 13:16:14 crc kubenswrapper[4986]: I1203 13:16:14.036104 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4bb76c2-97be-4087-aa72-70fd50a5731c-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-sbp97\" (UID: \"a4bb76c2-97be-4087-aa72-70fd50a5731c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-sbp97"
Dec 03 13:16:14 crc kubenswrapper[4986]: I1203 13:16:14.036264 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4bb76c2-97be-4087-aa72-70fd50a5731c-config\") pod \"dnsmasq-dns-78dd6ddcc-sbp97\" (UID: \"a4bb76c2-97be-4087-aa72-70fd50a5731c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-sbp97"
Dec 03 13:16:14 crc kubenswrapper[4986]: I1203 13:16:14.037736 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4bb76c2-97be-4087-aa72-70fd50a5731c-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-sbp97\" (UID: \"a4bb76c2-97be-4087-aa72-70fd50a5731c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-sbp97"
Dec 03 13:16:14 crc kubenswrapper[4986]: I1203 13:16:14.038173 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4bb76c2-97be-4087-aa72-70fd50a5731c-config\") pod \"dnsmasq-dns-78dd6ddcc-sbp97\" (UID: \"a4bb76c2-97be-4087-aa72-70fd50a5731c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-sbp97"
Dec 03 13:16:14 crc kubenswrapper[4986]: I1203 13:16:14.055656 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nm4xd\" (UniqueName: \"kubernetes.io/projected/a4bb76c2-97be-4087-aa72-70fd50a5731c-kube-api-access-nm4xd\") pod \"dnsmasq-dns-78dd6ddcc-sbp97\" (UID: \"a4bb76c2-97be-4087-aa72-70fd50a5731c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-sbp97"
Dec 03 13:16:14 crc kubenswrapper[4986]: I1203 13:16:14.092123 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-sbp97"
Dec 03 13:16:14 crc kubenswrapper[4986]: I1203 13:16:14.198296 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-xznmr"]
Dec 03 13:16:14 crc kubenswrapper[4986]: I1203 13:16:14.373697 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-xznmr" event={"ID":"6dcfdd67-10b7-42e4-902e-4d3c65b287dd","Type":"ContainerStarted","Data":"dfe562c10f53796d9e38dffcc69e302742755e9f8a4512a75b7af7e450ea0121"}
Dec 03 13:16:14 crc kubenswrapper[4986]: I1203 13:16:14.557589 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-sbp97"]
Dec 03 13:16:14 crc kubenswrapper[4986]: W1203 13:16:14.559575 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4bb76c2_97be_4087_aa72_70fd50a5731c.slice/crio-a70a29f0702f2699b77df6c2ae599029a61de9f2a796e706607a4e13f359bbda WatchSource:0}: Error finding container a70a29f0702f2699b77df6c2ae599029a61de9f2a796e706607a4e13f359bbda: Status 404 returned error can't find the container with id a70a29f0702f2699b77df6c2ae599029a61de9f2a796e706607a4e13f359bbda
Dec 03 13:16:15 crc kubenswrapper[4986]: I1203 13:16:15.383744 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-sbp97" event={"ID":"a4bb76c2-97be-4087-aa72-70fd50a5731c","Type":"ContainerStarted","Data":"a70a29f0702f2699b77df6c2ae599029a61de9f2a796e706607a4e13f359bbda"}
Dec 03 13:16:16 crc kubenswrapper[4986]: I1203 13:16:16.693953 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-xznmr"]
Dec 03 13:16:16 crc kubenswrapper[4986]: I1203 13:16:16.720515 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-7jms6"]
Dec 03 13:16:16 crc kubenswrapper[4986]: I1203 13:16:16.721989 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-7jms6"
Dec 03 13:16:16 crc kubenswrapper[4986]: I1203 13:16:16.735821 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-7jms6"]
Dec 03 13:16:16 crc kubenswrapper[4986]: I1203 13:16:16.774494 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fd53494-2eec-45b8-80f4-7b94abf29bfb-config\") pod \"dnsmasq-dns-666b6646f7-7jms6\" (UID: \"5fd53494-2eec-45b8-80f4-7b94abf29bfb\") " pod="openstack/dnsmasq-dns-666b6646f7-7jms6"
Dec 03 13:16:16 crc kubenswrapper[4986]: I1203 13:16:16.774646 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5fd53494-2eec-45b8-80f4-7b94abf29bfb-dns-svc\") pod \"dnsmasq-dns-666b6646f7-7jms6\" (UID: \"5fd53494-2eec-45b8-80f4-7b94abf29bfb\") " pod="openstack/dnsmasq-dns-666b6646f7-7jms6"
Dec 03 13:16:16 crc kubenswrapper[4986]: I1203 13:16:16.774671 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lcgt\" (UniqueName: \"kubernetes.io/projected/5fd53494-2eec-45b8-80f4-7b94abf29bfb-kube-api-access-6lcgt\") pod \"dnsmasq-dns-666b6646f7-7jms6\" (UID: \"5fd53494-2eec-45b8-80f4-7b94abf29bfb\") " pod="openstack/dnsmasq-dns-666b6646f7-7jms6"
Dec 03 13:16:16 crc kubenswrapper[4986]: I1203 13:16:16.875855 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fd53494-2eec-45b8-80f4-7b94abf29bfb-config\") pod \"dnsmasq-dns-666b6646f7-7jms6\" (UID: \"5fd53494-2eec-45b8-80f4-7b94abf29bfb\") " pod="openstack/dnsmasq-dns-666b6646f7-7jms6"
Dec 03 13:16:16 crc kubenswrapper[4986]: I1203 13:16:16.875992 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5fd53494-2eec-45b8-80f4-7b94abf29bfb-dns-svc\") pod \"dnsmasq-dns-666b6646f7-7jms6\" (UID: \"5fd53494-2eec-45b8-80f4-7b94abf29bfb\") " pod="openstack/dnsmasq-dns-666b6646f7-7jms6"
Dec 03 13:16:16 crc kubenswrapper[4986]: I1203 13:16:16.876011 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lcgt\" (UniqueName: \"kubernetes.io/projected/5fd53494-2eec-45b8-80f4-7b94abf29bfb-kube-api-access-6lcgt\") pod \"dnsmasq-dns-666b6646f7-7jms6\" (UID: \"5fd53494-2eec-45b8-80f4-7b94abf29bfb\") " pod="openstack/dnsmasq-dns-666b6646f7-7jms6"
Dec 03 13:16:16 crc kubenswrapper[4986]: I1203 13:16:16.876872 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5fd53494-2eec-45b8-80f4-7b94abf29bfb-dns-svc\") pod \"dnsmasq-dns-666b6646f7-7jms6\" (UID: \"5fd53494-2eec-45b8-80f4-7b94abf29bfb\") " pod="openstack/dnsmasq-dns-666b6646f7-7jms6"
Dec 03 13:16:16 crc kubenswrapper[4986]: I1203 13:16:16.877526 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fd53494-2eec-45b8-80f4-7b94abf29bfb-config\") pod \"dnsmasq-dns-666b6646f7-7jms6\" (UID: \"5fd53494-2eec-45b8-80f4-7b94abf29bfb\") " pod="openstack/dnsmasq-dns-666b6646f7-7jms6"
Dec 03 13:16:16 crc kubenswrapper[4986]: I1203 13:16:16.910714 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lcgt\" (UniqueName: \"kubernetes.io/projected/5fd53494-2eec-45b8-80f4-7b94abf29bfb-kube-api-access-6lcgt\") pod \"dnsmasq-dns-666b6646f7-7jms6\" (UID: \"5fd53494-2eec-45b8-80f4-7b94abf29bfb\") " pod="openstack/dnsmasq-dns-666b6646f7-7jms6"
Dec 03 13:16:16 crc kubenswrapper[4986]: I1203 13:16:16.996341 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-sbp97"]
Dec 03 13:16:17 crc kubenswrapper[4986]: I1203 13:16:17.024185 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-smttz"]
Dec 03 13:16:17 crc kubenswrapper[4986]: I1203 13:16:17.029107 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-smttz"
Dec 03 13:16:17 crc kubenswrapper[4986]: I1203 13:16:17.033628 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-smttz"]
Dec 03 13:16:17 crc kubenswrapper[4986]: I1203 13:16:17.050067 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-7jms6"
Dec 03 13:16:17 crc kubenswrapper[4986]: I1203 13:16:17.085144 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0a895ed-eb3a-49d5-95fe-fe470db0eca3-config\") pod \"dnsmasq-dns-57d769cc4f-smttz\" (UID: \"b0a895ed-eb3a-49d5-95fe-fe470db0eca3\") " pod="openstack/dnsmasq-dns-57d769cc4f-smttz"
Dec 03 13:16:17 crc kubenswrapper[4986]: I1203 13:16:17.085221 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b0a895ed-eb3a-49d5-95fe-fe470db0eca3-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-smttz\" (UID: \"b0a895ed-eb3a-49d5-95fe-fe470db0eca3\") " pod="openstack/dnsmasq-dns-57d769cc4f-smttz"
Dec 03 13:16:17 crc kubenswrapper[4986]: I1203 13:16:17.085335 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bkk5\" (UniqueName: \"kubernetes.io/projected/b0a895ed-eb3a-49d5-95fe-fe470db0eca3-kube-api-access-4bkk5\") pod \"dnsmasq-dns-57d769cc4f-smttz\" (UID: \"b0a895ed-eb3a-49d5-95fe-fe470db0eca3\") " pod="openstack/dnsmasq-dns-57d769cc4f-smttz"
Dec 03 13:16:17 crc kubenswrapper[4986]: I1203 13:16:17.188071 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bkk5\" (UniqueName: \"kubernetes.io/projected/b0a895ed-eb3a-49d5-95fe-fe470db0eca3-kube-api-access-4bkk5\") pod \"dnsmasq-dns-57d769cc4f-smttz\" (UID: \"b0a895ed-eb3a-49d5-95fe-fe470db0eca3\") " pod="openstack/dnsmasq-dns-57d769cc4f-smttz"
Dec 03 13:16:17 crc kubenswrapper[4986]: I1203 13:16:17.188159 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0a895ed-eb3a-49d5-95fe-fe470db0eca3-config\") pod \"dnsmasq-dns-57d769cc4f-smttz\" (UID: \"b0a895ed-eb3a-49d5-95fe-fe470db0eca3\") " pod="openstack/dnsmasq-dns-57d769cc4f-smttz"
Dec 03 13:16:17 crc kubenswrapper[4986]: I1203 13:16:17.188212 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b0a895ed-eb3a-49d5-95fe-fe470db0eca3-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-smttz\" (UID: \"b0a895ed-eb3a-49d5-95fe-fe470db0eca3\") " pod="openstack/dnsmasq-dns-57d769cc4f-smttz"
Dec 03 13:16:17 crc kubenswrapper[4986]: I1203 13:16:17.189102 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0a895ed-eb3a-49d5-95fe-fe470db0eca3-config\") pod \"dnsmasq-dns-57d769cc4f-smttz\" (UID: \"b0a895ed-eb3a-49d5-95fe-fe470db0eca3\") " pod="openstack/dnsmasq-dns-57d769cc4f-smttz"
Dec 03 13:16:17 crc kubenswrapper[4986]: I1203 13:16:17.189348 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b0a895ed-eb3a-49d5-95fe-fe470db0eca3-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-smttz\" (UID: \"b0a895ed-eb3a-49d5-95fe-fe470db0eca3\") " pod="openstack/dnsmasq-dns-57d769cc4f-smttz"
Dec 03 13:16:17 crc kubenswrapper[4986]: I1203 13:16:17.208406 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bkk5\" (UniqueName: \"kubernetes.io/projected/b0a895ed-eb3a-49d5-95fe-fe470db0eca3-kube-api-access-4bkk5\") pod \"dnsmasq-dns-57d769cc4f-smttz\" (UID: \"b0a895ed-eb3a-49d5-95fe-fe470db0eca3\") " pod="openstack/dnsmasq-dns-57d769cc4f-smttz"
Dec 03 13:16:17 crc kubenswrapper[4986]: I1203 13:16:17.358034 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-smttz"
Dec 03 13:16:17 crc kubenswrapper[4986]: I1203 13:16:17.873539 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 03 13:16:17 crc kubenswrapper[4986]: I1203 13:16:17.874796 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Dec 03 13:16:17 crc kubenswrapper[4986]: I1203 13:16:17.876381 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Dec 03 13:16:17 crc kubenswrapper[4986]: I1203 13:16:17.876353 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Dec 03 13:16:17 crc kubenswrapper[4986]: I1203 13:16:17.880043 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-xhl6q"
Dec 03 13:16:17 crc kubenswrapper[4986]: I1203 13:16:17.880533 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Dec 03 13:16:17 crc kubenswrapper[4986]: I1203 13:16:17.880858 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Dec 03 13:16:17 crc kubenswrapper[4986]: I1203 13:16:17.881015 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Dec 03 13:16:17 crc kubenswrapper[4986]: I1203 13:16:17.881100 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Dec 03 13:16:17 crc kubenswrapper[4986]: I1203 13:16:17.888190 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 03 13:16:17 crc kubenswrapper[4986]: I1203 13:16:17.997699 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/285bb825-5a17-4d45-87d6-852513d0351b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"285bb825-5a17-4d45-87d6-852513d0351b\") " pod="openstack/rabbitmq-server-0"
Dec 03 13:16:17 crc kubenswrapper[4986]: I1203 13:16:17.997747 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/285bb825-5a17-4d45-87d6-852513d0351b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"285bb825-5a17-4d45-87d6-852513d0351b\") " pod="openstack/rabbitmq-server-0"
Dec 03 13:16:17 crc kubenswrapper[4986]: I1203 13:16:17.997777 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"285bb825-5a17-4d45-87d6-852513d0351b\") " pod="openstack/rabbitmq-server-0"
Dec 03 13:16:17 crc kubenswrapper[4986]: I1203 13:16:17.997860 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/285bb825-5a17-4d45-87d6-852513d0351b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"285bb825-5a17-4d45-87d6-852513d0351b\") " pod="openstack/rabbitmq-server-0"
Dec 03 13:16:17 crc kubenswrapper[4986]: I1203 13:16:17.998039 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/285bb825-5a17-4d45-87d6-852513d0351b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"285bb825-5a17-4d45-87d6-852513d0351b\") " pod="openstack/rabbitmq-server-0"
Dec 03 13:16:17 crc kubenswrapper[4986]: I1203 13:16:17.998167 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/285bb825-5a17-4d45-87d6-852513d0351b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"285bb825-5a17-4d45-87d6-852513d0351b\") " pod="openstack/rabbitmq-server-0"
Dec 03 13:16:17 crc kubenswrapper[4986]: I1203 13:16:17.998265 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/285bb825-5a17-4d45-87d6-852513d0351b-config-data\") pod \"rabbitmq-server-0\" (UID: \"285bb825-5a17-4d45-87d6-852513d0351b\") " pod="openstack/rabbitmq-server-0"
Dec 03 13:16:17 crc kubenswrapper[4986]: I1203 13:16:17.998329 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/285bb825-5a17-4d45-87d6-852513d0351b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"285bb825-5a17-4d45-87d6-852513d0351b\") " pod="openstack/rabbitmq-server-0"
Dec 03 13:16:17 crc kubenswrapper[4986]: I1203 13:16:17.998408 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d2cx\" (UniqueName: \"kubernetes.io/projected/285bb825-5a17-4d45-87d6-852513d0351b-kube-api-access-6d2cx\") pod \"rabbitmq-server-0\" (UID: \"285bb825-5a17-4d45-87d6-852513d0351b\") " pod="openstack/rabbitmq-server-0"
Dec 03 13:16:17 crc kubenswrapper[4986]: I1203 13:16:17.998484 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/285bb825-5a17-4d45-87d6-852513d0351b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"285bb825-5a17-4d45-87d6-852513d0351b\") " pod="openstack/rabbitmq-server-0"
Dec 03 13:16:17 crc kubenswrapper[4986]: I1203 13:16:17.998516 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/285bb825-5a17-4d45-87d6-852513d0351b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"285bb825-5a17-4d45-87d6-852513d0351b\") " pod="openstack/rabbitmq-server-0"
Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.100044 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"285bb825-5a17-4d45-87d6-852513d0351b\") " pod="openstack/rabbitmq-server-0"
Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.100099 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/285bb825-5a17-4d45-87d6-852513d0351b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"285bb825-5a17-4d45-87d6-852513d0351b\") " pod="openstack/rabbitmq-server-0"
Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.100136 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/285bb825-5a17-4d45-87d6-852513d0351b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"285bb825-5a17-4d45-87d6-852513d0351b\") " pod="openstack/rabbitmq-server-0"
Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.100162 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/285bb825-5a17-4d45-87d6-852513d0351b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"285bb825-5a17-4d45-87d6-852513d0351b\") " pod="openstack/rabbitmq-server-0"
Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.100196 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/285bb825-5a17-4d45-87d6-852513d0351b-config-data\") pod \"rabbitmq-server-0\" (UID: \"285bb825-5a17-4d45-87d6-852513d0351b\") " pod="openstack/rabbitmq-server-0"
Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.100213 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/285bb825-5a17-4d45-87d6-852513d0351b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"285bb825-5a17-4d45-87d6-852513d0351b\") " pod="openstack/rabbitmq-server-0"
Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.100237 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6d2cx\" (UniqueName: \"kubernetes.io/projected/285bb825-5a17-4d45-87d6-852513d0351b-kube-api-access-6d2cx\") pod \"rabbitmq-server-0\" (UID: \"285bb825-5a17-4d45-87d6-852513d0351b\") " pod="openstack/rabbitmq-server-0"
Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.100264 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/285bb825-5a17-4d45-87d6-852513d0351b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"285bb825-5a17-4d45-87d6-852513d0351b\") " pod="openstack/rabbitmq-server-0"
Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.100474 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/285bb825-5a17-4d45-87d6-852513d0351b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"285bb825-5a17-4d45-87d6-852513d0351b\") " pod="openstack/rabbitmq-server-0"
Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.100497 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/285bb825-5a17-4d45-87d6-852513d0351b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"285bb825-5a17-4d45-87d6-852513d0351b\") " pod="openstack/rabbitmq-server-0"
Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.100519 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/285bb825-5a17-4d45-87d6-852513d0351b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"285bb825-5a17-4d45-87d6-852513d0351b\") " pod="openstack/rabbitmq-server-0"
Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.100702 4986 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"285bb825-5a17-4d45-87d6-852513d0351b\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0"
Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.101351 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/285bb825-5a17-4d45-87d6-852513d0351b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"285bb825-5a17-4d45-87d6-852513d0351b\") " pod="openstack/rabbitmq-server-0"
Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.101428 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/285bb825-5a17-4d45-87d6-852513d0351b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"285bb825-5a17-4d45-87d6-852513d0351b\") " pod="openstack/rabbitmq-server-0"
Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.101827 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/285bb825-5a17-4d45-87d6-852513d0351b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"285bb825-5a17-4d45-87d6-852513d0351b\") " pod="openstack/rabbitmq-server-0"
Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.101888 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/285bb825-5a17-4d45-87d6-852513d0351b-config-data\") pod \"rabbitmq-server-0\" (UID: \"285bb825-5a17-4d45-87d6-852513d0351b\") " pod="openstack/rabbitmq-server-0"
Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.103025 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/285bb825-5a17-4d45-87d6-852513d0351b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"285bb825-5a17-4d45-87d6-852513d0351b\") " pod="openstack/rabbitmq-server-0"
Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.105065 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/285bb825-5a17-4d45-87d6-852513d0351b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"285bb825-5a17-4d45-87d6-852513d0351b\") " pod="openstack/rabbitmq-server-0"
Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.105790 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/285bb825-5a17-4d45-87d6-852513d0351b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"285bb825-5a17-4d45-87d6-852513d0351b\") " pod="openstack/rabbitmq-server-0"
Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.109926 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/285bb825-5a17-4d45-87d6-852513d0351b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"285bb825-5a17-4d45-87d6-852513d0351b\") " pod="openstack/rabbitmq-server-0"
Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.110579 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/285bb825-5a17-4d45-87d6-852513d0351b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"285bb825-5a17-4d45-87d6-852513d0351b\") " pod="openstack/rabbitmq-server-0"
Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.116115 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6d2cx\" (UniqueName: \"kubernetes.io/projected/285bb825-5a17-4d45-87d6-852513d0351b-kube-api-access-6d2cx\") pod \"rabbitmq-server-0\" (UID: \"285bb825-5a17-4d45-87d6-852513d0351b\") " pod="openstack/rabbitmq-server-0"
Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.128957 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.130070 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.132749 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.133012 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-qvv8d"
Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.133188 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.133344 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.133557 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.133707 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.133840 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.134715 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"285bb825-5a17-4d45-87d6-852513d0351b\") " pod="openstack/rabbitmq-server-0"
Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.153263 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.205062 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.303082 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5e8a62bd-1e92-464d-b905-8eb18cc44646-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"5e8a62bd-1e92-464d-b905-8eb18cc44646\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.303125 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5e8a62bd-1e92-464d-b905-8eb18cc44646-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5e8a62bd-1e92-464d-b905-8eb18cc44646\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.303153 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"5e8a62bd-1e92-464d-b905-8eb18cc44646\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.303256 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5e8a62bd-1e92-464d-b905-8eb18cc44646-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"5e8a62bd-1e92-464d-b905-8eb18cc44646\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 
03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.303344 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5e8a62bd-1e92-464d-b905-8eb18cc44646-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"5e8a62bd-1e92-464d-b905-8eb18cc44646\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.303430 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5e8a62bd-1e92-464d-b905-8eb18cc44646-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"5e8a62bd-1e92-464d-b905-8eb18cc44646\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.303505 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5e8a62bd-1e92-464d-b905-8eb18cc44646-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5e8a62bd-1e92-464d-b905-8eb18cc44646\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.303527 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5e8a62bd-1e92-464d-b905-8eb18cc44646-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"5e8a62bd-1e92-464d-b905-8eb18cc44646\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.303556 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5e8a62bd-1e92-464d-b905-8eb18cc44646-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"5e8a62bd-1e92-464d-b905-8eb18cc44646\") " pod="openstack/rabbitmq-cell1-server-0" 
Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.303590 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdgs8\" (UniqueName: \"kubernetes.io/projected/5e8a62bd-1e92-464d-b905-8eb18cc44646-kube-api-access-pdgs8\") pod \"rabbitmq-cell1-server-0\" (UID: \"5e8a62bd-1e92-464d-b905-8eb18cc44646\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.303648 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5e8a62bd-1e92-464d-b905-8eb18cc44646-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"5e8a62bd-1e92-464d-b905-8eb18cc44646\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.404382 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5e8a62bd-1e92-464d-b905-8eb18cc44646-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5e8a62bd-1e92-464d-b905-8eb18cc44646\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.404430 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"5e8a62bd-1e92-464d-b905-8eb18cc44646\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.404455 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5e8a62bd-1e92-464d-b905-8eb18cc44646-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"5e8a62bd-1e92-464d-b905-8eb18cc44646\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.404475 4986 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5e8a62bd-1e92-464d-b905-8eb18cc44646-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"5e8a62bd-1e92-464d-b905-8eb18cc44646\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.404503 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5e8a62bd-1e92-464d-b905-8eb18cc44646-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"5e8a62bd-1e92-464d-b905-8eb18cc44646\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.404527 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5e8a62bd-1e92-464d-b905-8eb18cc44646-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5e8a62bd-1e92-464d-b905-8eb18cc44646\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.404543 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5e8a62bd-1e92-464d-b905-8eb18cc44646-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"5e8a62bd-1e92-464d-b905-8eb18cc44646\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.404575 4986 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"5e8a62bd-1e92-464d-b905-8eb18cc44646\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-cell1-server-0" Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.404827 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5e8a62bd-1e92-464d-b905-8eb18cc44646-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"5e8a62bd-1e92-464d-b905-8eb18cc44646\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.405412 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5e8a62bd-1e92-464d-b905-8eb18cc44646-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"5e8a62bd-1e92-464d-b905-8eb18cc44646\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.405465 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5e8a62bd-1e92-464d-b905-8eb18cc44646-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5e8a62bd-1e92-464d-b905-8eb18cc44646\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.405499 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5e8a62bd-1e92-464d-b905-8eb18cc44646-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"5e8a62bd-1e92-464d-b905-8eb18cc44646\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.405510 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5e8a62bd-1e92-464d-b905-8eb18cc44646-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"5e8a62bd-1e92-464d-b905-8eb18cc44646\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.405535 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdgs8\" (UniqueName: 
\"kubernetes.io/projected/5e8a62bd-1e92-464d-b905-8eb18cc44646-kube-api-access-pdgs8\") pod \"rabbitmq-cell1-server-0\" (UID: \"5e8a62bd-1e92-464d-b905-8eb18cc44646\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.405633 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5e8a62bd-1e92-464d-b905-8eb18cc44646-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"5e8a62bd-1e92-464d-b905-8eb18cc44646\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.405747 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5e8a62bd-1e92-464d-b905-8eb18cc44646-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"5e8a62bd-1e92-464d-b905-8eb18cc44646\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.407970 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5e8a62bd-1e92-464d-b905-8eb18cc44646-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5e8a62bd-1e92-464d-b905-8eb18cc44646\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.409030 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5e8a62bd-1e92-464d-b905-8eb18cc44646-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"5e8a62bd-1e92-464d-b905-8eb18cc44646\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.410216 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5e8a62bd-1e92-464d-b905-8eb18cc44646-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"5e8a62bd-1e92-464d-b905-8eb18cc44646\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.414781 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5e8a62bd-1e92-464d-b905-8eb18cc44646-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"5e8a62bd-1e92-464d-b905-8eb18cc44646\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.420822 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5e8a62bd-1e92-464d-b905-8eb18cc44646-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"5e8a62bd-1e92-464d-b905-8eb18cc44646\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.432954 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdgs8\" (UniqueName: \"kubernetes.io/projected/5e8a62bd-1e92-464d-b905-8eb18cc44646-kube-api-access-pdgs8\") pod \"rabbitmq-cell1-server-0\" (UID: \"5e8a62bd-1e92-464d-b905-8eb18cc44646\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.441858 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"5e8a62bd-1e92-464d-b905-8eb18cc44646\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 13:16:18 crc kubenswrapper[4986]: I1203 13:16:18.498395 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 03 13:16:19 crc kubenswrapper[4986]: I1203 13:16:19.670062 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Dec 03 13:16:19 crc kubenswrapper[4986]: I1203 13:16:19.671677 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Dec 03 13:16:19 crc kubenswrapper[4986]: I1203 13:16:19.681817 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Dec 03 13:16:19 crc kubenswrapper[4986]: I1203 13:16:19.684254 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Dec 03 13:16:19 crc kubenswrapper[4986]: I1203 13:16:19.689640 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-qltt6" Dec 03 13:16:19 crc kubenswrapper[4986]: I1203 13:16:19.690462 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Dec 03 13:16:19 crc kubenswrapper[4986]: I1203 13:16:19.712335 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Dec 03 13:16:19 crc kubenswrapper[4986]: I1203 13:16:19.727195 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 03 13:16:19 crc kubenswrapper[4986]: I1203 13:16:19.826496 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/50659f56-763b-4cac-9ab4-d660c7d777af-config-data-default\") pod \"openstack-galera-0\" (UID: \"50659f56-763b-4cac-9ab4-d660c7d777af\") " pod="openstack/openstack-galera-0" Dec 03 13:16:19 crc kubenswrapper[4986]: I1203 13:16:19.826638 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/50659f56-763b-4cac-9ab4-d660c7d777af-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"50659f56-763b-4cac-9ab4-d660c7d777af\") " pod="openstack/openstack-galera-0" Dec 03 13:16:19 crc kubenswrapper[4986]: I1203 13:16:19.826686 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"50659f56-763b-4cac-9ab4-d660c7d777af\") " pod="openstack/openstack-galera-0" Dec 03 13:16:19 crc kubenswrapper[4986]: I1203 13:16:19.826714 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksk2v\" (UniqueName: \"kubernetes.io/projected/50659f56-763b-4cac-9ab4-d660c7d777af-kube-api-access-ksk2v\") pod \"openstack-galera-0\" (UID: \"50659f56-763b-4cac-9ab4-d660c7d777af\") " pod="openstack/openstack-galera-0" Dec 03 13:16:19 crc kubenswrapper[4986]: I1203 13:16:19.826738 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/50659f56-763b-4cac-9ab4-d660c7d777af-kolla-config\") pod \"openstack-galera-0\" (UID: \"50659f56-763b-4cac-9ab4-d660c7d777af\") " pod="openstack/openstack-galera-0" Dec 03 13:16:19 crc kubenswrapper[4986]: I1203 13:16:19.826779 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/50659f56-763b-4cac-9ab4-d660c7d777af-config-data-generated\") pod \"openstack-galera-0\" (UID: \"50659f56-763b-4cac-9ab4-d660c7d777af\") " pod="openstack/openstack-galera-0" Dec 03 13:16:19 crc kubenswrapper[4986]: I1203 13:16:19.826806 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50659f56-763b-4cac-9ab4-d660c7d777af-operator-scripts\") pod \"openstack-galera-0\" (UID: \"50659f56-763b-4cac-9ab4-d660c7d777af\") " pod="openstack/openstack-galera-0" Dec 03 13:16:19 crc kubenswrapper[4986]: I1203 13:16:19.826829 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/50659f56-763b-4cac-9ab4-d660c7d777af-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"50659f56-763b-4cac-9ab4-d660c7d777af\") " pod="openstack/openstack-galera-0" Dec 03 13:16:19 crc kubenswrapper[4986]: I1203 13:16:19.928062 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/50659f56-763b-4cac-9ab4-d660c7d777af-config-data-default\") pod \"openstack-galera-0\" (UID: \"50659f56-763b-4cac-9ab4-d660c7d777af\") " pod="openstack/openstack-galera-0" Dec 03 13:16:19 crc kubenswrapper[4986]: I1203 13:16:19.928176 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/50659f56-763b-4cac-9ab4-d660c7d777af-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"50659f56-763b-4cac-9ab4-d660c7d777af\") " pod="openstack/openstack-galera-0" Dec 03 13:16:19 crc kubenswrapper[4986]: I1203 13:16:19.928232 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"50659f56-763b-4cac-9ab4-d660c7d777af\") " pod="openstack/openstack-galera-0" Dec 03 13:16:19 crc kubenswrapper[4986]: I1203 13:16:19.928301 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksk2v\" (UniqueName: \"kubernetes.io/projected/50659f56-763b-4cac-9ab4-d660c7d777af-kube-api-access-ksk2v\") pod \"openstack-galera-0\" (UID: \"50659f56-763b-4cac-9ab4-d660c7d777af\") " pod="openstack/openstack-galera-0" Dec 03 13:16:19 crc kubenswrapper[4986]: I1203 13:16:19.928337 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/50659f56-763b-4cac-9ab4-d660c7d777af-kolla-config\") pod \"openstack-galera-0\" (UID: \"50659f56-763b-4cac-9ab4-d660c7d777af\") 
" pod="openstack/openstack-galera-0" Dec 03 13:16:19 crc kubenswrapper[4986]: I1203 13:16:19.928398 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/50659f56-763b-4cac-9ab4-d660c7d777af-config-data-generated\") pod \"openstack-galera-0\" (UID: \"50659f56-763b-4cac-9ab4-d660c7d777af\") " pod="openstack/openstack-galera-0" Dec 03 13:16:19 crc kubenswrapper[4986]: I1203 13:16:19.928438 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50659f56-763b-4cac-9ab4-d660c7d777af-operator-scripts\") pod \"openstack-galera-0\" (UID: \"50659f56-763b-4cac-9ab4-d660c7d777af\") " pod="openstack/openstack-galera-0" Dec 03 13:16:19 crc kubenswrapper[4986]: I1203 13:16:19.928475 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50659f56-763b-4cac-9ab4-d660c7d777af-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"50659f56-763b-4cac-9ab4-d660c7d777af\") " pod="openstack/openstack-galera-0" Dec 03 13:16:19 crc kubenswrapper[4986]: I1203 13:16:19.928758 4986 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"50659f56-763b-4cac-9ab4-d660c7d777af\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-galera-0" Dec 03 13:16:19 crc kubenswrapper[4986]: I1203 13:16:19.932865 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/50659f56-763b-4cac-9ab4-d660c7d777af-config-data-default\") pod \"openstack-galera-0\" (UID: \"50659f56-763b-4cac-9ab4-d660c7d777af\") " pod="openstack/openstack-galera-0" Dec 03 13:16:19 crc kubenswrapper[4986]: I1203 13:16:19.933721 4986 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/50659f56-763b-4cac-9ab4-d660c7d777af-config-data-generated\") pod \"openstack-galera-0\" (UID: \"50659f56-763b-4cac-9ab4-d660c7d777af\") " pod="openstack/openstack-galera-0" Dec 03 13:16:19 crc kubenswrapper[4986]: I1203 13:16:19.933936 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/50659f56-763b-4cac-9ab4-d660c7d777af-kolla-config\") pod \"openstack-galera-0\" (UID: \"50659f56-763b-4cac-9ab4-d660c7d777af\") " pod="openstack/openstack-galera-0" Dec 03 13:16:19 crc kubenswrapper[4986]: I1203 13:16:19.949779 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50659f56-763b-4cac-9ab4-d660c7d777af-operator-scripts\") pod \"openstack-galera-0\" (UID: \"50659f56-763b-4cac-9ab4-d660c7d777af\") " pod="openstack/openstack-galera-0" Dec 03 13:16:19 crc kubenswrapper[4986]: I1203 13:16:19.957535 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50659f56-763b-4cac-9ab4-d660c7d777af-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"50659f56-763b-4cac-9ab4-d660c7d777af\") " pod="openstack/openstack-galera-0" Dec 03 13:16:19 crc kubenswrapper[4986]: I1203 13:16:19.979326 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/50659f56-763b-4cac-9ab4-d660c7d777af-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"50659f56-763b-4cac-9ab4-d660c7d777af\") " pod="openstack/openstack-galera-0" Dec 03 13:16:19 crc kubenswrapper[4986]: I1203 13:16:19.983584 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksk2v\" (UniqueName: 
\"kubernetes.io/projected/50659f56-763b-4cac-9ab4-d660c7d777af-kube-api-access-ksk2v\") pod \"openstack-galera-0\" (UID: \"50659f56-763b-4cac-9ab4-d660c7d777af\") " pod="openstack/openstack-galera-0" Dec 03 13:16:20 crc kubenswrapper[4986]: I1203 13:16:20.019512 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"50659f56-763b-4cac-9ab4-d660c7d777af\") " pod="openstack/openstack-galera-0" Dec 03 13:16:20 crc kubenswrapper[4986]: I1203 13:16:20.312960 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Dec 03 13:16:21 crc kubenswrapper[4986]: I1203 13:16:21.025456 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 03 13:16:21 crc kubenswrapper[4986]: I1203 13:16:21.027108 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 03 13:16:21 crc kubenswrapper[4986]: I1203 13:16:21.029833 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-m2nqb" Dec 03 13:16:21 crc kubenswrapper[4986]: I1203 13:16:21.030266 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Dec 03 13:16:21 crc kubenswrapper[4986]: I1203 13:16:21.030673 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Dec 03 13:16:21 crc kubenswrapper[4986]: I1203 13:16:21.030924 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Dec 03 13:16:21 crc kubenswrapper[4986]: I1203 13:16:21.032937 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 03 13:16:21 crc kubenswrapper[4986]: I1203 13:16:21.149402 4986 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0496538-1ab2-45a2-94ab-fc3474533ec3-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a0496538-1ab2-45a2-94ab-fc3474533ec3\") " pod="openstack/openstack-cell1-galera-0" Dec 03 13:16:21 crc kubenswrapper[4986]: I1203 13:16:21.149469 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0496538-1ab2-45a2-94ab-fc3474533ec3-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a0496538-1ab2-45a2-94ab-fc3474533ec3\") " pod="openstack/openstack-cell1-galera-0" Dec 03 13:16:21 crc kubenswrapper[4986]: I1203 13:16:21.149499 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"a0496538-1ab2-45a2-94ab-fc3474533ec3\") " pod="openstack/openstack-cell1-galera-0" Dec 03 13:16:21 crc kubenswrapper[4986]: I1203 13:16:21.149528 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a0496538-1ab2-45a2-94ab-fc3474533ec3-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a0496538-1ab2-45a2-94ab-fc3474533ec3\") " pod="openstack/openstack-cell1-galera-0" Dec 03 13:16:21 crc kubenswrapper[4986]: I1203 13:16:21.149556 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a0496538-1ab2-45a2-94ab-fc3474533ec3-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a0496538-1ab2-45a2-94ab-fc3474533ec3\") " pod="openstack/openstack-cell1-galera-0" Dec 03 13:16:21 crc kubenswrapper[4986]: I1203 13:16:21.149584 
4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2skp\" (UniqueName: \"kubernetes.io/projected/a0496538-1ab2-45a2-94ab-fc3474533ec3-kube-api-access-x2skp\") pod \"openstack-cell1-galera-0\" (UID: \"a0496538-1ab2-45a2-94ab-fc3474533ec3\") " pod="openstack/openstack-cell1-galera-0" Dec 03 13:16:21 crc kubenswrapper[4986]: I1203 13:16:21.149607 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0496538-1ab2-45a2-94ab-fc3474533ec3-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a0496538-1ab2-45a2-94ab-fc3474533ec3\") " pod="openstack/openstack-cell1-galera-0" Dec 03 13:16:21 crc kubenswrapper[4986]: I1203 13:16:21.149655 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a0496538-1ab2-45a2-94ab-fc3474533ec3-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a0496538-1ab2-45a2-94ab-fc3474533ec3\") " pod="openstack/openstack-cell1-galera-0" Dec 03 13:16:21 crc kubenswrapper[4986]: I1203 13:16:21.251428 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a0496538-1ab2-45a2-94ab-fc3474533ec3-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a0496538-1ab2-45a2-94ab-fc3474533ec3\") " pod="openstack/openstack-cell1-galera-0" Dec 03 13:16:21 crc kubenswrapper[4986]: I1203 13:16:21.251502 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0496538-1ab2-45a2-94ab-fc3474533ec3-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a0496538-1ab2-45a2-94ab-fc3474533ec3\") " pod="openstack/openstack-cell1-galera-0" Dec 03 13:16:21 crc kubenswrapper[4986]: I1203 13:16:21.251563 4986 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0496538-1ab2-45a2-94ab-fc3474533ec3-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a0496538-1ab2-45a2-94ab-fc3474533ec3\") " pod="openstack/openstack-cell1-galera-0" Dec 03 13:16:21 crc kubenswrapper[4986]: I1203 13:16:21.251587 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"a0496538-1ab2-45a2-94ab-fc3474533ec3\") " pod="openstack/openstack-cell1-galera-0" Dec 03 13:16:21 crc kubenswrapper[4986]: I1203 13:16:21.251629 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a0496538-1ab2-45a2-94ab-fc3474533ec3-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a0496538-1ab2-45a2-94ab-fc3474533ec3\") " pod="openstack/openstack-cell1-galera-0" Dec 03 13:16:21 crc kubenswrapper[4986]: I1203 13:16:21.251667 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a0496538-1ab2-45a2-94ab-fc3474533ec3-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a0496538-1ab2-45a2-94ab-fc3474533ec3\") " pod="openstack/openstack-cell1-galera-0" Dec 03 13:16:21 crc kubenswrapper[4986]: I1203 13:16:21.251707 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2skp\" (UniqueName: \"kubernetes.io/projected/a0496538-1ab2-45a2-94ab-fc3474533ec3-kube-api-access-x2skp\") pod \"openstack-cell1-galera-0\" (UID: \"a0496538-1ab2-45a2-94ab-fc3474533ec3\") " pod="openstack/openstack-cell1-galera-0" Dec 03 13:16:21 crc kubenswrapper[4986]: I1203 13:16:21.251740 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0496538-1ab2-45a2-94ab-fc3474533ec3-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a0496538-1ab2-45a2-94ab-fc3474533ec3\") " pod="openstack/openstack-cell1-galera-0" Dec 03 13:16:21 crc kubenswrapper[4986]: I1203 13:16:21.252351 4986 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"a0496538-1ab2-45a2-94ab-fc3474533ec3\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/openstack-cell1-galera-0" Dec 03 13:16:21 crc kubenswrapper[4986]: I1203 13:16:21.253201 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a0496538-1ab2-45a2-94ab-fc3474533ec3-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a0496538-1ab2-45a2-94ab-fc3474533ec3\") " pod="openstack/openstack-cell1-galera-0" Dec 03 13:16:21 crc kubenswrapper[4986]: I1203 13:16:21.255239 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a0496538-1ab2-45a2-94ab-fc3474533ec3-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a0496538-1ab2-45a2-94ab-fc3474533ec3\") " pod="openstack/openstack-cell1-galera-0" Dec 03 13:16:21 crc kubenswrapper[4986]: I1203 13:16:21.255635 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a0496538-1ab2-45a2-94ab-fc3474533ec3-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a0496538-1ab2-45a2-94ab-fc3474533ec3\") " pod="openstack/openstack-cell1-galera-0" Dec 03 13:16:21 crc kubenswrapper[4986]: I1203 13:16:21.256107 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/a0496538-1ab2-45a2-94ab-fc3474533ec3-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a0496538-1ab2-45a2-94ab-fc3474533ec3\") " pod="openstack/openstack-cell1-galera-0" Dec 03 13:16:21 crc kubenswrapper[4986]: I1203 13:16:21.259616 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0496538-1ab2-45a2-94ab-fc3474533ec3-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a0496538-1ab2-45a2-94ab-fc3474533ec3\") " pod="openstack/openstack-cell1-galera-0" Dec 03 13:16:21 crc kubenswrapper[4986]: I1203 13:16:21.260226 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0496538-1ab2-45a2-94ab-fc3474533ec3-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a0496538-1ab2-45a2-94ab-fc3474533ec3\") " pod="openstack/openstack-cell1-galera-0" Dec 03 13:16:21 crc kubenswrapper[4986]: I1203 13:16:21.270611 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2skp\" (UniqueName: \"kubernetes.io/projected/a0496538-1ab2-45a2-94ab-fc3474533ec3-kube-api-access-x2skp\") pod \"openstack-cell1-galera-0\" (UID: \"a0496538-1ab2-45a2-94ab-fc3474533ec3\") " pod="openstack/openstack-cell1-galera-0" Dec 03 13:16:21 crc kubenswrapper[4986]: I1203 13:16:21.287468 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"a0496538-1ab2-45a2-94ab-fc3474533ec3\") " pod="openstack/openstack-cell1-galera-0" Dec 03 13:16:21 crc kubenswrapper[4986]: I1203 13:16:21.339807 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Dec 03 13:16:21 crc kubenswrapper[4986]: I1203 13:16:21.340790 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Dec 03 13:16:21 crc kubenswrapper[4986]: I1203 13:16:21.344650 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-b6d94" Dec 03 13:16:21 crc kubenswrapper[4986]: I1203 13:16:21.344847 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Dec 03 13:16:21 crc kubenswrapper[4986]: I1203 13:16:21.345594 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Dec 03 13:16:21 crc kubenswrapper[4986]: I1203 13:16:21.350925 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 03 13:16:21 crc kubenswrapper[4986]: I1203 13:16:21.350938 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 03 13:16:21 crc kubenswrapper[4986]: I1203 13:16:21.461052 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb2b35c1-7c5f-4ffc-a178-4f7ade5cc313-combined-ca-bundle\") pod \"memcached-0\" (UID: \"cb2b35c1-7c5f-4ffc-a178-4f7ade5cc313\") " pod="openstack/memcached-0" Dec 03 13:16:21 crc kubenswrapper[4986]: I1203 13:16:21.461274 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s799b\" (UniqueName: \"kubernetes.io/projected/cb2b35c1-7c5f-4ffc-a178-4f7ade5cc313-kube-api-access-s799b\") pod \"memcached-0\" (UID: \"cb2b35c1-7c5f-4ffc-a178-4f7ade5cc313\") " pod="openstack/memcached-0" Dec 03 13:16:21 crc kubenswrapper[4986]: I1203 13:16:21.461364 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb2b35c1-7c5f-4ffc-a178-4f7ade5cc313-memcached-tls-certs\") pod \"memcached-0\" (UID: 
\"cb2b35c1-7c5f-4ffc-a178-4f7ade5cc313\") " pod="openstack/memcached-0" Dec 03 13:16:21 crc kubenswrapper[4986]: I1203 13:16:21.461509 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cb2b35c1-7c5f-4ffc-a178-4f7ade5cc313-kolla-config\") pod \"memcached-0\" (UID: \"cb2b35c1-7c5f-4ffc-a178-4f7ade5cc313\") " pod="openstack/memcached-0" Dec 03 13:16:21 crc kubenswrapper[4986]: I1203 13:16:21.461545 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cb2b35c1-7c5f-4ffc-a178-4f7ade5cc313-config-data\") pod \"memcached-0\" (UID: \"cb2b35c1-7c5f-4ffc-a178-4f7ade5cc313\") " pod="openstack/memcached-0" Dec 03 13:16:21 crc kubenswrapper[4986]: I1203 13:16:21.562852 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cb2b35c1-7c5f-4ffc-a178-4f7ade5cc313-kolla-config\") pod \"memcached-0\" (UID: \"cb2b35c1-7c5f-4ffc-a178-4f7ade5cc313\") " pod="openstack/memcached-0" Dec 03 13:16:21 crc kubenswrapper[4986]: I1203 13:16:21.562910 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cb2b35c1-7c5f-4ffc-a178-4f7ade5cc313-config-data\") pod \"memcached-0\" (UID: \"cb2b35c1-7c5f-4ffc-a178-4f7ade5cc313\") " pod="openstack/memcached-0" Dec 03 13:16:21 crc kubenswrapper[4986]: I1203 13:16:21.562950 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb2b35c1-7c5f-4ffc-a178-4f7ade5cc313-combined-ca-bundle\") pod \"memcached-0\" (UID: \"cb2b35c1-7c5f-4ffc-a178-4f7ade5cc313\") " pod="openstack/memcached-0" Dec 03 13:16:21 crc kubenswrapper[4986]: I1203 13:16:21.563043 4986 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-s799b\" (UniqueName: \"kubernetes.io/projected/cb2b35c1-7c5f-4ffc-a178-4f7ade5cc313-kube-api-access-s799b\") pod \"memcached-0\" (UID: \"cb2b35c1-7c5f-4ffc-a178-4f7ade5cc313\") " pod="openstack/memcached-0" Dec 03 13:16:21 crc kubenswrapper[4986]: I1203 13:16:21.563071 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb2b35c1-7c5f-4ffc-a178-4f7ade5cc313-memcached-tls-certs\") pod \"memcached-0\" (UID: \"cb2b35c1-7c5f-4ffc-a178-4f7ade5cc313\") " pod="openstack/memcached-0" Dec 03 13:16:21 crc kubenswrapper[4986]: I1203 13:16:21.563556 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cb2b35c1-7c5f-4ffc-a178-4f7ade5cc313-kolla-config\") pod \"memcached-0\" (UID: \"cb2b35c1-7c5f-4ffc-a178-4f7ade5cc313\") " pod="openstack/memcached-0" Dec 03 13:16:21 crc kubenswrapper[4986]: I1203 13:16:21.564092 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cb2b35c1-7c5f-4ffc-a178-4f7ade5cc313-config-data\") pod \"memcached-0\" (UID: \"cb2b35c1-7c5f-4ffc-a178-4f7ade5cc313\") " pod="openstack/memcached-0" Dec 03 13:16:21 crc kubenswrapper[4986]: I1203 13:16:21.568064 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb2b35c1-7c5f-4ffc-a178-4f7ade5cc313-memcached-tls-certs\") pod \"memcached-0\" (UID: \"cb2b35c1-7c5f-4ffc-a178-4f7ade5cc313\") " pod="openstack/memcached-0" Dec 03 13:16:21 crc kubenswrapper[4986]: I1203 13:16:21.583058 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb2b35c1-7c5f-4ffc-a178-4f7ade5cc313-combined-ca-bundle\") pod \"memcached-0\" (UID: \"cb2b35c1-7c5f-4ffc-a178-4f7ade5cc313\") " pod="openstack/memcached-0" Dec 
03 13:16:21 crc kubenswrapper[4986]: I1203 13:16:21.589106 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s799b\" (UniqueName: \"kubernetes.io/projected/cb2b35c1-7c5f-4ffc-a178-4f7ade5cc313-kube-api-access-s799b\") pod \"memcached-0\" (UID: \"cb2b35c1-7c5f-4ffc-a178-4f7ade5cc313\") " pod="openstack/memcached-0" Dec 03 13:16:21 crc kubenswrapper[4986]: I1203 13:16:21.665495 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 03 13:16:23 crc kubenswrapper[4986]: I1203 13:16:23.053639 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 03 13:16:23 crc kubenswrapper[4986]: I1203 13:16:23.055025 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 03 13:16:23 crc kubenswrapper[4986]: I1203 13:16:23.057463 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-xkjbs" Dec 03 13:16:23 crc kubenswrapper[4986]: I1203 13:16:23.071773 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 03 13:16:23 crc kubenswrapper[4986]: I1203 13:16:23.200091 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxspb\" (UniqueName: \"kubernetes.io/projected/4ea00525-8d87-4a9e-b1ce-15a73b6bedd1-kube-api-access-jxspb\") pod \"kube-state-metrics-0\" (UID: \"4ea00525-8d87-4a9e-b1ce-15a73b6bedd1\") " pod="openstack/kube-state-metrics-0" Dec 03 13:16:23 crc kubenswrapper[4986]: I1203 13:16:23.301022 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxspb\" (UniqueName: \"kubernetes.io/projected/4ea00525-8d87-4a9e-b1ce-15a73b6bedd1-kube-api-access-jxspb\") pod \"kube-state-metrics-0\" (UID: \"4ea00525-8d87-4a9e-b1ce-15a73b6bedd1\") " pod="openstack/kube-state-metrics-0" Dec 03 13:16:23 
crc kubenswrapper[4986]: I1203 13:16:23.331982 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxspb\" (UniqueName: \"kubernetes.io/projected/4ea00525-8d87-4a9e-b1ce-15a73b6bedd1-kube-api-access-jxspb\") pod \"kube-state-metrics-0\" (UID: \"4ea00525-8d87-4a9e-b1ce-15a73b6bedd1\") " pod="openstack/kube-state-metrics-0" Dec 03 13:16:23 crc kubenswrapper[4986]: I1203 13:16:23.372607 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 03 13:16:26 crc kubenswrapper[4986]: I1203 13:16:26.022541 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-kqn7t"] Dec 03 13:16:26 crc kubenswrapper[4986]: I1203 13:16:26.026997 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kqn7t" Dec 03 13:16:26 crc kubenswrapper[4986]: I1203 13:16:26.029875 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Dec 03 13:16:26 crc kubenswrapper[4986]: I1203 13:16:26.030136 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-s4n2q" Dec 03 13:16:26 crc kubenswrapper[4986]: I1203 13:16:26.034336 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-kqn7t"] Dec 03 13:16:26 crc kubenswrapper[4986]: I1203 13:16:26.034765 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Dec 03 13:16:26 crc kubenswrapper[4986]: I1203 13:16:26.090869 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-45czf"] Dec 03 13:16:26 crc kubenswrapper[4986]: I1203 13:16:26.095719 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-45czf" Dec 03 13:16:26 crc kubenswrapper[4986]: I1203 13:16:26.131185 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-45czf"] Dec 03 13:16:26 crc kubenswrapper[4986]: I1203 13:16:26.148861 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d06f8249-00a2-4e59-a055-82ab737c7b92-ovn-controller-tls-certs\") pod \"ovn-controller-kqn7t\" (UID: \"d06f8249-00a2-4e59-a055-82ab737c7b92\") " pod="openstack/ovn-controller-kqn7t" Dec 03 13:16:26 crc kubenswrapper[4986]: I1203 13:16:26.148966 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d06f8249-00a2-4e59-a055-82ab737c7b92-var-log-ovn\") pod \"ovn-controller-kqn7t\" (UID: \"d06f8249-00a2-4e59-a055-82ab737c7b92\") " pod="openstack/ovn-controller-kqn7t" Dec 03 13:16:26 crc kubenswrapper[4986]: I1203 13:16:26.148988 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d06f8249-00a2-4e59-a055-82ab737c7b92-combined-ca-bundle\") pod \"ovn-controller-kqn7t\" (UID: \"d06f8249-00a2-4e59-a055-82ab737c7b92\") " pod="openstack/ovn-controller-kqn7t" Dec 03 13:16:26 crc kubenswrapper[4986]: I1203 13:16:26.149015 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d06f8249-00a2-4e59-a055-82ab737c7b92-var-run-ovn\") pod \"ovn-controller-kqn7t\" (UID: \"d06f8249-00a2-4e59-a055-82ab737c7b92\") " pod="openstack/ovn-controller-kqn7t" Dec 03 13:16:26 crc kubenswrapper[4986]: I1203 13:16:26.149051 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2zvf\" 
(UniqueName: \"kubernetes.io/projected/d06f8249-00a2-4e59-a055-82ab737c7b92-kube-api-access-g2zvf\") pod \"ovn-controller-kqn7t\" (UID: \"d06f8249-00a2-4e59-a055-82ab737c7b92\") " pod="openstack/ovn-controller-kqn7t" Dec 03 13:16:26 crc kubenswrapper[4986]: I1203 13:16:26.149146 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d06f8249-00a2-4e59-a055-82ab737c7b92-var-run\") pod \"ovn-controller-kqn7t\" (UID: \"d06f8249-00a2-4e59-a055-82ab737c7b92\") " pod="openstack/ovn-controller-kqn7t" Dec 03 13:16:26 crc kubenswrapper[4986]: I1203 13:16:26.149182 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d06f8249-00a2-4e59-a055-82ab737c7b92-scripts\") pod \"ovn-controller-kqn7t\" (UID: \"d06f8249-00a2-4e59-a055-82ab737c7b92\") " pod="openstack/ovn-controller-kqn7t" Dec 03 13:16:26 crc kubenswrapper[4986]: I1203 13:16:26.251077 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d06f8249-00a2-4e59-a055-82ab737c7b92-ovn-controller-tls-certs\") pod \"ovn-controller-kqn7t\" (UID: \"d06f8249-00a2-4e59-a055-82ab737c7b92\") " pod="openstack/ovn-controller-kqn7t" Dec 03 13:16:26 crc kubenswrapper[4986]: I1203 13:16:26.251141 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5510fce4-e81b-4089-a5c3-4c4b6c72d9e0-var-log\") pod \"ovn-controller-ovs-45czf\" (UID: \"5510fce4-e81b-4089-a5c3-4c4b6c72d9e0\") " pod="openstack/ovn-controller-ovs-45czf" Dec 03 13:16:26 crc kubenswrapper[4986]: I1203 13:16:26.251186 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d06f8249-00a2-4e59-a055-82ab737c7b92-var-log-ovn\") 
pod \"ovn-controller-kqn7t\" (UID: \"d06f8249-00a2-4e59-a055-82ab737c7b92\") " pod="openstack/ovn-controller-kqn7t" Dec 03 13:16:26 crc kubenswrapper[4986]: I1203 13:16:26.251207 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d06f8249-00a2-4e59-a055-82ab737c7b92-combined-ca-bundle\") pod \"ovn-controller-kqn7t\" (UID: \"d06f8249-00a2-4e59-a055-82ab737c7b92\") " pod="openstack/ovn-controller-kqn7t" Dec 03 13:16:26 crc kubenswrapper[4986]: I1203 13:16:26.251236 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d06f8249-00a2-4e59-a055-82ab737c7b92-var-run-ovn\") pod \"ovn-controller-kqn7t\" (UID: \"d06f8249-00a2-4e59-a055-82ab737c7b92\") " pod="openstack/ovn-controller-kqn7t" Dec 03 13:16:26 crc kubenswrapper[4986]: I1203 13:16:26.251261 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5510fce4-e81b-4089-a5c3-4c4b6c72d9e0-scripts\") pod \"ovn-controller-ovs-45czf\" (UID: \"5510fce4-e81b-4089-a5c3-4c4b6c72d9e0\") " pod="openstack/ovn-controller-ovs-45czf" Dec 03 13:16:26 crc kubenswrapper[4986]: I1203 13:16:26.251304 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2zvf\" (UniqueName: \"kubernetes.io/projected/d06f8249-00a2-4e59-a055-82ab737c7b92-kube-api-access-g2zvf\") pod \"ovn-controller-kqn7t\" (UID: \"d06f8249-00a2-4e59-a055-82ab737c7b92\") " pod="openstack/ovn-controller-kqn7t" Dec 03 13:16:26 crc kubenswrapper[4986]: I1203 13:16:26.251341 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f68sc\" (UniqueName: \"kubernetes.io/projected/5510fce4-e81b-4089-a5c3-4c4b6c72d9e0-kube-api-access-f68sc\") pod \"ovn-controller-ovs-45czf\" (UID: 
\"5510fce4-e81b-4089-a5c3-4c4b6c72d9e0\") " pod="openstack/ovn-controller-ovs-45czf" Dec 03 13:16:26 crc kubenswrapper[4986]: I1203 13:16:26.251379 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5510fce4-e81b-4089-a5c3-4c4b6c72d9e0-var-run\") pod \"ovn-controller-ovs-45czf\" (UID: \"5510fce4-e81b-4089-a5c3-4c4b6c72d9e0\") " pod="openstack/ovn-controller-ovs-45czf" Dec 03 13:16:26 crc kubenswrapper[4986]: I1203 13:16:26.251404 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5510fce4-e81b-4089-a5c3-4c4b6c72d9e0-var-lib\") pod \"ovn-controller-ovs-45czf\" (UID: \"5510fce4-e81b-4089-a5c3-4c4b6c72d9e0\") " pod="openstack/ovn-controller-ovs-45czf" Dec 03 13:16:26 crc kubenswrapper[4986]: I1203 13:16:26.251429 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d06f8249-00a2-4e59-a055-82ab737c7b92-var-run\") pod \"ovn-controller-kqn7t\" (UID: \"d06f8249-00a2-4e59-a055-82ab737c7b92\") " pod="openstack/ovn-controller-kqn7t" Dec 03 13:16:26 crc kubenswrapper[4986]: I1203 13:16:26.251453 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/5510fce4-e81b-4089-a5c3-4c4b6c72d9e0-etc-ovs\") pod \"ovn-controller-ovs-45czf\" (UID: \"5510fce4-e81b-4089-a5c3-4c4b6c72d9e0\") " pod="openstack/ovn-controller-ovs-45czf" Dec 03 13:16:26 crc kubenswrapper[4986]: I1203 13:16:26.251482 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d06f8249-00a2-4e59-a055-82ab737c7b92-scripts\") pod \"ovn-controller-kqn7t\" (UID: \"d06f8249-00a2-4e59-a055-82ab737c7b92\") " pod="openstack/ovn-controller-kqn7t" Dec 03 13:16:26 crc 
kubenswrapper[4986]: I1203 13:16:26.253665 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d06f8249-00a2-4e59-a055-82ab737c7b92-scripts\") pod \"ovn-controller-kqn7t\" (UID: \"d06f8249-00a2-4e59-a055-82ab737c7b92\") " pod="openstack/ovn-controller-kqn7t" Dec 03 13:16:26 crc kubenswrapper[4986]: I1203 13:16:26.255749 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d06f8249-00a2-4e59-a055-82ab737c7b92-var-run\") pod \"ovn-controller-kqn7t\" (UID: \"d06f8249-00a2-4e59-a055-82ab737c7b92\") " pod="openstack/ovn-controller-kqn7t" Dec 03 13:16:26 crc kubenswrapper[4986]: I1203 13:16:26.255872 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d06f8249-00a2-4e59-a055-82ab737c7b92-var-run-ovn\") pod \"ovn-controller-kqn7t\" (UID: \"d06f8249-00a2-4e59-a055-82ab737c7b92\") " pod="openstack/ovn-controller-kqn7t" Dec 03 13:16:26 crc kubenswrapper[4986]: I1203 13:16:26.255975 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d06f8249-00a2-4e59-a055-82ab737c7b92-var-log-ovn\") pod \"ovn-controller-kqn7t\" (UID: \"d06f8249-00a2-4e59-a055-82ab737c7b92\") " pod="openstack/ovn-controller-kqn7t" Dec 03 13:16:26 crc kubenswrapper[4986]: I1203 13:16:26.266260 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d06f8249-00a2-4e59-a055-82ab737c7b92-ovn-controller-tls-certs\") pod \"ovn-controller-kqn7t\" (UID: \"d06f8249-00a2-4e59-a055-82ab737c7b92\") " pod="openstack/ovn-controller-kqn7t" Dec 03 13:16:26 crc kubenswrapper[4986]: I1203 13:16:26.275139 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d06f8249-00a2-4e59-a055-82ab737c7b92-combined-ca-bundle\") pod \"ovn-controller-kqn7t\" (UID: \"d06f8249-00a2-4e59-a055-82ab737c7b92\") " pod="openstack/ovn-controller-kqn7t" Dec 03 13:16:26 crc kubenswrapper[4986]: I1203 13:16:26.284579 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2zvf\" (UniqueName: \"kubernetes.io/projected/d06f8249-00a2-4e59-a055-82ab737c7b92-kube-api-access-g2zvf\") pod \"ovn-controller-kqn7t\" (UID: \"d06f8249-00a2-4e59-a055-82ab737c7b92\") " pod="openstack/ovn-controller-kqn7t" Dec 03 13:16:26 crc kubenswrapper[4986]: I1203 13:16:26.348217 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kqn7t" Dec 03 13:16:26 crc kubenswrapper[4986]: I1203 13:16:26.352694 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5510fce4-e81b-4089-a5c3-4c4b6c72d9e0-scripts\") pod \"ovn-controller-ovs-45czf\" (UID: \"5510fce4-e81b-4089-a5c3-4c4b6c72d9e0\") " pod="openstack/ovn-controller-ovs-45czf" Dec 03 13:16:26 crc kubenswrapper[4986]: I1203 13:16:26.352752 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f68sc\" (UniqueName: \"kubernetes.io/projected/5510fce4-e81b-4089-a5c3-4c4b6c72d9e0-kube-api-access-f68sc\") pod \"ovn-controller-ovs-45czf\" (UID: \"5510fce4-e81b-4089-a5c3-4c4b6c72d9e0\") " pod="openstack/ovn-controller-ovs-45czf" Dec 03 13:16:26 crc kubenswrapper[4986]: I1203 13:16:26.352782 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5510fce4-e81b-4089-a5c3-4c4b6c72d9e0-var-run\") pod \"ovn-controller-ovs-45czf\" (UID: \"5510fce4-e81b-4089-a5c3-4c4b6c72d9e0\") " pod="openstack/ovn-controller-ovs-45czf" Dec 03 13:16:26 crc kubenswrapper[4986]: I1203 13:16:26.352800 4986 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5510fce4-e81b-4089-a5c3-4c4b6c72d9e0-var-lib\") pod \"ovn-controller-ovs-45czf\" (UID: \"5510fce4-e81b-4089-a5c3-4c4b6c72d9e0\") " pod="openstack/ovn-controller-ovs-45czf" Dec 03 13:16:26 crc kubenswrapper[4986]: I1203 13:16:26.352822 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/5510fce4-e81b-4089-a5c3-4c4b6c72d9e0-etc-ovs\") pod \"ovn-controller-ovs-45czf\" (UID: \"5510fce4-e81b-4089-a5c3-4c4b6c72d9e0\") " pod="openstack/ovn-controller-ovs-45czf" Dec 03 13:16:26 crc kubenswrapper[4986]: I1203 13:16:26.352876 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5510fce4-e81b-4089-a5c3-4c4b6c72d9e0-var-log\") pod \"ovn-controller-ovs-45czf\" (UID: \"5510fce4-e81b-4089-a5c3-4c4b6c72d9e0\") " pod="openstack/ovn-controller-ovs-45czf" Dec 03 13:16:26 crc kubenswrapper[4986]: I1203 13:16:26.353103 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5510fce4-e81b-4089-a5c3-4c4b6c72d9e0-var-log\") pod \"ovn-controller-ovs-45czf\" (UID: \"5510fce4-e81b-4089-a5c3-4c4b6c72d9e0\") " pod="openstack/ovn-controller-ovs-45czf" Dec 03 13:16:26 crc kubenswrapper[4986]: I1203 13:16:26.354662 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5510fce4-e81b-4089-a5c3-4c4b6c72d9e0-scripts\") pod \"ovn-controller-ovs-45czf\" (UID: \"5510fce4-e81b-4089-a5c3-4c4b6c72d9e0\") " pod="openstack/ovn-controller-ovs-45czf" Dec 03 13:16:26 crc kubenswrapper[4986]: I1203 13:16:26.354934 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5510fce4-e81b-4089-a5c3-4c4b6c72d9e0-var-run\") pod \"ovn-controller-ovs-45czf\" (UID: 
\"5510fce4-e81b-4089-a5c3-4c4b6c72d9e0\") " pod="openstack/ovn-controller-ovs-45czf" Dec 03 13:16:26 crc kubenswrapper[4986]: I1203 13:16:26.355086 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5510fce4-e81b-4089-a5c3-4c4b6c72d9e0-var-lib\") pod \"ovn-controller-ovs-45czf\" (UID: \"5510fce4-e81b-4089-a5c3-4c4b6c72d9e0\") " pod="openstack/ovn-controller-ovs-45czf" Dec 03 13:16:26 crc kubenswrapper[4986]: I1203 13:16:26.355222 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/5510fce4-e81b-4089-a5c3-4c4b6c72d9e0-etc-ovs\") pod \"ovn-controller-ovs-45czf\" (UID: \"5510fce4-e81b-4089-a5c3-4c4b6c72d9e0\") " pod="openstack/ovn-controller-ovs-45czf" Dec 03 13:16:26 crc kubenswrapper[4986]: I1203 13:16:26.375226 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f68sc\" (UniqueName: \"kubernetes.io/projected/5510fce4-e81b-4089-a5c3-4c4b6c72d9e0-kube-api-access-f68sc\") pod \"ovn-controller-ovs-45czf\" (UID: \"5510fce4-e81b-4089-a5c3-4c4b6c72d9e0\") " pod="openstack/ovn-controller-ovs-45czf" Dec 03 13:16:26 crc kubenswrapper[4986]: I1203 13:16:26.464963 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-45czf" Dec 03 13:16:28 crc kubenswrapper[4986]: I1203 13:16:28.152685 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 03 13:16:28 crc kubenswrapper[4986]: I1203 13:16:28.155884 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 03 13:16:28 crc kubenswrapper[4986]: I1203 13:16:28.159473 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Dec 03 13:16:28 crc kubenswrapper[4986]: I1203 13:16:28.159486 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Dec 03 13:16:28 crc kubenswrapper[4986]: I1203 13:16:28.159783 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Dec 03 13:16:28 crc kubenswrapper[4986]: I1203 13:16:28.160122 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-6zrl4" Dec 03 13:16:28 crc kubenswrapper[4986]: I1203 13:16:28.161615 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Dec 03 13:16:28 crc kubenswrapper[4986]: I1203 13:16:28.166166 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 03 13:16:28 crc kubenswrapper[4986]: I1203 13:16:28.283296 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/905d78e0-0235-400d-8004-1f612a11b60a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"905d78e0-0235-400d-8004-1f612a11b60a\") " pod="openstack/ovsdbserver-nb-0" Dec 03 13:16:28 crc kubenswrapper[4986]: I1203 13:16:28.283369 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"905d78e0-0235-400d-8004-1f612a11b60a\") " pod="openstack/ovsdbserver-nb-0" Dec 03 13:16:28 crc kubenswrapper[4986]: I1203 13:16:28.283400 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/905d78e0-0235-400d-8004-1f612a11b60a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"905d78e0-0235-400d-8004-1f612a11b60a\") " pod="openstack/ovsdbserver-nb-0" Dec 03 13:16:28 crc kubenswrapper[4986]: I1203 13:16:28.283424 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/905d78e0-0235-400d-8004-1f612a11b60a-config\") pod \"ovsdbserver-nb-0\" (UID: \"905d78e0-0235-400d-8004-1f612a11b60a\") " pod="openstack/ovsdbserver-nb-0" Dec 03 13:16:28 crc kubenswrapper[4986]: I1203 13:16:28.283477 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/905d78e0-0235-400d-8004-1f612a11b60a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"905d78e0-0235-400d-8004-1f612a11b60a\") " pod="openstack/ovsdbserver-nb-0" Dec 03 13:16:28 crc kubenswrapper[4986]: I1203 13:16:28.283524 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm5bc\" (UniqueName: \"kubernetes.io/projected/905d78e0-0235-400d-8004-1f612a11b60a-kube-api-access-qm5bc\") pod \"ovsdbserver-nb-0\" (UID: \"905d78e0-0235-400d-8004-1f612a11b60a\") " pod="openstack/ovsdbserver-nb-0" Dec 03 13:16:28 crc kubenswrapper[4986]: I1203 13:16:28.283576 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/905d78e0-0235-400d-8004-1f612a11b60a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"905d78e0-0235-400d-8004-1f612a11b60a\") " pod="openstack/ovsdbserver-nb-0" Dec 03 13:16:28 crc kubenswrapper[4986]: I1203 13:16:28.283597 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/905d78e0-0235-400d-8004-1f612a11b60a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"905d78e0-0235-400d-8004-1f612a11b60a\") " pod="openstack/ovsdbserver-nb-0" Dec 03 13:16:28 crc kubenswrapper[4986]: E1203 13:16:28.320760 4986 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 03 13:16:28 crc kubenswrapper[4986]: E1203 13:16:28.320934 4986 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c27f4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabil
ities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-xznmr_openstack(6dcfdd67-10b7-42e4-902e-4d3c65b287dd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 13:16:28 crc kubenswrapper[4986]: E1203 13:16:28.322265 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-xznmr" podUID="6dcfdd67-10b7-42e4-902e-4d3c65b287dd" Dec 03 13:16:28 crc kubenswrapper[4986]: I1203 13:16:28.384518 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/905d78e0-0235-400d-8004-1f612a11b60a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"905d78e0-0235-400d-8004-1f612a11b60a\") " pod="openstack/ovsdbserver-nb-0" Dec 03 13:16:28 crc kubenswrapper[4986]: I1203 13:16:28.384834 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"905d78e0-0235-400d-8004-1f612a11b60a\") " pod="openstack/ovsdbserver-nb-0" Dec 03 13:16:28 crc kubenswrapper[4986]: I1203 13:16:28.384856 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/905d78e0-0235-400d-8004-1f612a11b60a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"905d78e0-0235-400d-8004-1f612a11b60a\") " pod="openstack/ovsdbserver-nb-0" Dec 03 13:16:28 crc kubenswrapper[4986]: I1203 13:16:28.384875 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/905d78e0-0235-400d-8004-1f612a11b60a-config\") pod \"ovsdbserver-nb-0\" (UID: \"905d78e0-0235-400d-8004-1f612a11b60a\") " pod="openstack/ovsdbserver-nb-0" Dec 03 13:16:28 crc kubenswrapper[4986]: I1203 13:16:28.384906 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/905d78e0-0235-400d-8004-1f612a11b60a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"905d78e0-0235-400d-8004-1f612a11b60a\") " pod="openstack/ovsdbserver-nb-0" Dec 03 13:16:28 crc kubenswrapper[4986]: I1203 13:16:28.384940 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qm5bc\" (UniqueName: \"kubernetes.io/projected/905d78e0-0235-400d-8004-1f612a11b60a-kube-api-access-qm5bc\") pod \"ovsdbserver-nb-0\" (UID: \"905d78e0-0235-400d-8004-1f612a11b60a\") " pod="openstack/ovsdbserver-nb-0" Dec 03 13:16:28 crc kubenswrapper[4986]: I1203 13:16:28.384978 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/905d78e0-0235-400d-8004-1f612a11b60a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"905d78e0-0235-400d-8004-1f612a11b60a\") " pod="openstack/ovsdbserver-nb-0" Dec 03 13:16:28 crc kubenswrapper[4986]: I1203 13:16:28.384993 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/905d78e0-0235-400d-8004-1f612a11b60a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"905d78e0-0235-400d-8004-1f612a11b60a\") " 
pod="openstack/ovsdbserver-nb-0" Dec 03 13:16:28 crc kubenswrapper[4986]: I1203 13:16:28.385401 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/905d78e0-0235-400d-8004-1f612a11b60a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"905d78e0-0235-400d-8004-1f612a11b60a\") " pod="openstack/ovsdbserver-nb-0" Dec 03 13:16:28 crc kubenswrapper[4986]: I1203 13:16:28.386056 4986 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"905d78e0-0235-400d-8004-1f612a11b60a\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/ovsdbserver-nb-0" Dec 03 13:16:28 crc kubenswrapper[4986]: I1203 13:16:28.389710 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/905d78e0-0235-400d-8004-1f612a11b60a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"905d78e0-0235-400d-8004-1f612a11b60a\") " pod="openstack/ovsdbserver-nb-0" Dec 03 13:16:28 crc kubenswrapper[4986]: I1203 13:16:28.389864 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/905d78e0-0235-400d-8004-1f612a11b60a-config\") pod \"ovsdbserver-nb-0\" (UID: \"905d78e0-0235-400d-8004-1f612a11b60a\") " pod="openstack/ovsdbserver-nb-0" Dec 03 13:16:28 crc kubenswrapper[4986]: I1203 13:16:28.395233 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/905d78e0-0235-400d-8004-1f612a11b60a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"905d78e0-0235-400d-8004-1f612a11b60a\") " pod="openstack/ovsdbserver-nb-0" Dec 03 13:16:28 crc kubenswrapper[4986]: I1203 13:16:28.401378 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/905d78e0-0235-400d-8004-1f612a11b60a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"905d78e0-0235-400d-8004-1f612a11b60a\") " pod="openstack/ovsdbserver-nb-0" Dec 03 13:16:28 crc kubenswrapper[4986]: I1203 13:16:28.403072 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/905d78e0-0235-400d-8004-1f612a11b60a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"905d78e0-0235-400d-8004-1f612a11b60a\") " pod="openstack/ovsdbserver-nb-0" Dec 03 13:16:28 crc kubenswrapper[4986]: I1203 13:16:28.404230 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qm5bc\" (UniqueName: \"kubernetes.io/projected/905d78e0-0235-400d-8004-1f612a11b60a-kube-api-access-qm5bc\") pod \"ovsdbserver-nb-0\" (UID: \"905d78e0-0235-400d-8004-1f612a11b60a\") " pod="openstack/ovsdbserver-nb-0" Dec 03 13:16:28 crc kubenswrapper[4986]: I1203 13:16:28.447700 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"905d78e0-0235-400d-8004-1f612a11b60a\") " pod="openstack/ovsdbserver-nb-0" Dec 03 13:16:28 crc kubenswrapper[4986]: I1203 13:16:28.481813 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 03 13:16:28 crc kubenswrapper[4986]: I1203 13:16:28.791228 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 03 13:16:28 crc kubenswrapper[4986]: I1203 13:16:28.961988 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-xznmr" Dec 03 13:16:29 crc kubenswrapper[4986]: I1203 13:16:29.103091 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dcfdd67-10b7-42e4-902e-4d3c65b287dd-config\") pod \"6dcfdd67-10b7-42e4-902e-4d3c65b287dd\" (UID: \"6dcfdd67-10b7-42e4-902e-4d3c65b287dd\") " Dec 03 13:16:29 crc kubenswrapper[4986]: I1203 13:16:29.103190 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c27f4\" (UniqueName: \"kubernetes.io/projected/6dcfdd67-10b7-42e4-902e-4d3c65b287dd-kube-api-access-c27f4\") pod \"6dcfdd67-10b7-42e4-902e-4d3c65b287dd\" (UID: \"6dcfdd67-10b7-42e4-902e-4d3c65b287dd\") " Dec 03 13:16:29 crc kubenswrapper[4986]: I1203 13:16:29.104681 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dcfdd67-10b7-42e4-902e-4d3c65b287dd-config" (OuterVolumeSpecName: "config") pod "6dcfdd67-10b7-42e4-902e-4d3c65b287dd" (UID: "6dcfdd67-10b7-42e4-902e-4d3c65b287dd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:16:29 crc kubenswrapper[4986]: I1203 13:16:29.109578 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dcfdd67-10b7-42e4-902e-4d3c65b287dd-kube-api-access-c27f4" (OuterVolumeSpecName: "kube-api-access-c27f4") pod "6dcfdd67-10b7-42e4-902e-4d3c65b287dd" (UID: "6dcfdd67-10b7-42e4-902e-4d3c65b287dd"). InnerVolumeSpecName "kube-api-access-c27f4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:16:29 crc kubenswrapper[4986]: I1203 13:16:29.204667 4986 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dcfdd67-10b7-42e4-902e-4d3c65b287dd-config\") on node \"crc\" DevicePath \"\"" Dec 03 13:16:29 crc kubenswrapper[4986]: I1203 13:16:29.204695 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c27f4\" (UniqueName: \"kubernetes.io/projected/6dcfdd67-10b7-42e4-902e-4d3c65b287dd-kube-api-access-c27f4\") on node \"crc\" DevicePath \"\"" Dec 03 13:16:29 crc kubenswrapper[4986]: I1203 13:16:29.309840 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-7jms6"] Dec 03 13:16:29 crc kubenswrapper[4986]: W1203 13:16:29.311056 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5fd53494_2eec_45b8_80f4_7b94abf29bfb.slice/crio-d018a12a0a3ddc8f550dbb32d955d6b1dc1d052d776a51a6507515595ae58fef WatchSource:0}: Error finding container d018a12a0a3ddc8f550dbb32d955d6b1dc1d052d776a51a6507515595ae58fef: Status 404 returned error can't find the container with id d018a12a0a3ddc8f550dbb32d955d6b1dc1d052d776a51a6507515595ae58fef Dec 03 13:16:29 crc kubenswrapper[4986]: W1203 13:16:29.311714 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd06f8249_00a2_4e59_a055_82ab737c7b92.slice/crio-a357accc083a4012f58682bf8539cc36668bcecdeaab9cf377abae29ab376aab WatchSource:0}: Error finding container a357accc083a4012f58682bf8539cc36668bcecdeaab9cf377abae29ab376aab: Status 404 returned error can't find the container with id a357accc083a4012f58682bf8539cc36668bcecdeaab9cf377abae29ab376aab Dec 03 13:16:29 crc kubenswrapper[4986]: I1203 13:16:29.324676 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-kqn7t"] Dec 03 
13:16:29 crc kubenswrapper[4986]: I1203 13:16:29.340658 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 03 13:16:29 crc kubenswrapper[4986]: I1203 13:16:29.365430 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 03 13:16:29 crc kubenswrapper[4986]: I1203 13:16:29.390852 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-smttz"] Dec 03 13:16:29 crc kubenswrapper[4986]: I1203 13:16:29.406425 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 03 13:16:29 crc kubenswrapper[4986]: I1203 13:16:29.407256 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 03 13:16:29 crc kubenswrapper[4986]: I1203 13:16:29.412382 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 03 13:16:29 crc kubenswrapper[4986]: I1203 13:16:29.536207 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kqn7t" event={"ID":"d06f8249-00a2-4e59-a055-82ab737c7b92","Type":"ContainerStarted","Data":"a357accc083a4012f58682bf8539cc36668bcecdeaab9cf377abae29ab376aab"} Dec 03 13:16:29 crc kubenswrapper[4986]: I1203 13:16:29.539780 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"cb2b35c1-7c5f-4ffc-a178-4f7ade5cc313","Type":"ContainerStarted","Data":"44207ae570882ccd2b1705475573c9a2bad0dd789d957d5316e78485dedb890d"} Dec 03 13:16:29 crc kubenswrapper[4986]: I1203 13:16:29.541669 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"285bb825-5a17-4d45-87d6-852513d0351b","Type":"ContainerStarted","Data":"c8375457046ecdbee80a482cf08fbc1e34dcaecc5d8c276ad3770f3e9a91ff2e"} Dec 03 13:16:29 crc kubenswrapper[4986]: I1203 13:16:29.546428 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"5e8a62bd-1e92-464d-b905-8eb18cc44646","Type":"ContainerStarted","Data":"b1ebe45443ee46ac60425599ab9f28dcb0701675bbd250c8a50ac52f0f0c5dcd"} Dec 03 13:16:29 crc kubenswrapper[4986]: I1203 13:16:29.561212 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a0496538-1ab2-45a2-94ab-fc3474533ec3","Type":"ContainerStarted","Data":"3aaa1e3afa9efcba3efc2dfefda621d48676157b96e93e46f6d421af99ccd22a"} Dec 03 13:16:29 crc kubenswrapper[4986]: I1203 13:16:29.563362 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-7jms6" event={"ID":"5fd53494-2eec-45b8-80f4-7b94abf29bfb","Type":"ContainerStarted","Data":"d018a12a0a3ddc8f550dbb32d955d6b1dc1d052d776a51a6507515595ae58fef"} Dec 03 13:16:29 crc kubenswrapper[4986]: I1203 13:16:29.564298 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 03 13:16:29 crc kubenswrapper[4986]: I1203 13:16:29.565237 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4ea00525-8d87-4a9e-b1ce-15a73b6bedd1","Type":"ContainerStarted","Data":"e4c035369ecf2e51ba81488e3769bde30a14cd96309bca64d160bb61d5425d0e"} Dec 03 13:16:29 crc kubenswrapper[4986]: I1203 13:16:29.572353 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"50659f56-763b-4cac-9ab4-d660c7d777af","Type":"ContainerStarted","Data":"4fd020e71abcc5d1c680c7854c4f38bf776293ab09d7b01c12446254ba07959c"} Dec 03 13:16:29 crc kubenswrapper[4986]: I1203 13:16:29.573390 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-xznmr" event={"ID":"6dcfdd67-10b7-42e4-902e-4d3c65b287dd","Type":"ContainerDied","Data":"dfe562c10f53796d9e38dffcc69e302742755e9f8a4512a75b7af7e450ea0121"} Dec 03 13:16:29 crc kubenswrapper[4986]: I1203 13:16:29.573518 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-xznmr" Dec 03 13:16:29 crc kubenswrapper[4986]: I1203 13:16:29.580508 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-smttz" event={"ID":"b0a895ed-eb3a-49d5-95fe-fe470db0eca3","Type":"ContainerStarted","Data":"8b8fe1afc59d5de40601b7395fd1149674d1fb559213a95cbb85d7974a6d280e"} Dec 03 13:16:29 crc kubenswrapper[4986]: I1203 13:16:29.587262 4986 generic.go:334] "Generic (PLEG): container finished" podID="a4bb76c2-97be-4087-aa72-70fd50a5731c" containerID="8de5b8bf5d0e20e31a7bd809936729e8ae83e4c668433eee4f82b8cb2c33a96f" exitCode=0 Dec 03 13:16:29 crc kubenswrapper[4986]: I1203 13:16:29.587318 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-sbp97" event={"ID":"a4bb76c2-97be-4087-aa72-70fd50a5731c","Type":"ContainerDied","Data":"8de5b8bf5d0e20e31a7bd809936729e8ae83e4c668433eee4f82b8cb2c33a96f"} Dec 03 13:16:29 crc kubenswrapper[4986]: I1203 13:16:29.687923 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-xznmr"] Dec 03 13:16:29 crc kubenswrapper[4986]: I1203 13:16:29.694108 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-xznmr"] Dec 03 13:16:29 crc kubenswrapper[4986]: I1203 13:16:29.947154 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-sbp97" Dec 03 13:16:30 crc kubenswrapper[4986]: I1203 13:16:30.122913 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4bb76c2-97be-4087-aa72-70fd50a5731c-config\") pod \"a4bb76c2-97be-4087-aa72-70fd50a5731c\" (UID: \"a4bb76c2-97be-4087-aa72-70fd50a5731c\") " Dec 03 13:16:30 crc kubenswrapper[4986]: I1203 13:16:30.122991 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nm4xd\" (UniqueName: \"kubernetes.io/projected/a4bb76c2-97be-4087-aa72-70fd50a5731c-kube-api-access-nm4xd\") pod \"a4bb76c2-97be-4087-aa72-70fd50a5731c\" (UID: \"a4bb76c2-97be-4087-aa72-70fd50a5731c\") " Dec 03 13:16:30 crc kubenswrapper[4986]: I1203 13:16:30.123108 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4bb76c2-97be-4087-aa72-70fd50a5731c-dns-svc\") pod \"a4bb76c2-97be-4087-aa72-70fd50a5731c\" (UID: \"a4bb76c2-97be-4087-aa72-70fd50a5731c\") " Dec 03 13:16:30 crc kubenswrapper[4986]: I1203 13:16:30.129006 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4bb76c2-97be-4087-aa72-70fd50a5731c-kube-api-access-nm4xd" (OuterVolumeSpecName: "kube-api-access-nm4xd") pod "a4bb76c2-97be-4087-aa72-70fd50a5731c" (UID: "a4bb76c2-97be-4087-aa72-70fd50a5731c"). InnerVolumeSpecName "kube-api-access-nm4xd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:16:30 crc kubenswrapper[4986]: I1203 13:16:30.145850 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4bb76c2-97be-4087-aa72-70fd50a5731c-config" (OuterVolumeSpecName: "config") pod "a4bb76c2-97be-4087-aa72-70fd50a5731c" (UID: "a4bb76c2-97be-4087-aa72-70fd50a5731c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:16:30 crc kubenswrapper[4986]: I1203 13:16:30.146669 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4bb76c2-97be-4087-aa72-70fd50a5731c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a4bb76c2-97be-4087-aa72-70fd50a5731c" (UID: "a4bb76c2-97be-4087-aa72-70fd50a5731c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:16:30 crc kubenswrapper[4986]: I1203 13:16:30.226408 4986 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4bb76c2-97be-4087-aa72-70fd50a5731c-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 13:16:30 crc kubenswrapper[4986]: I1203 13:16:30.226442 4986 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4bb76c2-97be-4087-aa72-70fd50a5731c-config\") on node \"crc\" DevicePath \"\"" Dec 03 13:16:30 crc kubenswrapper[4986]: I1203 13:16:30.226451 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nm4xd\" (UniqueName: \"kubernetes.io/projected/a4bb76c2-97be-4087-aa72-70fd50a5731c-kube-api-access-nm4xd\") on node \"crc\" DevicePath \"\"" Dec 03 13:16:30 crc kubenswrapper[4986]: I1203 13:16:30.418884 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-45czf"] Dec 03 13:16:30 crc kubenswrapper[4986]: I1203 13:16:30.567166 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 03 13:16:30 crc kubenswrapper[4986]: E1203 13:16:30.569517 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4bb76c2-97be-4087-aa72-70fd50a5731c" containerName="init" Dec 03 13:16:30 crc kubenswrapper[4986]: I1203 13:16:30.569550 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4bb76c2-97be-4087-aa72-70fd50a5731c" containerName="init" Dec 03 13:16:30 crc kubenswrapper[4986]: I1203 
13:16:30.569941 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4bb76c2-97be-4087-aa72-70fd50a5731c" containerName="init" Dec 03 13:16:30 crc kubenswrapper[4986]: I1203 13:16:30.571092 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 03 13:16:30 crc kubenswrapper[4986]: I1203 13:16:30.573546 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Dec 03 13:16:30 crc kubenswrapper[4986]: I1203 13:16:30.574016 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Dec 03 13:16:30 crc kubenswrapper[4986]: I1203 13:16:30.574723 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 03 13:16:30 crc kubenswrapper[4986]: I1203 13:16:30.574896 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-p84fk" Dec 03 13:16:30 crc kubenswrapper[4986]: I1203 13:16:30.575181 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Dec 03 13:16:30 crc kubenswrapper[4986]: I1203 13:16:30.622548 4986 generic.go:334] "Generic (PLEG): container finished" podID="b0a895ed-eb3a-49d5-95fe-fe470db0eca3" containerID="2b86f871d6b1e9edd0fdced00a2742a4a87a417dac263d80085b4f868717b845" exitCode=0 Dec 03 13:16:30 crc kubenswrapper[4986]: I1203 13:16:30.622941 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-smttz" event={"ID":"b0a895ed-eb3a-49d5-95fe-fe470db0eca3","Type":"ContainerDied","Data":"2b86f871d6b1e9edd0fdced00a2742a4a87a417dac263d80085b4f868717b845"} Dec 03 13:16:30 crc kubenswrapper[4986]: I1203 13:16:30.625597 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"905d78e0-0235-400d-8004-1f612a11b60a","Type":"ContainerStarted","Data":"5bee8d0d47f40716fcfe3bd2e1752753b04bb872d821c84f94e815bf35bd25ed"} Dec 03 13:16:30 crc kubenswrapper[4986]: I1203 13:16:30.631581 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-sbp97" event={"ID":"a4bb76c2-97be-4087-aa72-70fd50a5731c","Type":"ContainerDied","Data":"a70a29f0702f2699b77df6c2ae599029a61de9f2a796e706607a4e13f359bbda"} Dec 03 13:16:30 crc kubenswrapper[4986]: I1203 13:16:30.631650 4986 scope.go:117] "RemoveContainer" containerID="8de5b8bf5d0e20e31a7bd809936729e8ae83e4c668433eee4f82b8cb2c33a96f" Dec 03 13:16:30 crc kubenswrapper[4986]: I1203 13:16:30.633421 4986 generic.go:334] "Generic (PLEG): container finished" podID="5fd53494-2eec-45b8-80f4-7b94abf29bfb" containerID="6146b68fc5c92879158178d1369408a10dfb67b770a1a9d0c50945c44329751e" exitCode=0 Dec 03 13:16:30 crc kubenswrapper[4986]: I1203 13:16:30.633464 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-7jms6" event={"ID":"5fd53494-2eec-45b8-80f4-7b94abf29bfb","Type":"ContainerDied","Data":"6146b68fc5c92879158178d1369408a10dfb67b770a1a9d0c50945c44329751e"} Dec 03 13:16:30 crc kubenswrapper[4986]: I1203 13:16:30.634401 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-sbp97" Dec 03 13:16:30 crc kubenswrapper[4986]: I1203 13:16:30.724436 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-sbp97"] Dec 03 13:16:30 crc kubenswrapper[4986]: I1203 13:16:30.731811 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-sbp97"] Dec 03 13:16:30 crc kubenswrapper[4986]: I1203 13:16:30.734650 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"40aba5da-7d4c-49e1-a054-6e6789aca293\") " pod="openstack/ovsdbserver-sb-0" Dec 03 13:16:30 crc kubenswrapper[4986]: I1203 13:16:30.734684 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/40aba5da-7d4c-49e1-a054-6e6789aca293-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"40aba5da-7d4c-49e1-a054-6e6789aca293\") " pod="openstack/ovsdbserver-sb-0" Dec 03 13:16:30 crc kubenswrapper[4986]: I1203 13:16:30.734711 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/40aba5da-7d4c-49e1-a054-6e6789aca293-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"40aba5da-7d4c-49e1-a054-6e6789aca293\") " pod="openstack/ovsdbserver-sb-0" Dec 03 13:16:30 crc kubenswrapper[4986]: I1203 13:16:30.734731 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2t9q\" (UniqueName: \"kubernetes.io/projected/40aba5da-7d4c-49e1-a054-6e6789aca293-kube-api-access-j2t9q\") pod \"ovsdbserver-sb-0\" (UID: \"40aba5da-7d4c-49e1-a054-6e6789aca293\") " pod="openstack/ovsdbserver-sb-0" Dec 03 13:16:30 crc kubenswrapper[4986]: I1203 13:16:30.734747 
4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40aba5da-7d4c-49e1-a054-6e6789aca293-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"40aba5da-7d4c-49e1-a054-6e6789aca293\") " pod="openstack/ovsdbserver-sb-0" Dec 03 13:16:30 crc kubenswrapper[4986]: I1203 13:16:30.734770 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40aba5da-7d4c-49e1-a054-6e6789aca293-config\") pod \"ovsdbserver-sb-0\" (UID: \"40aba5da-7d4c-49e1-a054-6e6789aca293\") " pod="openstack/ovsdbserver-sb-0" Dec 03 13:16:30 crc kubenswrapper[4986]: I1203 13:16:30.734838 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/40aba5da-7d4c-49e1-a054-6e6789aca293-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"40aba5da-7d4c-49e1-a054-6e6789aca293\") " pod="openstack/ovsdbserver-sb-0" Dec 03 13:16:30 crc kubenswrapper[4986]: I1203 13:16:30.735188 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/40aba5da-7d4c-49e1-a054-6e6789aca293-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"40aba5da-7d4c-49e1-a054-6e6789aca293\") " pod="openstack/ovsdbserver-sb-0" Dec 03 13:16:30 crc kubenswrapper[4986]: I1203 13:16:30.837945 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/40aba5da-7d4c-49e1-a054-6e6789aca293-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"40aba5da-7d4c-49e1-a054-6e6789aca293\") " pod="openstack/ovsdbserver-sb-0" Dec 03 13:16:30 crc kubenswrapper[4986]: I1203 13:16:30.837988 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-j2t9q\" (UniqueName: \"kubernetes.io/projected/40aba5da-7d4c-49e1-a054-6e6789aca293-kube-api-access-j2t9q\") pod \"ovsdbserver-sb-0\" (UID: \"40aba5da-7d4c-49e1-a054-6e6789aca293\") " pod="openstack/ovsdbserver-sb-0" Dec 03 13:16:30 crc kubenswrapper[4986]: I1203 13:16:30.838006 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40aba5da-7d4c-49e1-a054-6e6789aca293-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"40aba5da-7d4c-49e1-a054-6e6789aca293\") " pod="openstack/ovsdbserver-sb-0" Dec 03 13:16:30 crc kubenswrapper[4986]: I1203 13:16:30.838040 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40aba5da-7d4c-49e1-a054-6e6789aca293-config\") pod \"ovsdbserver-sb-0\" (UID: \"40aba5da-7d4c-49e1-a054-6e6789aca293\") " pod="openstack/ovsdbserver-sb-0" Dec 03 13:16:30 crc kubenswrapper[4986]: I1203 13:16:30.838091 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/40aba5da-7d4c-49e1-a054-6e6789aca293-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"40aba5da-7d4c-49e1-a054-6e6789aca293\") " pod="openstack/ovsdbserver-sb-0" Dec 03 13:16:30 crc kubenswrapper[4986]: I1203 13:16:30.838106 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/40aba5da-7d4c-49e1-a054-6e6789aca293-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"40aba5da-7d4c-49e1-a054-6e6789aca293\") " pod="openstack/ovsdbserver-sb-0" Dec 03 13:16:30 crc kubenswrapper[4986]: I1203 13:16:30.838163 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"40aba5da-7d4c-49e1-a054-6e6789aca293\") " 
pod="openstack/ovsdbserver-sb-0" Dec 03 13:16:30 crc kubenswrapper[4986]: I1203 13:16:30.838182 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/40aba5da-7d4c-49e1-a054-6e6789aca293-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"40aba5da-7d4c-49e1-a054-6e6789aca293\") " pod="openstack/ovsdbserver-sb-0" Dec 03 13:16:30 crc kubenswrapper[4986]: I1203 13:16:30.838597 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/40aba5da-7d4c-49e1-a054-6e6789aca293-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"40aba5da-7d4c-49e1-a054-6e6789aca293\") " pod="openstack/ovsdbserver-sb-0" Dec 03 13:16:30 crc kubenswrapper[4986]: I1203 13:16:30.838973 4986 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"40aba5da-7d4c-49e1-a054-6e6789aca293\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/ovsdbserver-sb-0" Dec 03 13:16:30 crc kubenswrapper[4986]: I1203 13:16:30.839098 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40aba5da-7d4c-49e1-a054-6e6789aca293-config\") pod \"ovsdbserver-sb-0\" (UID: \"40aba5da-7d4c-49e1-a054-6e6789aca293\") " pod="openstack/ovsdbserver-sb-0" Dec 03 13:16:30 crc kubenswrapper[4986]: I1203 13:16:30.839451 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/40aba5da-7d4c-49e1-a054-6e6789aca293-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"40aba5da-7d4c-49e1-a054-6e6789aca293\") " pod="openstack/ovsdbserver-sb-0" Dec 03 13:16:30 crc kubenswrapper[4986]: I1203 13:16:30.856056 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/40aba5da-7d4c-49e1-a054-6e6789aca293-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"40aba5da-7d4c-49e1-a054-6e6789aca293\") " pod="openstack/ovsdbserver-sb-0" Dec 03 13:16:30 crc kubenswrapper[4986]: I1203 13:16:30.858475 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/40aba5da-7d4c-49e1-a054-6e6789aca293-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"40aba5da-7d4c-49e1-a054-6e6789aca293\") " pod="openstack/ovsdbserver-sb-0" Dec 03 13:16:30 crc kubenswrapper[4986]: I1203 13:16:30.861429 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40aba5da-7d4c-49e1-a054-6e6789aca293-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"40aba5da-7d4c-49e1-a054-6e6789aca293\") " pod="openstack/ovsdbserver-sb-0" Dec 03 13:16:30 crc kubenswrapper[4986]: I1203 13:16:30.863664 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2t9q\" (UniqueName: \"kubernetes.io/projected/40aba5da-7d4c-49e1-a054-6e6789aca293-kube-api-access-j2t9q\") pod \"ovsdbserver-sb-0\" (UID: \"40aba5da-7d4c-49e1-a054-6e6789aca293\") " pod="openstack/ovsdbserver-sb-0" Dec 03 13:16:30 crc kubenswrapper[4986]: I1203 13:16:30.874244 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"40aba5da-7d4c-49e1-a054-6e6789aca293\") " pod="openstack/ovsdbserver-sb-0" Dec 03 13:16:30 crc kubenswrapper[4986]: I1203 13:16:30.954159 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6dcfdd67-10b7-42e4-902e-4d3c65b287dd" path="/var/lib/kubelet/pods/6dcfdd67-10b7-42e4-902e-4d3c65b287dd/volumes" Dec 03 13:16:30 crc kubenswrapper[4986]: I1203 13:16:30.954527 4986 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="a4bb76c2-97be-4087-aa72-70fd50a5731c" path="/var/lib/kubelet/pods/a4bb76c2-97be-4087-aa72-70fd50a5731c/volumes" Dec 03 13:16:30 crc kubenswrapper[4986]: I1203 13:16:30.976866 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 03 13:16:31 crc kubenswrapper[4986]: I1203 13:16:31.640603 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-45czf" event={"ID":"5510fce4-e81b-4089-a5c3-4c4b6c72d9e0","Type":"ContainerStarted","Data":"0a1b4a33baa13a7f0a995d96417df621edcf1fff37a13f98e7e143f0289bfa4d"} Dec 03 13:16:42 crc kubenswrapper[4986]: E1203 13:16:42.018611 4986 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Dec 03 13:16:42 crc kubenswrapper[4986]: E1203 13:16:42.019530 4986 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ksk2v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(50659f56-763b-4cac-9ab4-d660c7d777af): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 13:16:42 crc kubenswrapper[4986]: E1203 13:16:42.020708 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="50659f56-763b-4cac-9ab4-d660c7d777af" Dec 03 13:16:42 crc kubenswrapper[4986]: E1203 13:16:42.058918 4986 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Dec 03 13:16:42 crc kubenswrapper[4986]: E1203 13:16:42.059070 4986 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/
var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x2skp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(a0496538-1ab2-45a2-94ab-fc3474533ec3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 13:16:42 crc kubenswrapper[4986]: E1203 13:16:42.060420 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="a0496538-1ab2-45a2-94ab-fc3474533ec3" Dec 03 13:16:42 crc kubenswrapper[4986]: E1203 13:16:42.353245 4986 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified" Dec 03 13:16:42 crc kubenswrapper[4986]: E1203 13:16:42.353489 4986 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:ovsdbserver-nb,Image:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n55bh86h566hc9h56fh55ch66ch697hf7h55ch684hf9h56h564h5f8h5c9h87h597h657h548hcdh8fhcch5f4h57ch579hcfh66bh87h5c6h54bh5dcq,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-nb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qm5bc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAct
ion{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-nb-0_openstack(905d78e0-0235-400d-8004-1f612a11b60a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 13:16:42 crc kubenswrapper[4986]: E1203 13:16:42.502289 4986 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified" Dec 03 13:16:42 crc 
kubenswrapper[4986]: E1203 13:16:42.502462 4986 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-controller,Image:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,Command:[ovn-controller --pidfile unix:/run/openvswitch/db.sock --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key --ca-cert=/etc/pki/tls/certs/ovndbca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nc9hdbh67chfhfch569h57ch656h5b7h67h57ch7fh574h5fbh679h5bfh9h6dh8bh8h68h57h584h567h695h589h646h55dh5c5hfdh67bh5ddq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-ovn,ReadOnly:false,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log-ovn,ReadOnly:false,MountPath:/var/log/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr
:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g2zvf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_liveness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_readiness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/share/ovn/scripts/ovn-ctl stop_controller],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-kqn7t_openstack(d06f8249-00a2-4e59-a055-82ab737c7b92): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 13:16:42 crc kubenswrapper[4986]: E1203 13:16:42.503687 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ErrImagePull: \"rpc error: 
code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-kqn7t" podUID="d06f8249-00a2-4e59-a055-82ab737c7b92" Dec 03 13:16:42 crc kubenswrapper[4986]: E1203 13:16:42.733006 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="50659f56-763b-4cac-9ab4-d660c7d777af" Dec 03 13:16:42 crc kubenswrapper[4986]: E1203 13:16:42.734249 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="a0496538-1ab2-45a2-94ab-fc3474533ec3" Dec 03 13:16:42 crc kubenswrapper[4986]: E1203 13:16:42.734292 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified\\\"\"" pod="openstack/ovn-controller-kqn7t" podUID="d06f8249-00a2-4e59-a055-82ab737c7b92" Dec 03 13:16:43 crc kubenswrapper[4986]: I1203 13:16:43.473390 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 03 13:16:43 crc kubenswrapper[4986]: W1203 13:16:43.478370 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40aba5da_7d4c_49e1_a054_6e6789aca293.slice/crio-b4194fb6c31cde5f4431dc123e0e432a2dab8afff1a2646b7d44b4df8e686bcb WatchSource:0}: Error finding container b4194fb6c31cde5f4431dc123e0e432a2dab8afff1a2646b7d44b4df8e686bcb: Status 404 returned error can't find the container with id 
b4194fb6c31cde5f4431dc123e0e432a2dab8afff1a2646b7d44b4df8e686bcb Dec 03 13:16:43 crc kubenswrapper[4986]: I1203 13:16:43.742963 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-7jms6" event={"ID":"5fd53494-2eec-45b8-80f4-7b94abf29bfb","Type":"ContainerStarted","Data":"1c7aebcef489b547601b55edb3f24f2d5c1b32e5593d160d5c99b349cb123d5e"} Dec 03 13:16:43 crc kubenswrapper[4986]: I1203 13:16:43.743208 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-7jms6" Dec 03 13:16:43 crc kubenswrapper[4986]: I1203 13:16:43.744954 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4ea00525-8d87-4a9e-b1ce-15a73b6bedd1","Type":"ContainerStarted","Data":"2b839fea51f99e9668279fe36bf0c93319d7ee0e42824ac01703df9a4aa0ecc5"} Dec 03 13:16:43 crc kubenswrapper[4986]: I1203 13:16:43.745171 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 03 13:16:43 crc kubenswrapper[4986]: I1203 13:16:43.747845 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"40aba5da-7d4c-49e1-a054-6e6789aca293","Type":"ContainerStarted","Data":"b4194fb6c31cde5f4431dc123e0e432a2dab8afff1a2646b7d44b4df8e686bcb"} Dec 03 13:16:43 crc kubenswrapper[4986]: I1203 13:16:43.753834 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-smttz" event={"ID":"b0a895ed-eb3a-49d5-95fe-fe470db0eca3","Type":"ContainerStarted","Data":"f47690b292dcdcf383c1b41a1ae8959fbdef10e8b96bfdaf6e00f8402d5fe5fe"} Dec 03 13:16:43 crc kubenswrapper[4986]: I1203 13:16:43.753898 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-smttz" Dec 03 13:16:43 crc kubenswrapper[4986]: I1203 13:16:43.760677 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-45czf" 
event={"ID":"5510fce4-e81b-4089-a5c3-4c4b6c72d9e0","Type":"ContainerStarted","Data":"4a6f7001e2eb5a7042099b2dc2c4468bdbb80783761295910aba9161920f6611"} Dec 03 13:16:43 crc kubenswrapper[4986]: I1203 13:16:43.766283 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"cb2b35c1-7c5f-4ffc-a178-4f7ade5cc313","Type":"ContainerStarted","Data":"fe1ff9b12569423c66274c0a7afeb991843cb8ae6064a91a2d9d8565d7388369"} Dec 03 13:16:43 crc kubenswrapper[4986]: I1203 13:16:43.766629 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Dec 03 13:16:43 crc kubenswrapper[4986]: I1203 13:16:43.767657 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-7jms6" podStartSLOduration=27.767644347 podStartE2EDuration="27.767644347s" podCreationTimestamp="2025-12-03 13:16:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:16:43.762230711 +0000 UTC m=+1263.228661902" watchObservedRunningTime="2025-12-03 13:16:43.767644347 +0000 UTC m=+1263.234075538" Dec 03 13:16:43 crc kubenswrapper[4986]: I1203 13:16:43.787650 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=7.061856181 podStartE2EDuration="20.787629336s" podCreationTimestamp="2025-12-03 13:16:23 +0000 UTC" firstStartedPulling="2025-12-03 13:16:29.443751968 +0000 UTC m=+1248.910183159" lastFinishedPulling="2025-12-03 13:16:43.169525123 +0000 UTC m=+1262.635956314" observedRunningTime="2025-12-03 13:16:43.775542241 +0000 UTC m=+1263.241973442" watchObservedRunningTime="2025-12-03 13:16:43.787629336 +0000 UTC m=+1263.254060537" Dec 03 13:16:43 crc kubenswrapper[4986]: I1203 13:16:43.816158 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-smttz" 
podStartSLOduration=27.816141255 podStartE2EDuration="27.816141255s" podCreationTimestamp="2025-12-03 13:16:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:16:43.810639807 +0000 UTC m=+1263.277071018" watchObservedRunningTime="2025-12-03 13:16:43.816141255 +0000 UTC m=+1263.282572446" Dec 03 13:16:43 crc kubenswrapper[4986]: I1203 13:16:43.835838 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=9.697039388 podStartE2EDuration="22.835814736s" podCreationTimestamp="2025-12-03 13:16:21 +0000 UTC" firstStartedPulling="2025-12-03 13:16:29.348313526 +0000 UTC m=+1248.814744717" lastFinishedPulling="2025-12-03 13:16:42.487088874 +0000 UTC m=+1261.953520065" observedRunningTime="2025-12-03 13:16:43.828019695 +0000 UTC m=+1263.294450886" watchObservedRunningTime="2025-12-03 13:16:43.835814736 +0000 UTC m=+1263.302245927" Dec 03 13:16:44 crc kubenswrapper[4986]: I1203 13:16:44.773910 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5e8a62bd-1e92-464d-b905-8eb18cc44646","Type":"ContainerStarted","Data":"920c3ace47a148c8d0728716a7fd4dea74b6788fc8e1d4cbff7078089dbef14c"} Dec 03 13:16:44 crc kubenswrapper[4986]: I1203 13:16:44.775086 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"285bb825-5a17-4d45-87d6-852513d0351b","Type":"ContainerStarted","Data":"42ee92ce1ede62523add70392662fba98598b4a99ffb08d37172d9fc355676d5"} Dec 03 13:16:44 crc kubenswrapper[4986]: I1203 13:16:44.777522 4986 generic.go:334] "Generic (PLEG): container finished" podID="5510fce4-e81b-4089-a5c3-4c4b6c72d9e0" containerID="4a6f7001e2eb5a7042099b2dc2c4468bdbb80783761295910aba9161920f6611" exitCode=0 Dec 03 13:16:44 crc kubenswrapper[4986]: I1203 13:16:44.793109 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-ovs-45czf" event={"ID":"5510fce4-e81b-4089-a5c3-4c4b6c72d9e0","Type":"ContainerDied","Data":"4a6f7001e2eb5a7042099b2dc2c4468bdbb80783761295910aba9161920f6611"} Dec 03 13:16:46 crc kubenswrapper[4986]: E1203 13:16:46.234539 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-nb-0" podUID="905d78e0-0235-400d-8004-1f612a11b60a" Dec 03 13:16:46 crc kubenswrapper[4986]: I1203 13:16:46.797263 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-45czf" event={"ID":"5510fce4-e81b-4089-a5c3-4c4b6c72d9e0","Type":"ContainerStarted","Data":"de0780be6f01e6f4e7b627f7919feaab0f311a0b1be9239145e5258276731704"} Dec 03 13:16:46 crc kubenswrapper[4986]: I1203 13:16:46.797602 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-45czf" event={"ID":"5510fce4-e81b-4089-a5c3-4c4b6c72d9e0","Type":"ContainerStarted","Data":"37cca81af6f4fd58fb5b68822891bc89722bfbd1cd6e2256cb3cd4e196824aec"} Dec 03 13:16:46 crc kubenswrapper[4986]: I1203 13:16:46.797905 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-45czf" Dec 03 13:16:46 crc kubenswrapper[4986]: I1203 13:16:46.800200 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"905d78e0-0235-400d-8004-1f612a11b60a","Type":"ContainerStarted","Data":"d14e9510b88c0c9c79ea78951604528c62048deac6eb88a43c0254a9d54f1f12"} Dec 03 13:16:46 crc kubenswrapper[4986]: E1203 13:16:46.802345 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" 
podUID="905d78e0-0235-400d-8004-1f612a11b60a" Dec 03 13:16:46 crc kubenswrapper[4986]: I1203 13:16:46.802919 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"40aba5da-7d4c-49e1-a054-6e6789aca293","Type":"ContainerStarted","Data":"0dc152d4f8b0bc6fc33be868af214932631038a55d1c3719160c277457ccd6fe"} Dec 03 13:16:46 crc kubenswrapper[4986]: I1203 13:16:46.802949 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"40aba5da-7d4c-49e1-a054-6e6789aca293","Type":"ContainerStarted","Data":"7050d16fdf79df46ccad15d507b97f284ffd913401d4f245c67cc94bb0cbf23a"} Dec 03 13:16:46 crc kubenswrapper[4986]: I1203 13:16:46.819846 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-45czf" podStartSLOduration=8.43790208 podStartE2EDuration="20.819831704s" podCreationTimestamp="2025-12-03 13:16:26 +0000 UTC" firstStartedPulling="2025-12-03 13:16:30.621067449 +0000 UTC m=+1250.087498640" lastFinishedPulling="2025-12-03 13:16:43.002997073 +0000 UTC m=+1262.469428264" observedRunningTime="2025-12-03 13:16:46.816979477 +0000 UTC m=+1266.283410668" watchObservedRunningTime="2025-12-03 13:16:46.819831704 +0000 UTC m=+1266.286262895" Dec 03 13:16:46 crc kubenswrapper[4986]: I1203 13:16:46.851877 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=15.326886006 podStartE2EDuration="17.851856838s" podCreationTimestamp="2025-12-03 13:16:29 +0000 UTC" firstStartedPulling="2025-12-03 13:16:43.483115087 +0000 UTC m=+1262.949546278" lastFinishedPulling="2025-12-03 13:16:46.008085919 +0000 UTC m=+1265.474517110" observedRunningTime="2025-12-03 13:16:46.842513425 +0000 UTC m=+1266.308944626" watchObservedRunningTime="2025-12-03 13:16:46.851856838 +0000 UTC m=+1266.318288039" Dec 03 13:16:47 crc kubenswrapper[4986]: I1203 13:16:47.812060 4986 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/ovn-controller-ovs-45czf" Dec 03 13:16:47 crc kubenswrapper[4986]: E1203 13:16:47.813919 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="905d78e0-0235-400d-8004-1f612a11b60a" Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.119781 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-nm5xm"] Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.121182 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-nm5xm" Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.126656 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.146681 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-nm5xm"] Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.290551 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-smttz"] Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.290775 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-smttz" podUID="b0a895ed-eb3a-49d5-95fe-fe470db0eca3" containerName="dnsmasq-dns" containerID="cri-o://f47690b292dcdcf383c1b41a1ae8959fbdef10e8b96bfdaf6e00f8402d5fe5fe" gracePeriod=10 Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.291457 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-smttz" Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.309948 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0798ce7a-8eef-4450-900d-d89e2ab41858-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-nm5xm\" (UID: \"0798ce7a-8eef-4450-900d-d89e2ab41858\") " pod="openstack/ovn-controller-metrics-nm5xm" Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.310502 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0798ce7a-8eef-4450-900d-d89e2ab41858-combined-ca-bundle\") pod \"ovn-controller-metrics-nm5xm\" (UID: \"0798ce7a-8eef-4450-900d-d89e2ab41858\") " pod="openstack/ovn-controller-metrics-nm5xm" Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.310554 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0798ce7a-8eef-4450-900d-d89e2ab41858-config\") pod \"ovn-controller-metrics-nm5xm\" (UID: \"0798ce7a-8eef-4450-900d-d89e2ab41858\") " pod="openstack/ovn-controller-metrics-nm5xm" Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.310619 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrzh8\" (UniqueName: \"kubernetes.io/projected/0798ce7a-8eef-4450-900d-d89e2ab41858-kube-api-access-qrzh8\") pod \"ovn-controller-metrics-nm5xm\" (UID: \"0798ce7a-8eef-4450-900d-d89e2ab41858\") " pod="openstack/ovn-controller-metrics-nm5xm" Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.310645 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/0798ce7a-8eef-4450-900d-d89e2ab41858-ovs-rundir\") pod \"ovn-controller-metrics-nm5xm\" (UID: \"0798ce7a-8eef-4450-900d-d89e2ab41858\") " pod="openstack/ovn-controller-metrics-nm5xm" Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.310665 4986 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/0798ce7a-8eef-4450-900d-d89e2ab41858-ovn-rundir\") pod \"ovn-controller-metrics-nm5xm\" (UID: \"0798ce7a-8eef-4450-900d-d89e2ab41858\") " pod="openstack/ovn-controller-metrics-nm5xm" Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.338184 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-kgc2w"] Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.341102 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-kgc2w" Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.342716 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.353380 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-kgc2w"] Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.415203 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0798ce7a-8eef-4450-900d-d89e2ab41858-combined-ca-bundle\") pod \"ovn-controller-metrics-nm5xm\" (UID: \"0798ce7a-8eef-4450-900d-d89e2ab41858\") " pod="openstack/ovn-controller-metrics-nm5xm" Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.415286 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0798ce7a-8eef-4450-900d-d89e2ab41858-config\") pod \"ovn-controller-metrics-nm5xm\" (UID: \"0798ce7a-8eef-4450-900d-d89e2ab41858\") " pod="openstack/ovn-controller-metrics-nm5xm" Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.415398 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrzh8\" (UniqueName: 
\"kubernetes.io/projected/0798ce7a-8eef-4450-900d-d89e2ab41858-kube-api-access-qrzh8\") pod \"ovn-controller-metrics-nm5xm\" (UID: \"0798ce7a-8eef-4450-900d-d89e2ab41858\") " pod="openstack/ovn-controller-metrics-nm5xm" Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.415434 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/0798ce7a-8eef-4450-900d-d89e2ab41858-ovs-rundir\") pod \"ovn-controller-metrics-nm5xm\" (UID: \"0798ce7a-8eef-4450-900d-d89e2ab41858\") " pod="openstack/ovn-controller-metrics-nm5xm" Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.415460 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/0798ce7a-8eef-4450-900d-d89e2ab41858-ovn-rundir\") pod \"ovn-controller-metrics-nm5xm\" (UID: \"0798ce7a-8eef-4450-900d-d89e2ab41858\") " pod="openstack/ovn-controller-metrics-nm5xm" Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.415492 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0798ce7a-8eef-4450-900d-d89e2ab41858-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-nm5xm\" (UID: \"0798ce7a-8eef-4450-900d-d89e2ab41858\") " pod="openstack/ovn-controller-metrics-nm5xm" Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.419005 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/0798ce7a-8eef-4450-900d-d89e2ab41858-ovs-rundir\") pod \"ovn-controller-metrics-nm5xm\" (UID: \"0798ce7a-8eef-4450-900d-d89e2ab41858\") " pod="openstack/ovn-controller-metrics-nm5xm" Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.419050 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/0798ce7a-8eef-4450-900d-d89e2ab41858-ovn-rundir\") 
pod \"ovn-controller-metrics-nm5xm\" (UID: \"0798ce7a-8eef-4450-900d-d89e2ab41858\") " pod="openstack/ovn-controller-metrics-nm5xm" Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.419654 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0798ce7a-8eef-4450-900d-d89e2ab41858-config\") pod \"ovn-controller-metrics-nm5xm\" (UID: \"0798ce7a-8eef-4450-900d-d89e2ab41858\") " pod="openstack/ovn-controller-metrics-nm5xm" Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.420841 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0798ce7a-8eef-4450-900d-d89e2ab41858-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-nm5xm\" (UID: \"0798ce7a-8eef-4450-900d-d89e2ab41858\") " pod="openstack/ovn-controller-metrics-nm5xm" Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.422923 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0798ce7a-8eef-4450-900d-d89e2ab41858-combined-ca-bundle\") pod \"ovn-controller-metrics-nm5xm\" (UID: \"0798ce7a-8eef-4450-900d-d89e2ab41858\") " pod="openstack/ovn-controller-metrics-nm5xm" Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.475600 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrzh8\" (UniqueName: \"kubernetes.io/projected/0798ce7a-8eef-4450-900d-d89e2ab41858-kube-api-access-qrzh8\") pod \"ovn-controller-metrics-nm5xm\" (UID: \"0798ce7a-8eef-4450-900d-d89e2ab41858\") " pod="openstack/ovn-controller-metrics-nm5xm" Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.521054 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1c17ad3-8eb8-429d-afd6-ac7135f24b4a-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-kgc2w\" (UID: 
\"f1c17ad3-8eb8-429d-afd6-ac7135f24b4a\") " pod="openstack/dnsmasq-dns-7f896c8c65-kgc2w" Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.521120 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq8bj\" (UniqueName: \"kubernetes.io/projected/f1c17ad3-8eb8-429d-afd6-ac7135f24b4a-kube-api-access-tq8bj\") pod \"dnsmasq-dns-7f896c8c65-kgc2w\" (UID: \"f1c17ad3-8eb8-429d-afd6-ac7135f24b4a\") " pod="openstack/dnsmasq-dns-7f896c8c65-kgc2w" Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.521168 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f1c17ad3-8eb8-429d-afd6-ac7135f24b4a-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-kgc2w\" (UID: \"f1c17ad3-8eb8-429d-afd6-ac7135f24b4a\") " pod="openstack/dnsmasq-dns-7f896c8c65-kgc2w" Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.521209 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1c17ad3-8eb8-429d-afd6-ac7135f24b4a-config\") pod \"dnsmasq-dns-7f896c8c65-kgc2w\" (UID: \"f1c17ad3-8eb8-429d-afd6-ac7135f24b4a\") " pod="openstack/dnsmasq-dns-7f896c8c65-kgc2w" Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.585556 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-7jms6"] Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.585762 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-7jms6" podUID="5fd53494-2eec-45b8-80f4-7b94abf29bfb" containerName="dnsmasq-dns" containerID="cri-o://1c7aebcef489b547601b55edb3f24f2d5c1b32e5593d160d5c99b349cb123d5e" gracePeriod=10 Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.587450 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/dnsmasq-dns-666b6646f7-7jms6" Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.622639 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1c17ad3-8eb8-429d-afd6-ac7135f24b4a-config\") pod \"dnsmasq-dns-7f896c8c65-kgc2w\" (UID: \"f1c17ad3-8eb8-429d-afd6-ac7135f24b4a\") " pod="openstack/dnsmasq-dns-7f896c8c65-kgc2w" Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.622706 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1c17ad3-8eb8-429d-afd6-ac7135f24b4a-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-kgc2w\" (UID: \"f1c17ad3-8eb8-429d-afd6-ac7135f24b4a\") " pod="openstack/dnsmasq-dns-7f896c8c65-kgc2w" Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.622746 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tq8bj\" (UniqueName: \"kubernetes.io/projected/f1c17ad3-8eb8-429d-afd6-ac7135f24b4a-kube-api-access-tq8bj\") pod \"dnsmasq-dns-7f896c8c65-kgc2w\" (UID: \"f1c17ad3-8eb8-429d-afd6-ac7135f24b4a\") " pod="openstack/dnsmasq-dns-7f896c8c65-kgc2w" Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.622789 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f1c17ad3-8eb8-429d-afd6-ac7135f24b4a-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-kgc2w\" (UID: \"f1c17ad3-8eb8-429d-afd6-ac7135f24b4a\") " pod="openstack/dnsmasq-dns-7f896c8c65-kgc2w" Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.623653 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f1c17ad3-8eb8-429d-afd6-ac7135f24b4a-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-kgc2w\" (UID: \"f1c17ad3-8eb8-429d-afd6-ac7135f24b4a\") " pod="openstack/dnsmasq-dns-7f896c8c65-kgc2w" Dec 03 13:16:48 crc 
kubenswrapper[4986]: I1203 13:16:48.624153 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1c17ad3-8eb8-429d-afd6-ac7135f24b4a-config\") pod \"dnsmasq-dns-7f896c8c65-kgc2w\" (UID: \"f1c17ad3-8eb8-429d-afd6-ac7135f24b4a\") " pod="openstack/dnsmasq-dns-7f896c8c65-kgc2w" Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.624559 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1c17ad3-8eb8-429d-afd6-ac7135f24b4a-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-kgc2w\" (UID: \"f1c17ad3-8eb8-429d-afd6-ac7135f24b4a\") " pod="openstack/dnsmasq-dns-7f896c8c65-kgc2w" Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.629762 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-2dwn7"] Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.631226 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-2dwn7" Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.634610 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.655107 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tq8bj\" (UniqueName: \"kubernetes.io/projected/f1c17ad3-8eb8-429d-afd6-ac7135f24b4a-kube-api-access-tq8bj\") pod \"dnsmasq-dns-7f896c8c65-kgc2w\" (UID: \"f1c17ad3-8eb8-429d-afd6-ac7135f24b4a\") " pod="openstack/dnsmasq-dns-7f896c8c65-kgc2w" Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.668301 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-2dwn7"] Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.700080 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-kgc2w" Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.767166 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-nm5xm" Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.822245 4986 generic.go:334] "Generic (PLEG): container finished" podID="5fd53494-2eec-45b8-80f4-7b94abf29bfb" containerID="1c7aebcef489b547601b55edb3f24f2d5c1b32e5593d160d5c99b349cb123d5e" exitCode=0 Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.822315 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-7jms6" event={"ID":"5fd53494-2eec-45b8-80f4-7b94abf29bfb","Type":"ContainerDied","Data":"1c7aebcef489b547601b55edb3f24f2d5c1b32e5593d160d5c99b349cb123d5e"} Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.825143 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84767b4d-e93f-41d9-a7e3-795def131772-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-2dwn7\" (UID: \"84767b4d-e93f-41d9-a7e3-795def131772\") " pod="openstack/dnsmasq-dns-86db49b7ff-2dwn7" Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.825203 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sshdg\" (UniqueName: \"kubernetes.io/projected/84767b4d-e93f-41d9-a7e3-795def131772-kube-api-access-sshdg\") pod \"dnsmasq-dns-86db49b7ff-2dwn7\" (UID: \"84767b4d-e93f-41d9-a7e3-795def131772\") " pod="openstack/dnsmasq-dns-86db49b7ff-2dwn7" Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.825421 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/84767b4d-e93f-41d9-a7e3-795def131772-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-2dwn7\" (UID: \"84767b4d-e93f-41d9-a7e3-795def131772\") 
" pod="openstack/dnsmasq-dns-86db49b7ff-2dwn7" Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.825531 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/84767b4d-e93f-41d9-a7e3-795def131772-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-2dwn7\" (UID: \"84767b4d-e93f-41d9-a7e3-795def131772\") " pod="openstack/dnsmasq-dns-86db49b7ff-2dwn7" Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.825617 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84767b4d-e93f-41d9-a7e3-795def131772-config\") pod \"dnsmasq-dns-86db49b7ff-2dwn7\" (UID: \"84767b4d-e93f-41d9-a7e3-795def131772\") " pod="openstack/dnsmasq-dns-86db49b7ff-2dwn7" Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.825915 4986 generic.go:334] "Generic (PLEG): container finished" podID="b0a895ed-eb3a-49d5-95fe-fe470db0eca3" containerID="f47690b292dcdcf383c1b41a1ae8959fbdef10e8b96bfdaf6e00f8402d5fe5fe" exitCode=0 Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.826285 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-smttz" event={"ID":"b0a895ed-eb3a-49d5-95fe-fe470db0eca3","Type":"ContainerDied","Data":"f47690b292dcdcf383c1b41a1ae8959fbdef10e8b96bfdaf6e00f8402d5fe5fe"} Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.838508 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-smttz" Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.927464 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84767b4d-e93f-41d9-a7e3-795def131772-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-2dwn7\" (UID: \"84767b4d-e93f-41d9-a7e3-795def131772\") " pod="openstack/dnsmasq-dns-86db49b7ff-2dwn7" Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.927516 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sshdg\" (UniqueName: \"kubernetes.io/projected/84767b4d-e93f-41d9-a7e3-795def131772-kube-api-access-sshdg\") pod \"dnsmasq-dns-86db49b7ff-2dwn7\" (UID: \"84767b4d-e93f-41d9-a7e3-795def131772\") " pod="openstack/dnsmasq-dns-86db49b7ff-2dwn7" Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.927585 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/84767b4d-e93f-41d9-a7e3-795def131772-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-2dwn7\" (UID: \"84767b4d-e93f-41d9-a7e3-795def131772\") " pod="openstack/dnsmasq-dns-86db49b7ff-2dwn7" Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.927624 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/84767b4d-e93f-41d9-a7e3-795def131772-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-2dwn7\" (UID: \"84767b4d-e93f-41d9-a7e3-795def131772\") " pod="openstack/dnsmasq-dns-86db49b7ff-2dwn7" Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.927661 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84767b4d-e93f-41d9-a7e3-795def131772-config\") pod \"dnsmasq-dns-86db49b7ff-2dwn7\" (UID: \"84767b4d-e93f-41d9-a7e3-795def131772\") " pod="openstack/dnsmasq-dns-86db49b7ff-2dwn7" 
Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.928663 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84767b4d-e93f-41d9-a7e3-795def131772-config\") pod \"dnsmasq-dns-86db49b7ff-2dwn7\" (UID: \"84767b4d-e93f-41d9-a7e3-795def131772\") " pod="openstack/dnsmasq-dns-86db49b7ff-2dwn7" Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.929213 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/84767b4d-e93f-41d9-a7e3-795def131772-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-2dwn7\" (UID: \"84767b4d-e93f-41d9-a7e3-795def131772\") " pod="openstack/dnsmasq-dns-86db49b7ff-2dwn7" Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.929521 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/84767b4d-e93f-41d9-a7e3-795def131772-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-2dwn7\" (UID: \"84767b4d-e93f-41d9-a7e3-795def131772\") " pod="openstack/dnsmasq-dns-86db49b7ff-2dwn7" Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.929769 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84767b4d-e93f-41d9-a7e3-795def131772-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-2dwn7\" (UID: \"84767b4d-e93f-41d9-a7e3-795def131772\") " pod="openstack/dnsmasq-dns-86db49b7ff-2dwn7" Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.946749 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sshdg\" (UniqueName: \"kubernetes.io/projected/84767b4d-e93f-41d9-a7e3-795def131772-kube-api-access-sshdg\") pod \"dnsmasq-dns-86db49b7ff-2dwn7\" (UID: \"84767b4d-e93f-41d9-a7e3-795def131772\") " pod="openstack/dnsmasq-dns-86db49b7ff-2dwn7" Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.976093 4986 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-7jms6" Dec 03 13:16:48 crc kubenswrapper[4986]: I1203 13:16:48.977387 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Dec 03 13:16:49 crc kubenswrapper[4986]: I1203 13:16:49.029407 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bkk5\" (UniqueName: \"kubernetes.io/projected/b0a895ed-eb3a-49d5-95fe-fe470db0eca3-kube-api-access-4bkk5\") pod \"b0a895ed-eb3a-49d5-95fe-fe470db0eca3\" (UID: \"b0a895ed-eb3a-49d5-95fe-fe470db0eca3\") " Dec 03 13:16:49 crc kubenswrapper[4986]: I1203 13:16:49.029646 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b0a895ed-eb3a-49d5-95fe-fe470db0eca3-dns-svc\") pod \"b0a895ed-eb3a-49d5-95fe-fe470db0eca3\" (UID: \"b0a895ed-eb3a-49d5-95fe-fe470db0eca3\") " Dec 03 13:16:49 crc kubenswrapper[4986]: I1203 13:16:49.029699 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0a895ed-eb3a-49d5-95fe-fe470db0eca3-config\") pod \"b0a895ed-eb3a-49d5-95fe-fe470db0eca3\" (UID: \"b0a895ed-eb3a-49d5-95fe-fe470db0eca3\") " Dec 03 13:16:49 crc kubenswrapper[4986]: I1203 13:16:49.033658 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0a895ed-eb3a-49d5-95fe-fe470db0eca3-kube-api-access-4bkk5" (OuterVolumeSpecName: "kube-api-access-4bkk5") pod "b0a895ed-eb3a-49d5-95fe-fe470db0eca3" (UID: "b0a895ed-eb3a-49d5-95fe-fe470db0eca3"). InnerVolumeSpecName "kube-api-access-4bkk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:16:49 crc kubenswrapper[4986]: I1203 13:16:49.037136 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-2dwn7" Dec 03 13:16:49 crc kubenswrapper[4986]: I1203 13:16:49.070765 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0a895ed-eb3a-49d5-95fe-fe470db0eca3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b0a895ed-eb3a-49d5-95fe-fe470db0eca3" (UID: "b0a895ed-eb3a-49d5-95fe-fe470db0eca3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:16:49 crc kubenswrapper[4986]: I1203 13:16:49.077681 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0a895ed-eb3a-49d5-95fe-fe470db0eca3-config" (OuterVolumeSpecName: "config") pod "b0a895ed-eb3a-49d5-95fe-fe470db0eca3" (UID: "b0a895ed-eb3a-49d5-95fe-fe470db0eca3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:16:49 crc kubenswrapper[4986]: I1203 13:16:49.131113 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fd53494-2eec-45b8-80f4-7b94abf29bfb-config\") pod \"5fd53494-2eec-45b8-80f4-7b94abf29bfb\" (UID: \"5fd53494-2eec-45b8-80f4-7b94abf29bfb\") " Dec 03 13:16:49 crc kubenswrapper[4986]: I1203 13:16:49.131187 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lcgt\" (UniqueName: \"kubernetes.io/projected/5fd53494-2eec-45b8-80f4-7b94abf29bfb-kube-api-access-6lcgt\") pod \"5fd53494-2eec-45b8-80f4-7b94abf29bfb\" (UID: \"5fd53494-2eec-45b8-80f4-7b94abf29bfb\") " Dec 03 13:16:49 crc kubenswrapper[4986]: I1203 13:16:49.131245 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5fd53494-2eec-45b8-80f4-7b94abf29bfb-dns-svc\") pod \"5fd53494-2eec-45b8-80f4-7b94abf29bfb\" (UID: \"5fd53494-2eec-45b8-80f4-7b94abf29bfb\") " Dec 03 13:16:49 crc kubenswrapper[4986]: 
I1203 13:16:49.131806 4986 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b0a895ed-eb3a-49d5-95fe-fe470db0eca3-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 13:16:49 crc kubenswrapper[4986]: I1203 13:16:49.131833 4986 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0a895ed-eb3a-49d5-95fe-fe470db0eca3-config\") on node \"crc\" DevicePath \"\"" Dec 03 13:16:49 crc kubenswrapper[4986]: I1203 13:16:49.131843 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bkk5\" (UniqueName: \"kubernetes.io/projected/b0a895ed-eb3a-49d5-95fe-fe470db0eca3-kube-api-access-4bkk5\") on node \"crc\" DevicePath \"\"" Dec 03 13:16:49 crc kubenswrapper[4986]: I1203 13:16:49.137977 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fd53494-2eec-45b8-80f4-7b94abf29bfb-kube-api-access-6lcgt" (OuterVolumeSpecName: "kube-api-access-6lcgt") pod "5fd53494-2eec-45b8-80f4-7b94abf29bfb" (UID: "5fd53494-2eec-45b8-80f4-7b94abf29bfb"). InnerVolumeSpecName "kube-api-access-6lcgt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:16:49 crc kubenswrapper[4986]: I1203 13:16:49.177020 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fd53494-2eec-45b8-80f4-7b94abf29bfb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5fd53494-2eec-45b8-80f4-7b94abf29bfb" (UID: "5fd53494-2eec-45b8-80f4-7b94abf29bfb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:16:49 crc kubenswrapper[4986]: I1203 13:16:49.177156 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fd53494-2eec-45b8-80f4-7b94abf29bfb-config" (OuterVolumeSpecName: "config") pod "5fd53494-2eec-45b8-80f4-7b94abf29bfb" (UID: "5fd53494-2eec-45b8-80f4-7b94abf29bfb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:16:49 crc kubenswrapper[4986]: I1203 13:16:49.233889 4986 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fd53494-2eec-45b8-80f4-7b94abf29bfb-config\") on node \"crc\" DevicePath \"\"" Dec 03 13:16:49 crc kubenswrapper[4986]: I1203 13:16:49.233920 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lcgt\" (UniqueName: \"kubernetes.io/projected/5fd53494-2eec-45b8-80f4-7b94abf29bfb-kube-api-access-6lcgt\") on node \"crc\" DevicePath \"\"" Dec 03 13:16:49 crc kubenswrapper[4986]: I1203 13:16:49.233955 4986 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5fd53494-2eec-45b8-80f4-7b94abf29bfb-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 13:16:49 crc kubenswrapper[4986]: I1203 13:16:49.243361 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-kgc2w"] Dec 03 13:16:49 crc kubenswrapper[4986]: W1203 13:16:49.246905 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1c17ad3_8eb8_429d_afd6_ac7135f24b4a.slice/crio-0a9a9706939a771d4ff5987e420a0fd2ca3a61338cf802b34feb2298837b587e WatchSource:0}: Error finding container 0a9a9706939a771d4ff5987e420a0fd2ca3a61338cf802b34feb2298837b587e: Status 404 returned error can't find the container with id 0a9a9706939a771d4ff5987e420a0fd2ca3a61338cf802b34feb2298837b587e Dec 03 13:16:49 crc kubenswrapper[4986]: I1203 13:16:49.345562 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-nm5xm"] Dec 03 13:16:49 crc kubenswrapper[4986]: W1203 13:16:49.346216 4986 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0798ce7a_8eef_4450_900d_d89e2ab41858.slice/crio-85dcd8ad82b303426170029f98a58fa755e9266652f9ebfb8daa913a219f4371 WatchSource:0}: Error finding container 85dcd8ad82b303426170029f98a58fa755e9266652f9ebfb8daa913a219f4371: Status 404 returned error can't find the container with id 85dcd8ad82b303426170029f98a58fa755e9266652f9ebfb8daa913a219f4371 Dec 03 13:16:49 crc kubenswrapper[4986]: I1203 13:16:49.457613 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-2dwn7"] Dec 03 13:16:49 crc kubenswrapper[4986]: W1203 13:16:49.468103 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84767b4d_e93f_41d9_a7e3_795def131772.slice/crio-073e9b29202a69b000d3d746b9e0cc4aa6cc7c5ca5d6d200fabd7020ef67dfbf WatchSource:0}: Error finding container 073e9b29202a69b000d3d746b9e0cc4aa6cc7c5ca5d6d200fabd7020ef67dfbf: Status 404 returned error can't find the container with id 073e9b29202a69b000d3d746b9e0cc4aa6cc7c5ca5d6d200fabd7020ef67dfbf Dec 03 13:16:49 crc kubenswrapper[4986]: I1203 13:16:49.838593 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-7jms6" event={"ID":"5fd53494-2eec-45b8-80f4-7b94abf29bfb","Type":"ContainerDied","Data":"d018a12a0a3ddc8f550dbb32d955d6b1dc1d052d776a51a6507515595ae58fef"} Dec 03 13:16:49 crc kubenswrapper[4986]: I1203 13:16:49.838921 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-7jms6" Dec 03 13:16:49 crc kubenswrapper[4986]: I1203 13:16:49.839341 4986 scope.go:117] "RemoveContainer" containerID="1c7aebcef489b547601b55edb3f24f2d5c1b32e5593d160d5c99b349cb123d5e" Dec 03 13:16:49 crc kubenswrapper[4986]: I1203 13:16:49.841890 4986 generic.go:334] "Generic (PLEG): container finished" podID="84767b4d-e93f-41d9-a7e3-795def131772" containerID="603543b1a680b0caa9e5848c5cceea5803e2e00298e5d397d530277f8bf518ea" exitCode=0 Dec 03 13:16:49 crc kubenswrapper[4986]: I1203 13:16:49.841972 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-2dwn7" event={"ID":"84767b4d-e93f-41d9-a7e3-795def131772","Type":"ContainerDied","Data":"603543b1a680b0caa9e5848c5cceea5803e2e00298e5d397d530277f8bf518ea"} Dec 03 13:16:49 crc kubenswrapper[4986]: I1203 13:16:49.842002 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-2dwn7" event={"ID":"84767b4d-e93f-41d9-a7e3-795def131772","Type":"ContainerStarted","Data":"073e9b29202a69b000d3d746b9e0cc4aa6cc7c5ca5d6d200fabd7020ef67dfbf"} Dec 03 13:16:49 crc kubenswrapper[4986]: I1203 13:16:49.844153 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-nm5xm" event={"ID":"0798ce7a-8eef-4450-900d-d89e2ab41858","Type":"ContainerStarted","Data":"9e2e2302c85adef9b16a4416b18905d8d8431ec12d213e9bf36e500c14d0ad5e"} Dec 03 13:16:49 crc kubenswrapper[4986]: I1203 13:16:49.844232 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-nm5xm" event={"ID":"0798ce7a-8eef-4450-900d-d89e2ab41858","Type":"ContainerStarted","Data":"85dcd8ad82b303426170029f98a58fa755e9266652f9ebfb8daa913a219f4371"} Dec 03 13:16:49 crc kubenswrapper[4986]: I1203 13:16:49.849240 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-smttz" 
event={"ID":"b0a895ed-eb3a-49d5-95fe-fe470db0eca3","Type":"ContainerDied","Data":"8b8fe1afc59d5de40601b7395fd1149674d1fb559213a95cbb85d7974a6d280e"} Dec 03 13:16:49 crc kubenswrapper[4986]: I1203 13:16:49.849381 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-smttz" Dec 03 13:16:49 crc kubenswrapper[4986]: I1203 13:16:49.851379 4986 generic.go:334] "Generic (PLEG): container finished" podID="f1c17ad3-8eb8-429d-afd6-ac7135f24b4a" containerID="421818a60f44605b9b746518d35de5d7105048f493fbefed3e5df88142b445f0" exitCode=0 Dec 03 13:16:49 crc kubenswrapper[4986]: I1203 13:16:49.851516 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-kgc2w" event={"ID":"f1c17ad3-8eb8-429d-afd6-ac7135f24b4a","Type":"ContainerDied","Data":"421818a60f44605b9b746518d35de5d7105048f493fbefed3e5df88142b445f0"} Dec 03 13:16:49 crc kubenswrapper[4986]: I1203 13:16:49.851573 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-kgc2w" event={"ID":"f1c17ad3-8eb8-429d-afd6-ac7135f24b4a","Type":"ContainerStarted","Data":"0a9a9706939a771d4ff5987e420a0fd2ca3a61338cf802b34feb2298837b587e"} Dec 03 13:16:49 crc kubenswrapper[4986]: I1203 13:16:49.864591 4986 scope.go:117] "RemoveContainer" containerID="6146b68fc5c92879158178d1369408a10dfb67b770a1a9d0c50945c44329751e" Dec 03 13:16:49 crc kubenswrapper[4986]: I1203 13:16:49.877774 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-nm5xm" podStartSLOduration=1.877755605 podStartE2EDuration="1.877755605s" podCreationTimestamp="2025-12-03 13:16:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:16:49.868547166 +0000 UTC m=+1269.334978367" watchObservedRunningTime="2025-12-03 13:16:49.877755605 +0000 UTC m=+1269.344186806" Dec 03 13:16:49 crc 
kubenswrapper[4986]: I1203 13:16:49.918635 4986 scope.go:117] "RemoveContainer" containerID="f47690b292dcdcf383c1b41a1ae8959fbdef10e8b96bfdaf6e00f8402d5fe5fe" Dec 03 13:16:49 crc kubenswrapper[4986]: I1203 13:16:49.963106 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-7jms6"] Dec 03 13:16:49 crc kubenswrapper[4986]: I1203 13:16:49.965097 4986 scope.go:117] "RemoveContainer" containerID="2b86f871d6b1e9edd0fdced00a2742a4a87a417dac263d80085b4f868717b845" Dec 03 13:16:49 crc kubenswrapper[4986]: I1203 13:16:49.978048 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-7jms6"] Dec 03 13:16:49 crc kubenswrapper[4986]: I1203 13:16:49.988056 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-smttz"] Dec 03 13:16:49 crc kubenswrapper[4986]: I1203 13:16:49.994594 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-smttz"] Dec 03 13:16:50 crc kubenswrapper[4986]: I1203 13:16:50.864757 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-kgc2w" event={"ID":"f1c17ad3-8eb8-429d-afd6-ac7135f24b4a","Type":"ContainerStarted","Data":"698c470fb9b32a1eb83707a21d64e144815eb224820f9d502bef562fa5008fc3"} Dec 03 13:16:50 crc kubenswrapper[4986]: I1203 13:16:50.865187 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7f896c8c65-kgc2w" Dec 03 13:16:50 crc kubenswrapper[4986]: I1203 13:16:50.870516 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-2dwn7" event={"ID":"84767b4d-e93f-41d9-a7e3-795def131772","Type":"ContainerStarted","Data":"2f8b0a490b4b3d4508218fe6a139bd02292710da2067c2999f42ca14a42fe761"} Dec 03 13:16:50 crc kubenswrapper[4986]: I1203 13:16:50.870694 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-2dwn7" Dec 03 13:16:50 crc 
kubenswrapper[4986]: I1203 13:16:50.904433 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7f896c8c65-kgc2w" podStartSLOduration=2.904406653 podStartE2EDuration="2.904406653s" podCreationTimestamp="2025-12-03 13:16:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:16:50.889187392 +0000 UTC m=+1270.355618613" watchObservedRunningTime="2025-12-03 13:16:50.904406653 +0000 UTC m=+1270.370837874" Dec 03 13:16:50 crc kubenswrapper[4986]: I1203 13:16:50.913057 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-2dwn7" podStartSLOduration=2.913032985 podStartE2EDuration="2.913032985s" podCreationTimestamp="2025-12-03 13:16:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:16:50.910955339 +0000 UTC m=+1270.377386560" watchObservedRunningTime="2025-12-03 13:16:50.913032985 +0000 UTC m=+1270.379464196" Dec 03 13:16:50 crc kubenswrapper[4986]: I1203 13:16:50.961273 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fd53494-2eec-45b8-80f4-7b94abf29bfb" path="/var/lib/kubelet/pods/5fd53494-2eec-45b8-80f4-7b94abf29bfb/volumes" Dec 03 13:16:50 crc kubenswrapper[4986]: I1203 13:16:50.962292 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0a895ed-eb3a-49d5-95fe-fe470db0eca3" path="/var/lib/kubelet/pods/b0a895ed-eb3a-49d5-95fe-fe470db0eca3/volumes" Dec 03 13:16:50 crc kubenswrapper[4986]: I1203 13:16:50.977559 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Dec 03 13:16:51 crc kubenswrapper[4986]: I1203 13:16:51.668277 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Dec 03 13:16:52 crc kubenswrapper[4986]: I1203 
13:16:52.021449 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Dec 03 13:16:52 crc kubenswrapper[4986]: I1203 13:16:52.062223 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Dec 03 13:16:53 crc kubenswrapper[4986]: I1203 13:16:53.392150 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 03 13:16:53 crc kubenswrapper[4986]: I1203 13:16:53.472871 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-kgc2w"] Dec 03 13:16:53 crc kubenswrapper[4986]: I1203 13:16:53.473109 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7f896c8c65-kgc2w" podUID="f1c17ad3-8eb8-429d-afd6-ac7135f24b4a" containerName="dnsmasq-dns" containerID="cri-o://698c470fb9b32a1eb83707a21d64e144815eb224820f9d502bef562fa5008fc3" gracePeriod=10 Dec 03 13:16:53 crc kubenswrapper[4986]: I1203 13:16:53.512611 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-gfndf"] Dec 03 13:16:53 crc kubenswrapper[4986]: E1203 13:16:53.513012 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0a895ed-eb3a-49d5-95fe-fe470db0eca3" containerName="dnsmasq-dns" Dec 03 13:16:53 crc kubenswrapper[4986]: I1203 13:16:53.513036 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0a895ed-eb3a-49d5-95fe-fe470db0eca3" containerName="dnsmasq-dns" Dec 03 13:16:53 crc kubenswrapper[4986]: E1203 13:16:53.513054 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fd53494-2eec-45b8-80f4-7b94abf29bfb" containerName="dnsmasq-dns" Dec 03 13:16:53 crc kubenswrapper[4986]: I1203 13:16:53.513066 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fd53494-2eec-45b8-80f4-7b94abf29bfb" containerName="dnsmasq-dns" Dec 03 13:16:53 crc kubenswrapper[4986]: E1203 13:16:53.513090 4986 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0a895ed-eb3a-49d5-95fe-fe470db0eca3" containerName="init" Dec 03 13:16:53 crc kubenswrapper[4986]: I1203 13:16:53.513099 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0a895ed-eb3a-49d5-95fe-fe470db0eca3" containerName="init" Dec 03 13:16:53 crc kubenswrapper[4986]: E1203 13:16:53.513115 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fd53494-2eec-45b8-80f4-7b94abf29bfb" containerName="init" Dec 03 13:16:53 crc kubenswrapper[4986]: I1203 13:16:53.513124 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fd53494-2eec-45b8-80f4-7b94abf29bfb" containerName="init" Dec 03 13:16:53 crc kubenswrapper[4986]: I1203 13:16:53.513321 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fd53494-2eec-45b8-80f4-7b94abf29bfb" containerName="dnsmasq-dns" Dec 03 13:16:53 crc kubenswrapper[4986]: I1203 13:16:53.513340 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0a895ed-eb3a-49d5-95fe-fe470db0eca3" containerName="dnsmasq-dns" Dec 03 13:16:53 crc kubenswrapper[4986]: I1203 13:16:53.514380 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-gfndf" Dec 03 13:16:53 crc kubenswrapper[4986]: I1203 13:16:53.529007 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-gfndf"] Dec 03 13:16:53 crc kubenswrapper[4986]: I1203 13:16:53.606167 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9339da5-40ba-490f-8309-389ec66fd0d2-dns-svc\") pod \"dnsmasq-dns-698758b865-gfndf\" (UID: \"d9339da5-40ba-490f-8309-389ec66fd0d2\") " pod="openstack/dnsmasq-dns-698758b865-gfndf" Dec 03 13:16:53 crc kubenswrapper[4986]: I1203 13:16:53.606485 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9339da5-40ba-490f-8309-389ec66fd0d2-config\") pod \"dnsmasq-dns-698758b865-gfndf\" (UID: \"d9339da5-40ba-490f-8309-389ec66fd0d2\") " pod="openstack/dnsmasq-dns-698758b865-gfndf" Dec 03 13:16:53 crc kubenswrapper[4986]: I1203 13:16:53.606510 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9339da5-40ba-490f-8309-389ec66fd0d2-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-gfndf\" (UID: \"d9339da5-40ba-490f-8309-389ec66fd0d2\") " pod="openstack/dnsmasq-dns-698758b865-gfndf" Dec 03 13:16:53 crc kubenswrapper[4986]: I1203 13:16:53.606544 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d9339da5-40ba-490f-8309-389ec66fd0d2-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-gfndf\" (UID: \"d9339da5-40ba-490f-8309-389ec66fd0d2\") " pod="openstack/dnsmasq-dns-698758b865-gfndf" Dec 03 13:16:53 crc kubenswrapper[4986]: I1203 13:16:53.606607 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-rc57t\" (UniqueName: \"kubernetes.io/projected/d9339da5-40ba-490f-8309-389ec66fd0d2-kube-api-access-rc57t\") pod \"dnsmasq-dns-698758b865-gfndf\" (UID: \"d9339da5-40ba-490f-8309-389ec66fd0d2\") " pod="openstack/dnsmasq-dns-698758b865-gfndf" Dec 03 13:16:53 crc kubenswrapper[4986]: I1203 13:16:53.708180 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9339da5-40ba-490f-8309-389ec66fd0d2-config\") pod \"dnsmasq-dns-698758b865-gfndf\" (UID: \"d9339da5-40ba-490f-8309-389ec66fd0d2\") " pod="openstack/dnsmasq-dns-698758b865-gfndf" Dec 03 13:16:53 crc kubenswrapper[4986]: I1203 13:16:53.708244 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9339da5-40ba-490f-8309-389ec66fd0d2-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-gfndf\" (UID: \"d9339da5-40ba-490f-8309-389ec66fd0d2\") " pod="openstack/dnsmasq-dns-698758b865-gfndf" Dec 03 13:16:53 crc kubenswrapper[4986]: I1203 13:16:53.708352 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d9339da5-40ba-490f-8309-389ec66fd0d2-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-gfndf\" (UID: \"d9339da5-40ba-490f-8309-389ec66fd0d2\") " pod="openstack/dnsmasq-dns-698758b865-gfndf" Dec 03 13:16:53 crc kubenswrapper[4986]: I1203 13:16:53.708422 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rc57t\" (UniqueName: \"kubernetes.io/projected/d9339da5-40ba-490f-8309-389ec66fd0d2-kube-api-access-rc57t\") pod \"dnsmasq-dns-698758b865-gfndf\" (UID: \"d9339da5-40ba-490f-8309-389ec66fd0d2\") " pod="openstack/dnsmasq-dns-698758b865-gfndf" Dec 03 13:16:53 crc kubenswrapper[4986]: I1203 13:16:53.708476 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/d9339da5-40ba-490f-8309-389ec66fd0d2-dns-svc\") pod \"dnsmasq-dns-698758b865-gfndf\" (UID: \"d9339da5-40ba-490f-8309-389ec66fd0d2\") " pod="openstack/dnsmasq-dns-698758b865-gfndf" Dec 03 13:16:53 crc kubenswrapper[4986]: I1203 13:16:53.709151 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9339da5-40ba-490f-8309-389ec66fd0d2-config\") pod \"dnsmasq-dns-698758b865-gfndf\" (UID: \"d9339da5-40ba-490f-8309-389ec66fd0d2\") " pod="openstack/dnsmasq-dns-698758b865-gfndf" Dec 03 13:16:53 crc kubenswrapper[4986]: I1203 13:16:53.709542 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9339da5-40ba-490f-8309-389ec66fd0d2-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-gfndf\" (UID: \"d9339da5-40ba-490f-8309-389ec66fd0d2\") " pod="openstack/dnsmasq-dns-698758b865-gfndf" Dec 03 13:16:53 crc kubenswrapper[4986]: I1203 13:16:53.709722 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d9339da5-40ba-490f-8309-389ec66fd0d2-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-gfndf\" (UID: \"d9339da5-40ba-490f-8309-389ec66fd0d2\") " pod="openstack/dnsmasq-dns-698758b865-gfndf" Dec 03 13:16:53 crc kubenswrapper[4986]: I1203 13:16:53.709836 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9339da5-40ba-490f-8309-389ec66fd0d2-dns-svc\") pod \"dnsmasq-dns-698758b865-gfndf\" (UID: \"d9339da5-40ba-490f-8309-389ec66fd0d2\") " pod="openstack/dnsmasq-dns-698758b865-gfndf" Dec 03 13:16:53 crc kubenswrapper[4986]: I1203 13:16:53.742273 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc57t\" (UniqueName: \"kubernetes.io/projected/d9339da5-40ba-490f-8309-389ec66fd0d2-kube-api-access-rc57t\") pod \"dnsmasq-dns-698758b865-gfndf\" 
(UID: \"d9339da5-40ba-490f-8309-389ec66fd0d2\") " pod="openstack/dnsmasq-dns-698758b865-gfndf" Dec 03 13:16:53 crc kubenswrapper[4986]: I1203 13:16:53.829740 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-gfndf" Dec 03 13:16:53 crc kubenswrapper[4986]: I1203 13:16:53.895018 4986 generic.go:334] "Generic (PLEG): container finished" podID="f1c17ad3-8eb8-429d-afd6-ac7135f24b4a" containerID="698c470fb9b32a1eb83707a21d64e144815eb224820f9d502bef562fa5008fc3" exitCode=0 Dec 03 13:16:53 crc kubenswrapper[4986]: I1203 13:16:53.895060 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-kgc2w" event={"ID":"f1c17ad3-8eb8-429d-afd6-ac7135f24b4a","Type":"ContainerDied","Data":"698c470fb9b32a1eb83707a21d64e144815eb224820f9d502bef562fa5008fc3"} Dec 03 13:16:53 crc kubenswrapper[4986]: I1203 13:16:53.895087 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-kgc2w" event={"ID":"f1c17ad3-8eb8-429d-afd6-ac7135f24b4a","Type":"ContainerDied","Data":"0a9a9706939a771d4ff5987e420a0fd2ca3a61338cf802b34feb2298837b587e"} Dec 03 13:16:53 crc kubenswrapper[4986]: I1203 13:16:53.895102 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a9a9706939a771d4ff5987e420a0fd2ca3a61338cf802b34feb2298837b587e" Dec 03 13:16:53 crc kubenswrapper[4986]: I1203 13:16:53.910724 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-kgc2w" Dec 03 13:16:54 crc kubenswrapper[4986]: I1203 13:16:54.012041 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1c17ad3-8eb8-429d-afd6-ac7135f24b4a-dns-svc\") pod \"f1c17ad3-8eb8-429d-afd6-ac7135f24b4a\" (UID: \"f1c17ad3-8eb8-429d-afd6-ac7135f24b4a\") " Dec 03 13:16:54 crc kubenswrapper[4986]: I1203 13:16:54.012426 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tq8bj\" (UniqueName: \"kubernetes.io/projected/f1c17ad3-8eb8-429d-afd6-ac7135f24b4a-kube-api-access-tq8bj\") pod \"f1c17ad3-8eb8-429d-afd6-ac7135f24b4a\" (UID: \"f1c17ad3-8eb8-429d-afd6-ac7135f24b4a\") " Dec 03 13:16:54 crc kubenswrapper[4986]: I1203 13:16:54.012525 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1c17ad3-8eb8-429d-afd6-ac7135f24b4a-config\") pod \"f1c17ad3-8eb8-429d-afd6-ac7135f24b4a\" (UID: \"f1c17ad3-8eb8-429d-afd6-ac7135f24b4a\") " Dec 03 13:16:54 crc kubenswrapper[4986]: I1203 13:16:54.012588 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f1c17ad3-8eb8-429d-afd6-ac7135f24b4a-ovsdbserver-sb\") pod \"f1c17ad3-8eb8-429d-afd6-ac7135f24b4a\" (UID: \"f1c17ad3-8eb8-429d-afd6-ac7135f24b4a\") " Dec 03 13:16:54 crc kubenswrapper[4986]: I1203 13:16:54.021545 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1c17ad3-8eb8-429d-afd6-ac7135f24b4a-kube-api-access-tq8bj" (OuterVolumeSpecName: "kube-api-access-tq8bj") pod "f1c17ad3-8eb8-429d-afd6-ac7135f24b4a" (UID: "f1c17ad3-8eb8-429d-afd6-ac7135f24b4a"). InnerVolumeSpecName "kube-api-access-tq8bj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:16:54 crc kubenswrapper[4986]: I1203 13:16:54.051112 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1c17ad3-8eb8-429d-afd6-ac7135f24b4a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f1c17ad3-8eb8-429d-afd6-ac7135f24b4a" (UID: "f1c17ad3-8eb8-429d-afd6-ac7135f24b4a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:16:54 crc kubenswrapper[4986]: I1203 13:16:54.054565 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1c17ad3-8eb8-429d-afd6-ac7135f24b4a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f1c17ad3-8eb8-429d-afd6-ac7135f24b4a" (UID: "f1c17ad3-8eb8-429d-afd6-ac7135f24b4a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:16:54 crc kubenswrapper[4986]: I1203 13:16:54.058172 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1c17ad3-8eb8-429d-afd6-ac7135f24b4a-config" (OuterVolumeSpecName: "config") pod "f1c17ad3-8eb8-429d-afd6-ac7135f24b4a" (UID: "f1c17ad3-8eb8-429d-afd6-ac7135f24b4a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:16:54 crc kubenswrapper[4986]: I1203 13:16:54.114664 4986 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f1c17ad3-8eb8-429d-afd6-ac7135f24b4a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 03 13:16:54 crc kubenswrapper[4986]: I1203 13:16:54.114705 4986 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1c17ad3-8eb8-429d-afd6-ac7135f24b4a-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 13:16:54 crc kubenswrapper[4986]: I1203 13:16:54.114717 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tq8bj\" (UniqueName: \"kubernetes.io/projected/f1c17ad3-8eb8-429d-afd6-ac7135f24b4a-kube-api-access-tq8bj\") on node \"crc\" DevicePath \"\"" Dec 03 13:16:54 crc kubenswrapper[4986]: I1203 13:16:54.114727 4986 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1c17ad3-8eb8-429d-afd6-ac7135f24b4a-config\") on node \"crc\" DevicePath \"\"" Dec 03 13:16:54 crc kubenswrapper[4986]: I1203 13:16:54.265417 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-gfndf"] Dec 03 13:16:54 crc kubenswrapper[4986]: W1203 13:16:54.275219 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd9339da5_40ba_490f_8309_389ec66fd0d2.slice/crio-ea8799acf594edee25c399c2f65f836227181468d75147bde47a3b094705362a WatchSource:0}: Error finding container ea8799acf594edee25c399c2f65f836227181468d75147bde47a3b094705362a: Status 404 returned error can't find the container with id ea8799acf594edee25c399c2f65f836227181468d75147bde47a3b094705362a Dec 03 13:16:54 crc kubenswrapper[4986]: I1203 13:16:54.601838 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Dec 03 13:16:54 crc 
kubenswrapper[4986]: E1203 13:16:54.602201 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1c17ad3-8eb8-429d-afd6-ac7135f24b4a" containerName="dnsmasq-dns" Dec 03 13:16:54 crc kubenswrapper[4986]: I1203 13:16:54.602218 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1c17ad3-8eb8-429d-afd6-ac7135f24b4a" containerName="dnsmasq-dns" Dec 03 13:16:54 crc kubenswrapper[4986]: E1203 13:16:54.602245 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1c17ad3-8eb8-429d-afd6-ac7135f24b4a" containerName="init" Dec 03 13:16:54 crc kubenswrapper[4986]: I1203 13:16:54.602253 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1c17ad3-8eb8-429d-afd6-ac7135f24b4a" containerName="init" Dec 03 13:16:54 crc kubenswrapper[4986]: I1203 13:16:54.602467 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1c17ad3-8eb8-429d-afd6-ac7135f24b4a" containerName="dnsmasq-dns" Dec 03 13:16:54 crc kubenswrapper[4986]: I1203 13:16:54.607061 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Dec 03 13:16:54 crc kubenswrapper[4986]: I1203 13:16:54.608986 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Dec 03 13:16:54 crc kubenswrapper[4986]: I1203 13:16:54.609004 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Dec 03 13:16:54 crc kubenswrapper[4986]: I1203 13:16:54.609695 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Dec 03 13:16:54 crc kubenswrapper[4986]: I1203 13:16:54.609753 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-7q7wc" Dec 03 13:16:54 crc kubenswrapper[4986]: I1203 13:16:54.623269 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 03 13:16:54 crc kubenswrapper[4986]: I1203 13:16:54.723840 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe-lock\") pod \"swift-storage-0\" (UID: \"cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe\") " pod="openstack/swift-storage-0" Dec 03 13:16:54 crc kubenswrapper[4986]: I1203 13:16:54.724591 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe-etc-swift\") pod \"swift-storage-0\" (UID: \"cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe\") " pod="openstack/swift-storage-0" Dec 03 13:16:54 crc kubenswrapper[4986]: I1203 13:16:54.724699 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe-cache\") pod \"swift-storage-0\" (UID: \"cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe\") " pod="openstack/swift-storage-0" Dec 03 13:16:54 crc 
kubenswrapper[4986]: I1203 13:16:54.724785 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe\") " pod="openstack/swift-storage-0" Dec 03 13:16:54 crc kubenswrapper[4986]: I1203 13:16:54.724915 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw9sp\" (UniqueName: \"kubernetes.io/projected/cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe-kube-api-access-cw9sp\") pod \"swift-storage-0\" (UID: \"cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe\") " pod="openstack/swift-storage-0" Dec 03 13:16:54 crc kubenswrapper[4986]: I1203 13:16:54.826081 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe-lock\") pod \"swift-storage-0\" (UID: \"cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe\") " pod="openstack/swift-storage-0" Dec 03 13:16:54 crc kubenswrapper[4986]: I1203 13:16:54.826128 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe-etc-swift\") pod \"swift-storage-0\" (UID: \"cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe\") " pod="openstack/swift-storage-0" Dec 03 13:16:54 crc kubenswrapper[4986]: I1203 13:16:54.826169 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe-cache\") pod \"swift-storage-0\" (UID: \"cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe\") " pod="openstack/swift-storage-0" Dec 03 13:16:54 crc kubenswrapper[4986]: I1203 13:16:54.826188 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe\") " pod="openstack/swift-storage-0" Dec 03 13:16:54 crc kubenswrapper[4986]: I1203 13:16:54.826245 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cw9sp\" (UniqueName: \"kubernetes.io/projected/cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe-kube-api-access-cw9sp\") pod \"swift-storage-0\" (UID: \"cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe\") " pod="openstack/swift-storage-0" Dec 03 13:16:54 crc kubenswrapper[4986]: I1203 13:16:54.826921 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe-lock\") pod \"swift-storage-0\" (UID: \"cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe\") " pod="openstack/swift-storage-0" Dec 03 13:16:54 crc kubenswrapper[4986]: E1203 13:16:54.826998 4986 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 03 13:16:54 crc kubenswrapper[4986]: E1203 13:16:54.827010 4986 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 03 13:16:54 crc kubenswrapper[4986]: E1203 13:16:54.827043 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe-etc-swift podName:cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe nodeName:}" failed. No retries permitted until 2025-12-03 13:16:55.327030016 +0000 UTC m=+1274.793461207 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe-etc-swift") pod "swift-storage-0" (UID: "cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe") : configmap "swift-ring-files" not found Dec 03 13:16:54 crc kubenswrapper[4986]: I1203 13:16:54.827368 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe-cache\") pod \"swift-storage-0\" (UID: \"cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe\") " pod="openstack/swift-storage-0" Dec 03 13:16:54 crc kubenswrapper[4986]: I1203 13:16:54.827589 4986 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/swift-storage-0" Dec 03 13:16:54 crc kubenswrapper[4986]: I1203 13:16:54.851908 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw9sp\" (UniqueName: \"kubernetes.io/projected/cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe-kube-api-access-cw9sp\") pod \"swift-storage-0\" (UID: \"cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe\") " pod="openstack/swift-storage-0" Dec 03 13:16:54 crc kubenswrapper[4986]: I1203 13:16:54.859915 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe\") " pod="openstack/swift-storage-0" Dec 03 13:16:54 crc kubenswrapper[4986]: I1203 13:16:54.911582 4986 generic.go:334] "Generic (PLEG): container finished" podID="d9339da5-40ba-490f-8309-389ec66fd0d2" containerID="72781ace6ad04f6f7b3ed5bc8a4b671dca3f4355392598ed5f1698bc62dbdeda" exitCode=0 Dec 03 13:16:54 crc kubenswrapper[4986]: I1203 13:16:54.913459 4986 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-kgc2w" Dec 03 13:16:54 crc kubenswrapper[4986]: I1203 13:16:54.911854 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-gfndf" event={"ID":"d9339da5-40ba-490f-8309-389ec66fd0d2","Type":"ContainerDied","Data":"72781ace6ad04f6f7b3ed5bc8a4b671dca3f4355392598ed5f1698bc62dbdeda"} Dec 03 13:16:54 crc kubenswrapper[4986]: I1203 13:16:54.913764 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-gfndf" event={"ID":"d9339da5-40ba-490f-8309-389ec66fd0d2","Type":"ContainerStarted","Data":"ea8799acf594edee25c399c2f65f836227181468d75147bde47a3b094705362a"} Dec 03 13:16:55 crc kubenswrapper[4986]: I1203 13:16:55.014368 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-kgc2w"] Dec 03 13:16:55 crc kubenswrapper[4986]: I1203 13:16:55.021177 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-kgc2w"] Dec 03 13:16:55 crc kubenswrapper[4986]: I1203 13:16:55.335218 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe-etc-swift\") pod \"swift-storage-0\" (UID: \"cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe\") " pod="openstack/swift-storage-0" Dec 03 13:16:55 crc kubenswrapper[4986]: E1203 13:16:55.335456 4986 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 03 13:16:55 crc kubenswrapper[4986]: E1203 13:16:55.335963 4986 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 03 13:16:55 crc kubenswrapper[4986]: E1203 13:16:55.336104 4986 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe-etc-swift podName:cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe nodeName:}" failed. No retries permitted until 2025-12-03 13:16:56.33607816 +0000 UTC m=+1275.802509351 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe-etc-swift") pod "swift-storage-0" (UID: "cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe") : configmap "swift-ring-files" not found Dec 03 13:16:55 crc kubenswrapper[4986]: I1203 13:16:55.923908 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-gfndf" event={"ID":"d9339da5-40ba-490f-8309-389ec66fd0d2","Type":"ContainerStarted","Data":"e792b52c823c49082b555ed5bd65bd798bd2827b627ae09894b673d79d462ec0"} Dec 03 13:16:55 crc kubenswrapper[4986]: I1203 13:16:55.924851 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-gfndf" Dec 03 13:16:55 crc kubenswrapper[4986]: I1203 13:16:55.957995 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-gfndf" podStartSLOduration=2.957973936 podStartE2EDuration="2.957973936s" podCreationTimestamp="2025-12-03 13:16:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:16:55.951914783 +0000 UTC m=+1275.418346034" watchObservedRunningTime="2025-12-03 13:16:55.957973936 +0000 UTC m=+1275.424405137" Dec 03 13:16:56 crc kubenswrapper[4986]: I1203 13:16:56.354429 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe-etc-swift\") pod \"swift-storage-0\" (UID: \"cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe\") " pod="openstack/swift-storage-0" Dec 03 13:16:56 crc kubenswrapper[4986]: E1203 13:16:56.354644 4986 
projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 03 13:16:56 crc kubenswrapper[4986]: E1203 13:16:56.354677 4986 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 03 13:16:56 crc kubenswrapper[4986]: E1203 13:16:56.354738 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe-etc-swift podName:cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe nodeName:}" failed. No retries permitted until 2025-12-03 13:16:58.354719893 +0000 UTC m=+1277.821151084 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe-etc-swift") pod "swift-storage-0" (UID: "cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe") : configmap "swift-ring-files" not found Dec 03 13:16:56 crc kubenswrapper[4986]: I1203 13:16:56.974691 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1c17ad3-8eb8-429d-afd6-ac7135f24b4a" path="/var/lib/kubelet/pods/f1c17ad3-8eb8-429d-afd6-ac7135f24b4a/volumes" Dec 03 13:16:57 crc kubenswrapper[4986]: I1203 13:16:57.955127 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"50659f56-763b-4cac-9ab4-d660c7d777af","Type":"ContainerStarted","Data":"a11d2b182519e459e9a2b7596f2422cd434e47fefee4d81a203876247dc427cc"} Dec 03 13:16:57 crc kubenswrapper[4986]: I1203 13:16:57.957034 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kqn7t" event={"ID":"d06f8249-00a2-4e59-a055-82ab737c7b92","Type":"ContainerStarted","Data":"0a396d79ef3a54b8410ffae512a023817aa04ecdc930576b4f5906a705074b7a"} Dec 03 13:16:57 crc kubenswrapper[4986]: I1203 13:16:57.957397 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-kqn7t" Dec 03 13:16:58 crc 
kubenswrapper[4986]: I1203 13:16:58.002505 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-kqn7t" podStartSLOduration=4.584183873 podStartE2EDuration="32.002488196s" podCreationTimestamp="2025-12-03 13:16:26 +0000 UTC" firstStartedPulling="2025-12-03 13:16:29.314549514 +0000 UTC m=+1248.780980695" lastFinishedPulling="2025-12-03 13:16:56.732853827 +0000 UTC m=+1276.199285018" observedRunningTime="2025-12-03 13:16:57.995490368 +0000 UTC m=+1277.461921559" watchObservedRunningTime="2025-12-03 13:16:58.002488196 +0000 UTC m=+1277.468919387" Dec 03 13:16:58 crc kubenswrapper[4986]: I1203 13:16:58.389223 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe-etc-swift\") pod \"swift-storage-0\" (UID: \"cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe\") " pod="openstack/swift-storage-0" Dec 03 13:16:58 crc kubenswrapper[4986]: E1203 13:16:58.389632 4986 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 03 13:16:58 crc kubenswrapper[4986]: E1203 13:16:58.389748 4986 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 03 13:16:58 crc kubenswrapper[4986]: E1203 13:16:58.389802 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe-etc-swift podName:cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe nodeName:}" failed. No retries permitted until 2025-12-03 13:17:02.389783367 +0000 UTC m=+1281.856214558 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe-etc-swift") pod "swift-storage-0" (UID: "cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe") : configmap "swift-ring-files" not found Dec 03 13:16:58 crc kubenswrapper[4986]: I1203 13:16:58.615091 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-sxrg6"] Dec 03 13:16:58 crc kubenswrapper[4986]: I1203 13:16:58.617364 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-sxrg6" Dec 03 13:16:58 crc kubenswrapper[4986]: I1203 13:16:58.619822 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Dec 03 13:16:58 crc kubenswrapper[4986]: I1203 13:16:58.619839 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Dec 03 13:16:58 crc kubenswrapper[4986]: I1203 13:16:58.620686 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 03 13:16:58 crc kubenswrapper[4986]: I1203 13:16:58.628143 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-sxrg6"] Dec 03 13:16:58 crc kubenswrapper[4986]: I1203 13:16:58.695638 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/95163eb6-a8f2-45d5-b816-84dd6ffbdab2-etc-swift\") pod \"swift-ring-rebalance-sxrg6\" (UID: \"95163eb6-a8f2-45d5-b816-84dd6ffbdab2\") " pod="openstack/swift-ring-rebalance-sxrg6" Dec 03 13:16:58 crc kubenswrapper[4986]: I1203 13:16:58.696187 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95163eb6-a8f2-45d5-b816-84dd6ffbdab2-combined-ca-bundle\") pod \"swift-ring-rebalance-sxrg6\" (UID: 
\"95163eb6-a8f2-45d5-b816-84dd6ffbdab2\") " pod="openstack/swift-ring-rebalance-sxrg6" Dec 03 13:16:58 crc kubenswrapper[4986]: I1203 13:16:58.696247 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/95163eb6-a8f2-45d5-b816-84dd6ffbdab2-dispersionconf\") pod \"swift-ring-rebalance-sxrg6\" (UID: \"95163eb6-a8f2-45d5-b816-84dd6ffbdab2\") " pod="openstack/swift-ring-rebalance-sxrg6" Dec 03 13:16:58 crc kubenswrapper[4986]: I1203 13:16:58.696333 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bw9n\" (UniqueName: \"kubernetes.io/projected/95163eb6-a8f2-45d5-b816-84dd6ffbdab2-kube-api-access-6bw9n\") pod \"swift-ring-rebalance-sxrg6\" (UID: \"95163eb6-a8f2-45d5-b816-84dd6ffbdab2\") " pod="openstack/swift-ring-rebalance-sxrg6" Dec 03 13:16:58 crc kubenswrapper[4986]: I1203 13:16:58.696363 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/95163eb6-a8f2-45d5-b816-84dd6ffbdab2-swiftconf\") pod \"swift-ring-rebalance-sxrg6\" (UID: \"95163eb6-a8f2-45d5-b816-84dd6ffbdab2\") " pod="openstack/swift-ring-rebalance-sxrg6" Dec 03 13:16:58 crc kubenswrapper[4986]: I1203 13:16:58.696806 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/95163eb6-a8f2-45d5-b816-84dd6ffbdab2-scripts\") pod \"swift-ring-rebalance-sxrg6\" (UID: \"95163eb6-a8f2-45d5-b816-84dd6ffbdab2\") " pod="openstack/swift-ring-rebalance-sxrg6" Dec 03 13:16:58 crc kubenswrapper[4986]: I1203 13:16:58.696914 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/95163eb6-a8f2-45d5-b816-84dd6ffbdab2-ring-data-devices\") pod 
\"swift-ring-rebalance-sxrg6\" (UID: \"95163eb6-a8f2-45d5-b816-84dd6ffbdab2\") " pod="openstack/swift-ring-rebalance-sxrg6" Dec 03 13:16:58 crc kubenswrapper[4986]: I1203 13:16:58.798920 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/95163eb6-a8f2-45d5-b816-84dd6ffbdab2-etc-swift\") pod \"swift-ring-rebalance-sxrg6\" (UID: \"95163eb6-a8f2-45d5-b816-84dd6ffbdab2\") " pod="openstack/swift-ring-rebalance-sxrg6" Dec 03 13:16:58 crc kubenswrapper[4986]: I1203 13:16:58.799023 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95163eb6-a8f2-45d5-b816-84dd6ffbdab2-combined-ca-bundle\") pod \"swift-ring-rebalance-sxrg6\" (UID: \"95163eb6-a8f2-45d5-b816-84dd6ffbdab2\") " pod="openstack/swift-ring-rebalance-sxrg6" Dec 03 13:16:58 crc kubenswrapper[4986]: I1203 13:16:58.799044 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/95163eb6-a8f2-45d5-b816-84dd6ffbdab2-dispersionconf\") pod \"swift-ring-rebalance-sxrg6\" (UID: \"95163eb6-a8f2-45d5-b816-84dd6ffbdab2\") " pod="openstack/swift-ring-rebalance-sxrg6" Dec 03 13:16:58 crc kubenswrapper[4986]: I1203 13:16:58.799067 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bw9n\" (UniqueName: \"kubernetes.io/projected/95163eb6-a8f2-45d5-b816-84dd6ffbdab2-kube-api-access-6bw9n\") pod \"swift-ring-rebalance-sxrg6\" (UID: \"95163eb6-a8f2-45d5-b816-84dd6ffbdab2\") " pod="openstack/swift-ring-rebalance-sxrg6" Dec 03 13:16:58 crc kubenswrapper[4986]: I1203 13:16:58.799084 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/95163eb6-a8f2-45d5-b816-84dd6ffbdab2-swiftconf\") pod \"swift-ring-rebalance-sxrg6\" (UID: \"95163eb6-a8f2-45d5-b816-84dd6ffbdab2\") " 
pod="openstack/swift-ring-rebalance-sxrg6" Dec 03 13:16:58 crc kubenswrapper[4986]: I1203 13:16:58.799114 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/95163eb6-a8f2-45d5-b816-84dd6ffbdab2-scripts\") pod \"swift-ring-rebalance-sxrg6\" (UID: \"95163eb6-a8f2-45d5-b816-84dd6ffbdab2\") " pod="openstack/swift-ring-rebalance-sxrg6" Dec 03 13:16:58 crc kubenswrapper[4986]: I1203 13:16:58.799157 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/95163eb6-a8f2-45d5-b816-84dd6ffbdab2-ring-data-devices\") pod \"swift-ring-rebalance-sxrg6\" (UID: \"95163eb6-a8f2-45d5-b816-84dd6ffbdab2\") " pod="openstack/swift-ring-rebalance-sxrg6" Dec 03 13:16:58 crc kubenswrapper[4986]: I1203 13:16:58.799761 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/95163eb6-a8f2-45d5-b816-84dd6ffbdab2-etc-swift\") pod \"swift-ring-rebalance-sxrg6\" (UID: \"95163eb6-a8f2-45d5-b816-84dd6ffbdab2\") " pod="openstack/swift-ring-rebalance-sxrg6" Dec 03 13:16:58 crc kubenswrapper[4986]: I1203 13:16:58.800023 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/95163eb6-a8f2-45d5-b816-84dd6ffbdab2-ring-data-devices\") pod \"swift-ring-rebalance-sxrg6\" (UID: \"95163eb6-a8f2-45d5-b816-84dd6ffbdab2\") " pod="openstack/swift-ring-rebalance-sxrg6" Dec 03 13:16:58 crc kubenswrapper[4986]: I1203 13:16:58.800485 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/95163eb6-a8f2-45d5-b816-84dd6ffbdab2-scripts\") pod \"swift-ring-rebalance-sxrg6\" (UID: \"95163eb6-a8f2-45d5-b816-84dd6ffbdab2\") " pod="openstack/swift-ring-rebalance-sxrg6" Dec 03 13:16:58 crc kubenswrapper[4986]: I1203 13:16:58.803930 4986 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95163eb6-a8f2-45d5-b816-84dd6ffbdab2-combined-ca-bundle\") pod \"swift-ring-rebalance-sxrg6\" (UID: \"95163eb6-a8f2-45d5-b816-84dd6ffbdab2\") " pod="openstack/swift-ring-rebalance-sxrg6" Dec 03 13:16:58 crc kubenswrapper[4986]: I1203 13:16:58.805337 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/95163eb6-a8f2-45d5-b816-84dd6ffbdab2-swiftconf\") pod \"swift-ring-rebalance-sxrg6\" (UID: \"95163eb6-a8f2-45d5-b816-84dd6ffbdab2\") " pod="openstack/swift-ring-rebalance-sxrg6" Dec 03 13:16:58 crc kubenswrapper[4986]: I1203 13:16:58.817956 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/95163eb6-a8f2-45d5-b816-84dd6ffbdab2-dispersionconf\") pod \"swift-ring-rebalance-sxrg6\" (UID: \"95163eb6-a8f2-45d5-b816-84dd6ffbdab2\") " pod="openstack/swift-ring-rebalance-sxrg6" Dec 03 13:16:58 crc kubenswrapper[4986]: I1203 13:16:58.818299 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bw9n\" (UniqueName: \"kubernetes.io/projected/95163eb6-a8f2-45d5-b816-84dd6ffbdab2-kube-api-access-6bw9n\") pod \"swift-ring-rebalance-sxrg6\" (UID: \"95163eb6-a8f2-45d5-b816-84dd6ffbdab2\") " pod="openstack/swift-ring-rebalance-sxrg6" Dec 03 13:16:58 crc kubenswrapper[4986]: I1203 13:16:58.947197 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-sxrg6" Dec 03 13:16:58 crc kubenswrapper[4986]: I1203 13:16:58.966688 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a0496538-1ab2-45a2-94ab-fc3474533ec3","Type":"ContainerStarted","Data":"ea7254afe1cf617571e7012d875c066183319aef8a4b92304bbc120d071670f4"} Dec 03 13:16:59 crc kubenswrapper[4986]: I1203 13:16:59.045508 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-2dwn7" Dec 03 13:16:59 crc kubenswrapper[4986]: I1203 13:16:59.461176 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-sxrg6"] Dec 03 13:16:59 crc kubenswrapper[4986]: W1203 13:16:59.468505 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95163eb6_a8f2_45d5_b816_84dd6ffbdab2.slice/crio-c370e0eee3e0d72e01ed0a46e52f67b654683a3c4eb3aa9123574205df33842e WatchSource:0}: Error finding container c370e0eee3e0d72e01ed0a46e52f67b654683a3c4eb3aa9123574205df33842e: Status 404 returned error can't find the container with id c370e0eee3e0d72e01ed0a46e52f67b654683a3c4eb3aa9123574205df33842e Dec 03 13:16:59 crc kubenswrapper[4986]: I1203 13:16:59.976682 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-sxrg6" event={"ID":"95163eb6-a8f2-45d5-b816-84dd6ffbdab2","Type":"ContainerStarted","Data":"c370e0eee3e0d72e01ed0a46e52f67b654683a3c4eb3aa9123574205df33842e"} Dec 03 13:17:00 crc kubenswrapper[4986]: I1203 13:17:00.986992 4986 generic.go:334] "Generic (PLEG): container finished" podID="50659f56-763b-4cac-9ab4-d660c7d777af" containerID="a11d2b182519e459e9a2b7596f2422cd434e47fefee4d81a203876247dc427cc" exitCode=0 Dec 03 13:17:00 crc kubenswrapper[4986]: I1203 13:17:00.987105 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"50659f56-763b-4cac-9ab4-d660c7d777af","Type":"ContainerDied","Data":"a11d2b182519e459e9a2b7596f2422cd434e47fefee4d81a203876247dc427cc"} Dec 03 13:17:01 crc kubenswrapper[4986]: I1203 13:17:01.998988 4986 generic.go:334] "Generic (PLEG): container finished" podID="a0496538-1ab2-45a2-94ab-fc3474533ec3" containerID="ea7254afe1cf617571e7012d875c066183319aef8a4b92304bbc120d071670f4" exitCode=0 Dec 03 13:17:01 crc kubenswrapper[4986]: I1203 13:17:01.999033 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a0496538-1ab2-45a2-94ab-fc3474533ec3","Type":"ContainerDied","Data":"ea7254afe1cf617571e7012d875c066183319aef8a4b92304bbc120d071670f4"} Dec 03 13:17:02 crc kubenswrapper[4986]: I1203 13:17:02.462991 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe-etc-swift\") pod \"swift-storage-0\" (UID: \"cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe\") " pod="openstack/swift-storage-0" Dec 03 13:17:02 crc kubenswrapper[4986]: E1203 13:17:02.463148 4986 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 03 13:17:02 crc kubenswrapper[4986]: E1203 13:17:02.463436 4986 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 03 13:17:02 crc kubenswrapper[4986]: E1203 13:17:02.463489 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe-etc-swift podName:cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe nodeName:}" failed. No retries permitted until 2025-12-03 13:17:10.463472323 +0000 UTC m=+1289.929903504 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe-etc-swift") pod "swift-storage-0" (UID: "cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe") : configmap "swift-ring-files" not found Dec 03 13:17:03 crc kubenswrapper[4986]: I1203 13:17:03.832593 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-gfndf" Dec 03 13:17:03 crc kubenswrapper[4986]: I1203 13:17:03.892235 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-2dwn7"] Dec 03 13:17:03 crc kubenswrapper[4986]: I1203 13:17:03.892462 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-2dwn7" podUID="84767b4d-e93f-41d9-a7e3-795def131772" containerName="dnsmasq-dns" containerID="cri-o://2f8b0a490b4b3d4508218fe6a139bd02292710da2067c2999f42ca14a42fe761" gracePeriod=10 Dec 03 13:17:04 crc kubenswrapper[4986]: I1203 13:17:04.039029 4986 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-2dwn7" podUID="84767b4d-e93f-41d9-a7e3-795def131772" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.109:5353: connect: connection refused" Dec 03 13:17:07 crc kubenswrapper[4986]: I1203 13:17:07.047449 4986 generic.go:334] "Generic (PLEG): container finished" podID="84767b4d-e93f-41d9-a7e3-795def131772" containerID="2f8b0a490b4b3d4508218fe6a139bd02292710da2067c2999f42ca14a42fe761" exitCode=0 Dec 03 13:17:07 crc kubenswrapper[4986]: I1203 13:17:07.047516 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-2dwn7" event={"ID":"84767b4d-e93f-41d9-a7e3-795def131772","Type":"ContainerDied","Data":"2f8b0a490b4b3d4508218fe6a139bd02292710da2067c2999f42ca14a42fe761"} Dec 03 13:17:07 crc kubenswrapper[4986]: I1203 13:17:07.131074 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-2dwn7" Dec 03 13:17:07 crc kubenswrapper[4986]: I1203 13:17:07.240906 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84767b4d-e93f-41d9-a7e3-795def131772-config\") pod \"84767b4d-e93f-41d9-a7e3-795def131772\" (UID: \"84767b4d-e93f-41d9-a7e3-795def131772\") " Dec 03 13:17:07 crc kubenswrapper[4986]: I1203 13:17:07.240981 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sshdg\" (UniqueName: \"kubernetes.io/projected/84767b4d-e93f-41d9-a7e3-795def131772-kube-api-access-sshdg\") pod \"84767b4d-e93f-41d9-a7e3-795def131772\" (UID: \"84767b4d-e93f-41d9-a7e3-795def131772\") " Dec 03 13:17:07 crc kubenswrapper[4986]: I1203 13:17:07.241082 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84767b4d-e93f-41d9-a7e3-795def131772-dns-svc\") pod \"84767b4d-e93f-41d9-a7e3-795def131772\" (UID: \"84767b4d-e93f-41d9-a7e3-795def131772\") " Dec 03 13:17:07 crc kubenswrapper[4986]: I1203 13:17:07.241110 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/84767b4d-e93f-41d9-a7e3-795def131772-ovsdbserver-nb\") pod \"84767b4d-e93f-41d9-a7e3-795def131772\" (UID: \"84767b4d-e93f-41d9-a7e3-795def131772\") " Dec 03 13:17:07 crc kubenswrapper[4986]: I1203 13:17:07.241161 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/84767b4d-e93f-41d9-a7e3-795def131772-ovsdbserver-sb\") pod \"84767b4d-e93f-41d9-a7e3-795def131772\" (UID: \"84767b4d-e93f-41d9-a7e3-795def131772\") " Dec 03 13:17:07 crc kubenswrapper[4986]: I1203 13:17:07.244499 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/84767b4d-e93f-41d9-a7e3-795def131772-kube-api-access-sshdg" (OuterVolumeSpecName: "kube-api-access-sshdg") pod "84767b4d-e93f-41d9-a7e3-795def131772" (UID: "84767b4d-e93f-41d9-a7e3-795def131772"). InnerVolumeSpecName "kube-api-access-sshdg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:17:07 crc kubenswrapper[4986]: I1203 13:17:07.277425 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84767b4d-e93f-41d9-a7e3-795def131772-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "84767b4d-e93f-41d9-a7e3-795def131772" (UID: "84767b4d-e93f-41d9-a7e3-795def131772"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:17:07 crc kubenswrapper[4986]: I1203 13:17:07.279777 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84767b4d-e93f-41d9-a7e3-795def131772-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "84767b4d-e93f-41d9-a7e3-795def131772" (UID: "84767b4d-e93f-41d9-a7e3-795def131772"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:17:07 crc kubenswrapper[4986]: I1203 13:17:07.280683 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84767b4d-e93f-41d9-a7e3-795def131772-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "84767b4d-e93f-41d9-a7e3-795def131772" (UID: "84767b4d-e93f-41d9-a7e3-795def131772"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:17:07 crc kubenswrapper[4986]: I1203 13:17:07.285278 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84767b4d-e93f-41d9-a7e3-795def131772-config" (OuterVolumeSpecName: "config") pod "84767b4d-e93f-41d9-a7e3-795def131772" (UID: "84767b4d-e93f-41d9-a7e3-795def131772"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:17:07 crc kubenswrapper[4986]: I1203 13:17:07.342947 4986 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84767b4d-e93f-41d9-a7e3-795def131772-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 13:17:07 crc kubenswrapper[4986]: I1203 13:17:07.342980 4986 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/84767b4d-e93f-41d9-a7e3-795def131772-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 03 13:17:07 crc kubenswrapper[4986]: I1203 13:17:07.342990 4986 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/84767b4d-e93f-41d9-a7e3-795def131772-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 03 13:17:07 crc kubenswrapper[4986]: I1203 13:17:07.342997 4986 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84767b4d-e93f-41d9-a7e3-795def131772-config\") on node \"crc\" DevicePath \"\"" Dec 03 13:17:07 crc kubenswrapper[4986]: I1203 13:17:07.343007 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sshdg\" (UniqueName: \"kubernetes.io/projected/84767b4d-e93f-41d9-a7e3-795def131772-kube-api-access-sshdg\") on node \"crc\" DevicePath \"\"" Dec 03 13:17:08 crc kubenswrapper[4986]: I1203 13:17:08.066229 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"905d78e0-0235-400d-8004-1f612a11b60a","Type":"ContainerStarted","Data":"79f945f512f544cd283b82b6888a1e78995218f60e522667796219634df9af94"} Dec 03 13:17:08 crc kubenswrapper[4986]: I1203 13:17:08.068273 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a0496538-1ab2-45a2-94ab-fc3474533ec3","Type":"ContainerStarted","Data":"7a18731b3c061f664048de1865d2d074c930ce92a3a38f3ce1eca2bd4513f39b"} Dec 
03 13:17:08 crc kubenswrapper[4986]: I1203 13:17:08.075983 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-2dwn7" Dec 03 13:17:08 crc kubenswrapper[4986]: I1203 13:17:08.076000 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-2dwn7" event={"ID":"84767b4d-e93f-41d9-a7e3-795def131772","Type":"ContainerDied","Data":"073e9b29202a69b000d3d746b9e0cc4aa6cc7c5ca5d6d200fabd7020ef67dfbf"} Dec 03 13:17:08 crc kubenswrapper[4986]: I1203 13:17:08.076160 4986 scope.go:117] "RemoveContainer" containerID="2f8b0a490b4b3d4508218fe6a139bd02292710da2067c2999f42ca14a42fe761" Dec 03 13:17:08 crc kubenswrapper[4986]: I1203 13:17:08.078872 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"50659f56-763b-4cac-9ab4-d660c7d777af","Type":"ContainerStarted","Data":"85dd37f833c144dc53640187096af98782ab6aab59d6d884c5ee48e5db1d5b1a"} Dec 03 13:17:08 crc kubenswrapper[4986]: I1203 13:17:08.080216 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-sxrg6" event={"ID":"95163eb6-a8f2-45d5-b816-84dd6ffbdab2","Type":"ContainerStarted","Data":"317e23184f159b9ae8d449f2bea1731d4bc56f3662f85c7e210b5cade1f79b4c"} Dec 03 13:17:08 crc kubenswrapper[4986]: I1203 13:17:08.091949 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=3.698588189 podStartE2EDuration="41.091926266s" podCreationTimestamp="2025-12-03 13:16:27 +0000 UTC" firstStartedPulling="2025-12-03 13:16:29.576409304 +0000 UTC m=+1249.042840495" lastFinishedPulling="2025-12-03 13:17:06.969747381 +0000 UTC m=+1286.436178572" observedRunningTime="2025-12-03 13:17:08.086043776 +0000 UTC m=+1287.552474987" watchObservedRunningTime="2025-12-03 13:17:08.091926266 +0000 UTC m=+1287.558357457" Dec 03 13:17:08 crc kubenswrapper[4986]: I1203 13:17:08.100436 4986 scope.go:117] 
"RemoveContainer" containerID="603543b1a680b0caa9e5848c5cceea5803e2e00298e5d397d530277f8bf518ea" Dec 03 13:17:08 crc kubenswrapper[4986]: I1203 13:17:08.114545 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=22.749966772 podStartE2EDuration="50.114524065s" podCreationTimestamp="2025-12-03 13:16:18 +0000 UTC" firstStartedPulling="2025-12-03 13:16:29.367608455 +0000 UTC m=+1248.834039646" lastFinishedPulling="2025-12-03 13:16:56.732165708 +0000 UTC m=+1276.198596939" observedRunningTime="2025-12-03 13:17:08.106103838 +0000 UTC m=+1287.572535049" watchObservedRunningTime="2025-12-03 13:17:08.114524065 +0000 UTC m=+1287.580955256" Dec 03 13:17:08 crc kubenswrapper[4986]: I1203 13:17:08.138798 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=-9223371987.716 podStartE2EDuration="49.138776528s" podCreationTimestamp="2025-12-03 13:16:19 +0000 UTC" firstStartedPulling="2025-12-03 13:16:29.359952539 +0000 UTC m=+1248.826383730" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:17:08.134209695 +0000 UTC m=+1287.600640896" watchObservedRunningTime="2025-12-03 13:17:08.138776528 +0000 UTC m=+1287.605207719" Dec 03 13:17:08 crc kubenswrapper[4986]: I1203 13:17:08.158111 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-sxrg6" podStartSLOduration=2.643486967 podStartE2EDuration="10.158096259s" podCreationTimestamp="2025-12-03 13:16:58 +0000 UTC" firstStartedPulling="2025-12-03 13:16:59.469844446 +0000 UTC m=+1278.936275637" lastFinishedPulling="2025-12-03 13:17:06.984453738 +0000 UTC m=+1286.450884929" observedRunningTime="2025-12-03 13:17:08.156685831 +0000 UTC m=+1287.623117022" watchObservedRunningTime="2025-12-03 13:17:08.158096259 +0000 UTC m=+1287.624527450" Dec 03 13:17:08 crc kubenswrapper[4986]: I1203 
13:17:08.176977 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-2dwn7"] Dec 03 13:17:08 crc kubenswrapper[4986]: I1203 13:17:08.186487 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-2dwn7"] Dec 03 13:17:08 crc kubenswrapper[4986]: E1203 13:17:08.276363 4986 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84767b4d_e93f_41d9_a7e3_795def131772.slice\": RecentStats: unable to find data in memory cache]" Dec 03 13:17:08 crc kubenswrapper[4986]: I1203 13:17:08.482924 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Dec 03 13:17:08 crc kubenswrapper[4986]: I1203 13:17:08.955131 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84767b4d-e93f-41d9-a7e3-795def131772" path="/var/lib/kubelet/pods/84767b4d-e93f-41d9-a7e3-795def131772/volumes" Dec 03 13:17:10 crc kubenswrapper[4986]: I1203 13:17:10.313935 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Dec 03 13:17:10 crc kubenswrapper[4986]: I1203 13:17:10.314364 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Dec 03 13:17:10 crc kubenswrapper[4986]: I1203 13:17:10.483247 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Dec 03 13:17:10 crc kubenswrapper[4986]: I1203 13:17:10.494103 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe-etc-swift\") pod \"swift-storage-0\" (UID: \"cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe\") " pod="openstack/swift-storage-0" Dec 03 13:17:10 crc kubenswrapper[4986]: E1203 13:17:10.494243 4986 projected.go:288] Couldn't get 
configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 03 13:17:10 crc kubenswrapper[4986]: E1203 13:17:10.494268 4986 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 03 13:17:10 crc kubenswrapper[4986]: E1203 13:17:10.494349 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe-etc-swift podName:cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe nodeName:}" failed. No retries permitted until 2025-12-03 13:17:26.494328154 +0000 UTC m=+1305.960759365 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe-etc-swift") pod "swift-storage-0" (UID: "cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe") : configmap "swift-ring-files" not found Dec 03 13:17:10 crc kubenswrapper[4986]: I1203 13:17:10.544046 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Dec 03 13:17:11 crc kubenswrapper[4986]: I1203 13:17:11.351871 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Dec 03 13:17:11 crc kubenswrapper[4986]: I1203 13:17:11.351961 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Dec 03 13:17:12 crc kubenswrapper[4986]: I1203 13:17:12.030340 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Dec 03 13:17:12 crc kubenswrapper[4986]: I1203 13:17:12.188498 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Dec 03 13:17:13 crc kubenswrapper[4986]: I1203 13:17:13.524038 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Dec 03 13:17:13 crc 
kubenswrapper[4986]: I1203 13:17:13.695460 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Dec 03 13:17:13 crc kubenswrapper[4986]: E1203 13:17:13.695825 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84767b4d-e93f-41d9-a7e3-795def131772" containerName="init" Dec 03 13:17:13 crc kubenswrapper[4986]: I1203 13:17:13.695844 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="84767b4d-e93f-41d9-a7e3-795def131772" containerName="init" Dec 03 13:17:13 crc kubenswrapper[4986]: E1203 13:17:13.695865 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84767b4d-e93f-41d9-a7e3-795def131772" containerName="dnsmasq-dns" Dec 03 13:17:13 crc kubenswrapper[4986]: I1203 13:17:13.695875 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="84767b4d-e93f-41d9-a7e3-795def131772" containerName="dnsmasq-dns" Dec 03 13:17:13 crc kubenswrapper[4986]: I1203 13:17:13.696077 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="84767b4d-e93f-41d9-a7e3-795def131772" containerName="dnsmasq-dns" Dec 03 13:17:13 crc kubenswrapper[4986]: I1203 13:17:13.704002 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Dec 03 13:17:13 crc kubenswrapper[4986]: I1203 13:17:13.705867 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 03 13:17:13 crc kubenswrapper[4986]: I1203 13:17:13.706579 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Dec 03 13:17:13 crc kubenswrapper[4986]: I1203 13:17:13.706816 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Dec 03 13:17:13 crc kubenswrapper[4986]: I1203 13:17:13.711218 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-6z27x" Dec 03 13:17:13 crc kubenswrapper[4986]: I1203 13:17:13.711513 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Dec 03 13:17:13 crc kubenswrapper[4986]: I1203 13:17:13.847988 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c06d0ad-4862-4c0f-9cad-5f29aa8af72a-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"2c06d0ad-4862-4c0f-9cad-5f29aa8af72a\") " pod="openstack/ovn-northd-0" Dec 03 13:17:13 crc kubenswrapper[4986]: I1203 13:17:13.848217 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfg6n\" (UniqueName: \"kubernetes.io/projected/2c06d0ad-4862-4c0f-9cad-5f29aa8af72a-kube-api-access-vfg6n\") pod \"ovn-northd-0\" (UID: \"2c06d0ad-4862-4c0f-9cad-5f29aa8af72a\") " pod="openstack/ovn-northd-0" Dec 03 13:17:13 crc kubenswrapper[4986]: I1203 13:17:13.848382 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c06d0ad-4862-4c0f-9cad-5f29aa8af72a-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: 
\"2c06d0ad-4862-4c0f-9cad-5f29aa8af72a\") " pod="openstack/ovn-northd-0" Dec 03 13:17:13 crc kubenswrapper[4986]: I1203 13:17:13.848518 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2c06d0ad-4862-4c0f-9cad-5f29aa8af72a-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"2c06d0ad-4862-4c0f-9cad-5f29aa8af72a\") " pod="openstack/ovn-northd-0" Dec 03 13:17:13 crc kubenswrapper[4986]: I1203 13:17:13.848695 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2c06d0ad-4862-4c0f-9cad-5f29aa8af72a-scripts\") pod \"ovn-northd-0\" (UID: \"2c06d0ad-4862-4c0f-9cad-5f29aa8af72a\") " pod="openstack/ovn-northd-0" Dec 03 13:17:13 crc kubenswrapper[4986]: I1203 13:17:13.848841 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c06d0ad-4862-4c0f-9cad-5f29aa8af72a-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"2c06d0ad-4862-4c0f-9cad-5f29aa8af72a\") " pod="openstack/ovn-northd-0" Dec 03 13:17:13 crc kubenswrapper[4986]: I1203 13:17:13.848989 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c06d0ad-4862-4c0f-9cad-5f29aa8af72a-config\") pod \"ovn-northd-0\" (UID: \"2c06d0ad-4862-4c0f-9cad-5f29aa8af72a\") " pod="openstack/ovn-northd-0" Dec 03 13:17:13 crc kubenswrapper[4986]: I1203 13:17:13.950620 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfg6n\" (UniqueName: \"kubernetes.io/projected/2c06d0ad-4862-4c0f-9cad-5f29aa8af72a-kube-api-access-vfg6n\") pod \"ovn-northd-0\" (UID: \"2c06d0ad-4862-4c0f-9cad-5f29aa8af72a\") " pod="openstack/ovn-northd-0" Dec 03 13:17:13 crc kubenswrapper[4986]: I1203 13:17:13.950691 4986 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c06d0ad-4862-4c0f-9cad-5f29aa8af72a-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"2c06d0ad-4862-4c0f-9cad-5f29aa8af72a\") " pod="openstack/ovn-northd-0" Dec 03 13:17:13 crc kubenswrapper[4986]: I1203 13:17:13.950724 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2c06d0ad-4862-4c0f-9cad-5f29aa8af72a-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"2c06d0ad-4862-4c0f-9cad-5f29aa8af72a\") " pod="openstack/ovn-northd-0" Dec 03 13:17:13 crc kubenswrapper[4986]: I1203 13:17:13.950784 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2c06d0ad-4862-4c0f-9cad-5f29aa8af72a-scripts\") pod \"ovn-northd-0\" (UID: \"2c06d0ad-4862-4c0f-9cad-5f29aa8af72a\") " pod="openstack/ovn-northd-0" Dec 03 13:17:13 crc kubenswrapper[4986]: I1203 13:17:13.950824 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c06d0ad-4862-4c0f-9cad-5f29aa8af72a-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"2c06d0ad-4862-4c0f-9cad-5f29aa8af72a\") " pod="openstack/ovn-northd-0" Dec 03 13:17:13 crc kubenswrapper[4986]: I1203 13:17:13.950868 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c06d0ad-4862-4c0f-9cad-5f29aa8af72a-config\") pod \"ovn-northd-0\" (UID: \"2c06d0ad-4862-4c0f-9cad-5f29aa8af72a\") " pod="openstack/ovn-northd-0" Dec 03 13:17:13 crc kubenswrapper[4986]: I1203 13:17:13.950917 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c06d0ad-4862-4c0f-9cad-5f29aa8af72a-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: 
\"2c06d0ad-4862-4c0f-9cad-5f29aa8af72a\") " pod="openstack/ovn-northd-0" Dec 03 13:17:13 crc kubenswrapper[4986]: I1203 13:17:13.952423 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2c06d0ad-4862-4c0f-9cad-5f29aa8af72a-scripts\") pod \"ovn-northd-0\" (UID: \"2c06d0ad-4862-4c0f-9cad-5f29aa8af72a\") " pod="openstack/ovn-northd-0" Dec 03 13:17:13 crc kubenswrapper[4986]: I1203 13:17:13.952637 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c06d0ad-4862-4c0f-9cad-5f29aa8af72a-config\") pod \"ovn-northd-0\" (UID: \"2c06d0ad-4862-4c0f-9cad-5f29aa8af72a\") " pod="openstack/ovn-northd-0" Dec 03 13:17:13 crc kubenswrapper[4986]: I1203 13:17:13.952658 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2c06d0ad-4862-4c0f-9cad-5f29aa8af72a-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"2c06d0ad-4862-4c0f-9cad-5f29aa8af72a\") " pod="openstack/ovn-northd-0" Dec 03 13:17:13 crc kubenswrapper[4986]: I1203 13:17:13.959068 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c06d0ad-4862-4c0f-9cad-5f29aa8af72a-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"2c06d0ad-4862-4c0f-9cad-5f29aa8af72a\") " pod="openstack/ovn-northd-0" Dec 03 13:17:13 crc kubenswrapper[4986]: I1203 13:17:13.959141 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c06d0ad-4862-4c0f-9cad-5f29aa8af72a-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"2c06d0ad-4862-4c0f-9cad-5f29aa8af72a\") " pod="openstack/ovn-northd-0" Dec 03 13:17:13 crc kubenswrapper[4986]: I1203 13:17:13.961826 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/2c06d0ad-4862-4c0f-9cad-5f29aa8af72a-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"2c06d0ad-4862-4c0f-9cad-5f29aa8af72a\") " pod="openstack/ovn-northd-0" Dec 03 13:17:13 crc kubenswrapper[4986]: I1203 13:17:13.967797 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfg6n\" (UniqueName: \"kubernetes.io/projected/2c06d0ad-4862-4c0f-9cad-5f29aa8af72a-kube-api-access-vfg6n\") pod \"ovn-northd-0\" (UID: \"2c06d0ad-4862-4c0f-9cad-5f29aa8af72a\") " pod="openstack/ovn-northd-0" Dec 03 13:17:14 crc kubenswrapper[4986]: I1203 13:17:14.023871 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Dec 03 13:17:14 crc kubenswrapper[4986]: I1203 13:17:14.123665 4986 generic.go:334] "Generic (PLEG): container finished" podID="95163eb6-a8f2-45d5-b816-84dd6ffbdab2" containerID="317e23184f159b9ae8d449f2bea1731d4bc56f3662f85c7e210b5cade1f79b4c" exitCode=0 Dec 03 13:17:14 crc kubenswrapper[4986]: I1203 13:17:14.123706 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-sxrg6" event={"ID":"95163eb6-a8f2-45d5-b816-84dd6ffbdab2","Type":"ContainerDied","Data":"317e23184f159b9ae8d449f2bea1731d4bc56f3662f85c7e210b5cade1f79b4c"} Dec 03 13:17:14 crc kubenswrapper[4986]: I1203 13:17:14.420983 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Dec 03 13:17:14 crc kubenswrapper[4986]: I1203 13:17:14.492448 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Dec 03 13:17:14 crc kubenswrapper[4986]: W1203 13:17:14.521703 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c06d0ad_4862_4c0f_9cad_5f29aa8af72a.slice/crio-5fd310593e6e125ea411852e0c6a8800dd1f7054216a54bba1f0ccddc39d8795 WatchSource:0}: Error finding container 
5fd310593e6e125ea411852e0c6a8800dd1f7054216a54bba1f0ccddc39d8795: Status 404 returned error can't find the container with id 5fd310593e6e125ea411852e0c6a8800dd1f7054216a54bba1f0ccddc39d8795 Dec 03 13:17:14 crc kubenswrapper[4986]: I1203 13:17:14.526787 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 03 13:17:15 crc kubenswrapper[4986]: I1203 13:17:15.133782 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"2c06d0ad-4862-4c0f-9cad-5f29aa8af72a","Type":"ContainerStarted","Data":"5fd310593e6e125ea411852e0c6a8800dd1f7054216a54bba1f0ccddc39d8795"} Dec 03 13:17:15 crc kubenswrapper[4986]: I1203 13:17:15.803539 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-sxrg6" Dec 03 13:17:15 crc kubenswrapper[4986]: I1203 13:17:15.886353 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bw9n\" (UniqueName: \"kubernetes.io/projected/95163eb6-a8f2-45d5-b816-84dd6ffbdab2-kube-api-access-6bw9n\") pod \"95163eb6-a8f2-45d5-b816-84dd6ffbdab2\" (UID: \"95163eb6-a8f2-45d5-b816-84dd6ffbdab2\") " Dec 03 13:17:15 crc kubenswrapper[4986]: I1203 13:17:15.886465 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/95163eb6-a8f2-45d5-b816-84dd6ffbdab2-swiftconf\") pod \"95163eb6-a8f2-45d5-b816-84dd6ffbdab2\" (UID: \"95163eb6-a8f2-45d5-b816-84dd6ffbdab2\") " Dec 03 13:17:15 crc kubenswrapper[4986]: I1203 13:17:15.886528 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95163eb6-a8f2-45d5-b816-84dd6ffbdab2-combined-ca-bundle\") pod \"95163eb6-a8f2-45d5-b816-84dd6ffbdab2\" (UID: \"95163eb6-a8f2-45d5-b816-84dd6ffbdab2\") " Dec 03 13:17:15 crc kubenswrapper[4986]: I1203 13:17:15.886835 4986 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/95163eb6-a8f2-45d5-b816-84dd6ffbdab2-ring-data-devices\") pod \"95163eb6-a8f2-45d5-b816-84dd6ffbdab2\" (UID: \"95163eb6-a8f2-45d5-b816-84dd6ffbdab2\") " Dec 03 13:17:15 crc kubenswrapper[4986]: I1203 13:17:15.886868 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/95163eb6-a8f2-45d5-b816-84dd6ffbdab2-dispersionconf\") pod \"95163eb6-a8f2-45d5-b816-84dd6ffbdab2\" (UID: \"95163eb6-a8f2-45d5-b816-84dd6ffbdab2\") " Dec 03 13:17:15 crc kubenswrapper[4986]: I1203 13:17:15.886892 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/95163eb6-a8f2-45d5-b816-84dd6ffbdab2-scripts\") pod \"95163eb6-a8f2-45d5-b816-84dd6ffbdab2\" (UID: \"95163eb6-a8f2-45d5-b816-84dd6ffbdab2\") " Dec 03 13:17:15 crc kubenswrapper[4986]: I1203 13:17:15.886953 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/95163eb6-a8f2-45d5-b816-84dd6ffbdab2-etc-swift\") pod \"95163eb6-a8f2-45d5-b816-84dd6ffbdab2\" (UID: \"95163eb6-a8f2-45d5-b816-84dd6ffbdab2\") " Dec 03 13:17:15 crc kubenswrapper[4986]: I1203 13:17:15.887987 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95163eb6-a8f2-45d5-b816-84dd6ffbdab2-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "95163eb6-a8f2-45d5-b816-84dd6ffbdab2" (UID: "95163eb6-a8f2-45d5-b816-84dd6ffbdab2"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:17:15 crc kubenswrapper[4986]: I1203 13:17:15.888631 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95163eb6-a8f2-45d5-b816-84dd6ffbdab2-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "95163eb6-a8f2-45d5-b816-84dd6ffbdab2" (UID: "95163eb6-a8f2-45d5-b816-84dd6ffbdab2"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:17:15 crc kubenswrapper[4986]: I1203 13:17:15.891414 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95163eb6-a8f2-45d5-b816-84dd6ffbdab2-kube-api-access-6bw9n" (OuterVolumeSpecName: "kube-api-access-6bw9n") pod "95163eb6-a8f2-45d5-b816-84dd6ffbdab2" (UID: "95163eb6-a8f2-45d5-b816-84dd6ffbdab2"). InnerVolumeSpecName "kube-api-access-6bw9n". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:17:15 crc kubenswrapper[4986]: I1203 13:17:15.897370 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95163eb6-a8f2-45d5-b816-84dd6ffbdab2-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "95163eb6-a8f2-45d5-b816-84dd6ffbdab2" (UID: "95163eb6-a8f2-45d5-b816-84dd6ffbdab2"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:17:15 crc kubenswrapper[4986]: I1203 13:17:15.911832 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95163eb6-a8f2-45d5-b816-84dd6ffbdab2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "95163eb6-a8f2-45d5-b816-84dd6ffbdab2" (UID: "95163eb6-a8f2-45d5-b816-84dd6ffbdab2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:17:15 crc kubenswrapper[4986]: I1203 13:17:15.914767 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95163eb6-a8f2-45d5-b816-84dd6ffbdab2-scripts" (OuterVolumeSpecName: "scripts") pod "95163eb6-a8f2-45d5-b816-84dd6ffbdab2" (UID: "95163eb6-a8f2-45d5-b816-84dd6ffbdab2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:17:15 crc kubenswrapper[4986]: I1203 13:17:15.917844 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95163eb6-a8f2-45d5-b816-84dd6ffbdab2-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "95163eb6-a8f2-45d5-b816-84dd6ffbdab2" (UID: "95163eb6-a8f2-45d5-b816-84dd6ffbdab2"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:17:15 crc kubenswrapper[4986]: I1203 13:17:15.988638 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bw9n\" (UniqueName: \"kubernetes.io/projected/95163eb6-a8f2-45d5-b816-84dd6ffbdab2-kube-api-access-6bw9n\") on node \"crc\" DevicePath \"\"" Dec 03 13:17:15 crc kubenswrapper[4986]: I1203 13:17:15.988666 4986 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/95163eb6-a8f2-45d5-b816-84dd6ffbdab2-swiftconf\") on node \"crc\" DevicePath \"\"" Dec 03 13:17:15 crc kubenswrapper[4986]: I1203 13:17:15.988676 4986 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95163eb6-a8f2-45d5-b816-84dd6ffbdab2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 13:17:15 crc kubenswrapper[4986]: I1203 13:17:15.988685 4986 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/95163eb6-a8f2-45d5-b816-84dd6ffbdab2-ring-data-devices\") on node \"crc\" DevicePath \"\"" Dec 03 13:17:15 crc 
kubenswrapper[4986]: I1203 13:17:15.988694 4986 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/95163eb6-a8f2-45d5-b816-84dd6ffbdab2-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 13:17:15 crc kubenswrapper[4986]: I1203 13:17:15.988702 4986 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/95163eb6-a8f2-45d5-b816-84dd6ffbdab2-dispersionconf\") on node \"crc\" DevicePath \"\"" Dec 03 13:17:15 crc kubenswrapper[4986]: I1203 13:17:15.988709 4986 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/95163eb6-a8f2-45d5-b816-84dd6ffbdab2-etc-swift\") on node \"crc\" DevicePath \"\"" Dec 03 13:17:16 crc kubenswrapper[4986]: I1203 13:17:16.143615 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-sxrg6" Dec 03 13:17:16 crc kubenswrapper[4986]: I1203 13:17:16.145467 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-sxrg6" event={"ID":"95163eb6-a8f2-45d5-b816-84dd6ffbdab2","Type":"ContainerDied","Data":"c370e0eee3e0d72e01ed0a46e52f67b654683a3c4eb3aa9123574205df33842e"} Dec 03 13:17:16 crc kubenswrapper[4986]: I1203 13:17:16.145525 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c370e0eee3e0d72e01ed0a46e52f67b654683a3c4eb3aa9123574205df33842e" Dec 03 13:17:16 crc kubenswrapper[4986]: I1203 13:17:16.148152 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"2c06d0ad-4862-4c0f-9cad-5f29aa8af72a","Type":"ContainerStarted","Data":"516328148bf942450e9d03d0bed19ae37842bd5472c7212555c885fdfbf54b75"} Dec 03 13:17:16 crc kubenswrapper[4986]: I1203 13:17:16.148185 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" 
event={"ID":"2c06d0ad-4862-4c0f-9cad-5f29aa8af72a","Type":"ContainerStarted","Data":"bdd34756187d74e9df63d537608dab74c53d2ff55a833b22e7563ed16afb5dd0"} Dec 03 13:17:16 crc kubenswrapper[4986]: I1203 13:17:16.148911 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Dec 03 13:17:16 crc kubenswrapper[4986]: I1203 13:17:16.177941 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.035245675 podStartE2EDuration="3.177913162s" podCreationTimestamp="2025-12-03 13:17:13 +0000 UTC" firstStartedPulling="2025-12-03 13:17:14.52390303 +0000 UTC m=+1293.990334221" lastFinishedPulling="2025-12-03 13:17:15.666570497 +0000 UTC m=+1295.133001708" observedRunningTime="2025-12-03 13:17:16.165524838 +0000 UTC m=+1295.631956039" watchObservedRunningTime="2025-12-03 13:17:16.177913162 +0000 UTC m=+1295.644344373" Dec 03 13:17:16 crc kubenswrapper[4986]: I1203 13:17:16.510303 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-45czf" Dec 03 13:17:16 crc kubenswrapper[4986]: I1203 13:17:16.512828 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-45czf" Dec 03 13:17:16 crc kubenswrapper[4986]: I1203 13:17:16.744928 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-kqn7t-config-vdn5p"] Dec 03 13:17:16 crc kubenswrapper[4986]: E1203 13:17:16.745227 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95163eb6-a8f2-45d5-b816-84dd6ffbdab2" containerName="swift-ring-rebalance" Dec 03 13:17:16 crc kubenswrapper[4986]: I1203 13:17:16.745244 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="95163eb6-a8f2-45d5-b816-84dd6ffbdab2" containerName="swift-ring-rebalance" Dec 03 13:17:16 crc kubenswrapper[4986]: I1203 13:17:16.745436 4986 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="95163eb6-a8f2-45d5-b816-84dd6ffbdab2" containerName="swift-ring-rebalance" Dec 03 13:17:16 crc kubenswrapper[4986]: I1203 13:17:16.746016 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kqn7t-config-vdn5p" Dec 03 13:17:16 crc kubenswrapper[4986]: I1203 13:17:16.749105 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Dec 03 13:17:16 crc kubenswrapper[4986]: I1203 13:17:16.767070 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-kqn7t-config-vdn5p"] Dec 03 13:17:16 crc kubenswrapper[4986]: I1203 13:17:16.805334 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0cfc00cf-a128-4114-82af-933df88fe42b-var-log-ovn\") pod \"ovn-controller-kqn7t-config-vdn5p\" (UID: \"0cfc00cf-a128-4114-82af-933df88fe42b\") " pod="openstack/ovn-controller-kqn7t-config-vdn5p" Dec 03 13:17:16 crc kubenswrapper[4986]: I1203 13:17:16.805394 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0cfc00cf-a128-4114-82af-933df88fe42b-additional-scripts\") pod \"ovn-controller-kqn7t-config-vdn5p\" (UID: \"0cfc00cf-a128-4114-82af-933df88fe42b\") " pod="openstack/ovn-controller-kqn7t-config-vdn5p" Dec 03 13:17:16 crc kubenswrapper[4986]: I1203 13:17:16.805586 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0cfc00cf-a128-4114-82af-933df88fe42b-var-run\") pod \"ovn-controller-kqn7t-config-vdn5p\" (UID: \"0cfc00cf-a128-4114-82af-933df88fe42b\") " pod="openstack/ovn-controller-kqn7t-config-vdn5p" Dec 03 13:17:16 crc kubenswrapper[4986]: I1203 13:17:16.805633 4986 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0cfc00cf-a128-4114-82af-933df88fe42b-scripts\") pod \"ovn-controller-kqn7t-config-vdn5p\" (UID: \"0cfc00cf-a128-4114-82af-933df88fe42b\") " pod="openstack/ovn-controller-kqn7t-config-vdn5p" Dec 03 13:17:16 crc kubenswrapper[4986]: I1203 13:17:16.805724 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbsj6\" (UniqueName: \"kubernetes.io/projected/0cfc00cf-a128-4114-82af-933df88fe42b-kube-api-access-mbsj6\") pod \"ovn-controller-kqn7t-config-vdn5p\" (UID: \"0cfc00cf-a128-4114-82af-933df88fe42b\") " pod="openstack/ovn-controller-kqn7t-config-vdn5p" Dec 03 13:17:16 crc kubenswrapper[4986]: I1203 13:17:16.805760 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0cfc00cf-a128-4114-82af-933df88fe42b-var-run-ovn\") pod \"ovn-controller-kqn7t-config-vdn5p\" (UID: \"0cfc00cf-a128-4114-82af-933df88fe42b\") " pod="openstack/ovn-controller-kqn7t-config-vdn5p" Dec 03 13:17:16 crc kubenswrapper[4986]: I1203 13:17:16.907507 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbsj6\" (UniqueName: \"kubernetes.io/projected/0cfc00cf-a128-4114-82af-933df88fe42b-kube-api-access-mbsj6\") pod \"ovn-controller-kqn7t-config-vdn5p\" (UID: \"0cfc00cf-a128-4114-82af-933df88fe42b\") " pod="openstack/ovn-controller-kqn7t-config-vdn5p" Dec 03 13:17:16 crc kubenswrapper[4986]: I1203 13:17:16.907966 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0cfc00cf-a128-4114-82af-933df88fe42b-var-run-ovn\") pod \"ovn-controller-kqn7t-config-vdn5p\" (UID: \"0cfc00cf-a128-4114-82af-933df88fe42b\") " pod="openstack/ovn-controller-kqn7t-config-vdn5p" Dec 03 13:17:16 crc 
kubenswrapper[4986]: I1203 13:17:16.908159 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0cfc00cf-a128-4114-82af-933df88fe42b-var-log-ovn\") pod \"ovn-controller-kqn7t-config-vdn5p\" (UID: \"0cfc00cf-a128-4114-82af-933df88fe42b\") " pod="openstack/ovn-controller-kqn7t-config-vdn5p" Dec 03 13:17:16 crc kubenswrapper[4986]: I1203 13:17:16.908202 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0cfc00cf-a128-4114-82af-933df88fe42b-additional-scripts\") pod \"ovn-controller-kqn7t-config-vdn5p\" (UID: \"0cfc00cf-a128-4114-82af-933df88fe42b\") " pod="openstack/ovn-controller-kqn7t-config-vdn5p" Dec 03 13:17:16 crc kubenswrapper[4986]: I1203 13:17:16.908277 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0cfc00cf-a128-4114-82af-933df88fe42b-var-run\") pod \"ovn-controller-kqn7t-config-vdn5p\" (UID: \"0cfc00cf-a128-4114-82af-933df88fe42b\") " pod="openstack/ovn-controller-kqn7t-config-vdn5p" Dec 03 13:17:16 crc kubenswrapper[4986]: I1203 13:17:16.908321 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0cfc00cf-a128-4114-82af-933df88fe42b-scripts\") pod \"ovn-controller-kqn7t-config-vdn5p\" (UID: \"0cfc00cf-a128-4114-82af-933df88fe42b\") " pod="openstack/ovn-controller-kqn7t-config-vdn5p" Dec 03 13:17:16 crc kubenswrapper[4986]: I1203 13:17:16.908447 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0cfc00cf-a128-4114-82af-933df88fe42b-var-log-ovn\") pod \"ovn-controller-kqn7t-config-vdn5p\" (UID: \"0cfc00cf-a128-4114-82af-933df88fe42b\") " pod="openstack/ovn-controller-kqn7t-config-vdn5p" Dec 03 13:17:16 crc kubenswrapper[4986]: I1203 13:17:16.908544 
4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0cfc00cf-a128-4114-82af-933df88fe42b-var-run\") pod \"ovn-controller-kqn7t-config-vdn5p\" (UID: \"0cfc00cf-a128-4114-82af-933df88fe42b\") " pod="openstack/ovn-controller-kqn7t-config-vdn5p" Dec 03 13:17:16 crc kubenswrapper[4986]: I1203 13:17:16.909157 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0cfc00cf-a128-4114-82af-933df88fe42b-additional-scripts\") pod \"ovn-controller-kqn7t-config-vdn5p\" (UID: \"0cfc00cf-a128-4114-82af-933df88fe42b\") " pod="openstack/ovn-controller-kqn7t-config-vdn5p" Dec 03 13:17:16 crc kubenswrapper[4986]: I1203 13:17:16.909249 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0cfc00cf-a128-4114-82af-933df88fe42b-var-run-ovn\") pod \"ovn-controller-kqn7t-config-vdn5p\" (UID: \"0cfc00cf-a128-4114-82af-933df88fe42b\") " pod="openstack/ovn-controller-kqn7t-config-vdn5p" Dec 03 13:17:16 crc kubenswrapper[4986]: I1203 13:17:16.911578 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0cfc00cf-a128-4114-82af-933df88fe42b-scripts\") pod \"ovn-controller-kqn7t-config-vdn5p\" (UID: \"0cfc00cf-a128-4114-82af-933df88fe42b\") " pod="openstack/ovn-controller-kqn7t-config-vdn5p" Dec 03 13:17:16 crc kubenswrapper[4986]: I1203 13:17:16.937118 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbsj6\" (UniqueName: \"kubernetes.io/projected/0cfc00cf-a128-4114-82af-933df88fe42b-kube-api-access-mbsj6\") pod \"ovn-controller-kqn7t-config-vdn5p\" (UID: \"0cfc00cf-a128-4114-82af-933df88fe42b\") " pod="openstack/ovn-controller-kqn7t-config-vdn5p" Dec 03 13:17:17 crc kubenswrapper[4986]: I1203 13:17:17.122888 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-kqn7t-config-vdn5p" Dec 03 13:17:17 crc kubenswrapper[4986]: I1203 13:17:17.161342 4986 generic.go:334] "Generic (PLEG): container finished" podID="5e8a62bd-1e92-464d-b905-8eb18cc44646" containerID="920c3ace47a148c8d0728716a7fd4dea74b6788fc8e1d4cbff7078089dbef14c" exitCode=0 Dec 03 13:17:17 crc kubenswrapper[4986]: I1203 13:17:17.161417 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5e8a62bd-1e92-464d-b905-8eb18cc44646","Type":"ContainerDied","Data":"920c3ace47a148c8d0728716a7fd4dea74b6788fc8e1d4cbff7078089dbef14c"} Dec 03 13:17:17 crc kubenswrapper[4986]: I1203 13:17:17.168087 4986 generic.go:334] "Generic (PLEG): container finished" podID="285bb825-5a17-4d45-87d6-852513d0351b" containerID="42ee92ce1ede62523add70392662fba98598b4a99ffb08d37172d9fc355676d5" exitCode=0 Dec 03 13:17:17 crc kubenswrapper[4986]: I1203 13:17:17.168255 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"285bb825-5a17-4d45-87d6-852513d0351b","Type":"ContainerDied","Data":"42ee92ce1ede62523add70392662fba98598b4a99ffb08d37172d9fc355676d5"} Dec 03 13:17:17 crc kubenswrapper[4986]: I1203 13:17:17.572787 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-kqn7t-config-vdn5p"] Dec 03 13:17:17 crc kubenswrapper[4986]: W1203 13:17:17.574399 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0cfc00cf_a128_4114_82af_933df88fe42b.slice/crio-1b3d9da965c65decf6bc78bbf8676bb2a0e371253ef25d94e3efca20f6acf8bf WatchSource:0}: Error finding container 1b3d9da965c65decf6bc78bbf8676bb2a0e371253ef25d94e3efca20f6acf8bf: Status 404 returned error can't find the container with id 1b3d9da965c65decf6bc78bbf8676bb2a0e371253ef25d94e3efca20f6acf8bf Dec 03 13:17:18 crc kubenswrapper[4986]: I1203 13:17:18.177588 4986 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"285bb825-5a17-4d45-87d6-852513d0351b","Type":"ContainerStarted","Data":"cd77f1dedb74dc55252506af1fb7526b464897f502c56d636536bcccd59e644a"} Dec 03 13:17:18 crc kubenswrapper[4986]: I1203 13:17:18.178067 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 03 13:17:18 crc kubenswrapper[4986]: I1203 13:17:18.179367 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5e8a62bd-1e92-464d-b905-8eb18cc44646","Type":"ContainerStarted","Data":"70937ee4d6412671fece4d919a544662e17c8e49205ef4554e63372362219691"} Dec 03 13:17:18 crc kubenswrapper[4986]: I1203 13:17:18.179579 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 03 13:17:18 crc kubenswrapper[4986]: I1203 13:17:18.181546 4986 generic.go:334] "Generic (PLEG): container finished" podID="0cfc00cf-a128-4114-82af-933df88fe42b" containerID="b951f7cf14f01a5b8c721de9534bb34469eddf3f7b318ff8fe714f44176bdb0d" exitCode=0 Dec 03 13:17:18 crc kubenswrapper[4986]: I1203 13:17:18.181578 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kqn7t-config-vdn5p" event={"ID":"0cfc00cf-a128-4114-82af-933df88fe42b","Type":"ContainerDied","Data":"b951f7cf14f01a5b8c721de9534bb34469eddf3f7b318ff8fe714f44176bdb0d"} Dec 03 13:17:18 crc kubenswrapper[4986]: I1203 13:17:18.181610 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kqn7t-config-vdn5p" event={"ID":"0cfc00cf-a128-4114-82af-933df88fe42b","Type":"ContainerStarted","Data":"1b3d9da965c65decf6bc78bbf8676bb2a0e371253ef25d94e3efca20f6acf8bf"} Dec 03 13:17:18 crc kubenswrapper[4986]: I1203 13:17:18.211367 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=48.548352421 podStartE2EDuration="1m2.211346893s" 
podCreationTimestamp="2025-12-03 13:16:16 +0000 UTC" firstStartedPulling="2025-12-03 13:16:29.359764564 +0000 UTC m=+1248.826195755" lastFinishedPulling="2025-12-03 13:16:43.022759036 +0000 UTC m=+1262.489190227" observedRunningTime="2025-12-03 13:17:18.208237849 +0000 UTC m=+1297.674669050" watchObservedRunningTime="2025-12-03 13:17:18.211346893 +0000 UTC m=+1297.677778084" Dec 03 13:17:18 crc kubenswrapper[4986]: I1203 13:17:18.260267 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=47.554500247 podStartE2EDuration="1m1.260244411s" podCreationTimestamp="2025-12-03 13:16:17 +0000 UTC" firstStartedPulling="2025-12-03 13:16:28.78135416 +0000 UTC m=+1248.247785351" lastFinishedPulling="2025-12-03 13:16:42.487098324 +0000 UTC m=+1261.953529515" observedRunningTime="2025-12-03 13:17:18.251414714 +0000 UTC m=+1297.717845905" watchObservedRunningTime="2025-12-03 13:17:18.260244411 +0000 UTC m=+1297.726675622" Dec 03 13:17:19 crc kubenswrapper[4986]: I1203 13:17:19.497381 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kqn7t-config-vdn5p" Dec 03 13:17:19 crc kubenswrapper[4986]: I1203 13:17:19.667747 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0cfc00cf-a128-4114-82af-933df88fe42b-var-run-ovn\") pod \"0cfc00cf-a128-4114-82af-933df88fe42b\" (UID: \"0cfc00cf-a128-4114-82af-933df88fe42b\") " Dec 03 13:17:19 crc kubenswrapper[4986]: I1203 13:17:19.667926 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0cfc00cf-a128-4114-82af-933df88fe42b-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "0cfc00cf-a128-4114-82af-933df88fe42b" (UID: "0cfc00cf-a128-4114-82af-933df88fe42b"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 13:17:19 crc kubenswrapper[4986]: I1203 13:17:19.667964 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0cfc00cf-a128-4114-82af-933df88fe42b-var-log-ovn\") pod \"0cfc00cf-a128-4114-82af-933df88fe42b\" (UID: \"0cfc00cf-a128-4114-82af-933df88fe42b\") " Dec 03 13:17:19 crc kubenswrapper[4986]: I1203 13:17:19.668020 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0cfc00cf-a128-4114-82af-933df88fe42b-additional-scripts\") pod \"0cfc00cf-a128-4114-82af-933df88fe42b\" (UID: \"0cfc00cf-a128-4114-82af-933df88fe42b\") " Dec 03 13:17:19 crc kubenswrapper[4986]: I1203 13:17:19.668069 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0cfc00cf-a128-4114-82af-933df88fe42b-var-run\") pod \"0cfc00cf-a128-4114-82af-933df88fe42b\" (UID: \"0cfc00cf-a128-4114-82af-933df88fe42b\") " Dec 03 13:17:19 crc kubenswrapper[4986]: I1203 13:17:19.668122 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0cfc00cf-a128-4114-82af-933df88fe42b-scripts\") pod \"0cfc00cf-a128-4114-82af-933df88fe42b\" (UID: \"0cfc00cf-a128-4114-82af-933df88fe42b\") " Dec 03 13:17:19 crc kubenswrapper[4986]: I1203 13:17:19.668147 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbsj6\" (UniqueName: \"kubernetes.io/projected/0cfc00cf-a128-4114-82af-933df88fe42b-kube-api-access-mbsj6\") pod \"0cfc00cf-a128-4114-82af-933df88fe42b\" (UID: \"0cfc00cf-a128-4114-82af-933df88fe42b\") " Dec 03 13:17:19 crc kubenswrapper[4986]: I1203 13:17:19.668271 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/0cfc00cf-a128-4114-82af-933df88fe42b-var-run" (OuterVolumeSpecName: "var-run") pod "0cfc00cf-a128-4114-82af-933df88fe42b" (UID: "0cfc00cf-a128-4114-82af-933df88fe42b"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 13:17:19 crc kubenswrapper[4986]: I1203 13:17:19.668364 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0cfc00cf-a128-4114-82af-933df88fe42b-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "0cfc00cf-a128-4114-82af-933df88fe42b" (UID: "0cfc00cf-a128-4114-82af-933df88fe42b"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 13:17:19 crc kubenswrapper[4986]: I1203 13:17:19.668590 4986 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0cfc00cf-a128-4114-82af-933df88fe42b-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 03 13:17:19 crc kubenswrapper[4986]: I1203 13:17:19.668607 4986 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0cfc00cf-a128-4114-82af-933df88fe42b-var-run\") on node \"crc\" DevicePath \"\"" Dec 03 13:17:19 crc kubenswrapper[4986]: I1203 13:17:19.668619 4986 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0cfc00cf-a128-4114-82af-933df88fe42b-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 03 13:17:19 crc kubenswrapper[4986]: I1203 13:17:19.668915 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0cfc00cf-a128-4114-82af-933df88fe42b-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "0cfc00cf-a128-4114-82af-933df88fe42b" (UID: "0cfc00cf-a128-4114-82af-933df88fe42b"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:17:19 crc kubenswrapper[4986]: I1203 13:17:19.669275 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0cfc00cf-a128-4114-82af-933df88fe42b-scripts" (OuterVolumeSpecName: "scripts") pod "0cfc00cf-a128-4114-82af-933df88fe42b" (UID: "0cfc00cf-a128-4114-82af-933df88fe42b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:17:19 crc kubenswrapper[4986]: I1203 13:17:19.677678 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cfc00cf-a128-4114-82af-933df88fe42b-kube-api-access-mbsj6" (OuterVolumeSpecName: "kube-api-access-mbsj6") pod "0cfc00cf-a128-4114-82af-933df88fe42b" (UID: "0cfc00cf-a128-4114-82af-933df88fe42b"). InnerVolumeSpecName "kube-api-access-mbsj6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:17:19 crc kubenswrapper[4986]: I1203 13:17:19.769788 4986 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0cfc00cf-a128-4114-82af-933df88fe42b-additional-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 13:17:19 crc kubenswrapper[4986]: I1203 13:17:19.770110 4986 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0cfc00cf-a128-4114-82af-933df88fe42b-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 13:17:19 crc kubenswrapper[4986]: I1203 13:17:19.770119 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbsj6\" (UniqueName: \"kubernetes.io/projected/0cfc00cf-a128-4114-82af-933df88fe42b-kube-api-access-mbsj6\") on node \"crc\" DevicePath \"\"" Dec 03 13:17:20 crc kubenswrapper[4986]: I1203 13:17:20.199725 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kqn7t-config-vdn5p" 
event={"ID":"0cfc00cf-a128-4114-82af-933df88fe42b","Type":"ContainerDied","Data":"1b3d9da965c65decf6bc78bbf8676bb2a0e371253ef25d94e3efca20f6acf8bf"} Dec 03 13:17:20 crc kubenswrapper[4986]: I1203 13:17:20.199762 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b3d9da965c65decf6bc78bbf8676bb2a0e371253ef25d94e3efca20f6acf8bf" Dec 03 13:17:20 crc kubenswrapper[4986]: I1203 13:17:20.200227 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kqn7t-config-vdn5p" Dec 03 13:17:20 crc kubenswrapper[4986]: I1203 13:17:20.614079 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-kqn7t-config-vdn5p"] Dec 03 13:17:20 crc kubenswrapper[4986]: I1203 13:17:20.622084 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-kqn7t-config-vdn5p"] Dec 03 13:17:20 crc kubenswrapper[4986]: I1203 13:17:20.954900 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cfc00cf-a128-4114-82af-933df88fe42b" path="/var/lib/kubelet/pods/0cfc00cf-a128-4114-82af-933df88fe42b/volumes" Dec 03 13:17:21 crc kubenswrapper[4986]: I1203 13:17:21.338021 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-3f59-account-create-update-x5h2k"] Dec 03 13:17:21 crc kubenswrapper[4986]: E1203 13:17:21.338372 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cfc00cf-a128-4114-82af-933df88fe42b" containerName="ovn-config" Dec 03 13:17:21 crc kubenswrapper[4986]: I1203 13:17:21.338387 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cfc00cf-a128-4114-82af-933df88fe42b" containerName="ovn-config" Dec 03 13:17:21 crc kubenswrapper[4986]: I1203 13:17:21.338537 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cfc00cf-a128-4114-82af-933df88fe42b" containerName="ovn-config" Dec 03 13:17:21 crc kubenswrapper[4986]: I1203 13:17:21.339035 4986 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3f59-account-create-update-x5h2k" Dec 03 13:17:21 crc kubenswrapper[4986]: I1203 13:17:21.347912 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Dec 03 13:17:21 crc kubenswrapper[4986]: I1203 13:17:21.348838 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-3f59-account-create-update-x5h2k"] Dec 03 13:17:21 crc kubenswrapper[4986]: I1203 13:17:21.395650 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-x5mrn"] Dec 03 13:17:21 crc kubenswrapper[4986]: I1203 13:17:21.397225 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-x5mrn" Dec 03 13:17:21 crc kubenswrapper[4986]: I1203 13:17:21.408973 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-x5mrn"] Dec 03 13:17:21 crc kubenswrapper[4986]: I1203 13:17:21.498007 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47e9face-2119-40b5-a421-74eabeb2971a-operator-scripts\") pod \"keystone-db-create-x5mrn\" (UID: \"47e9face-2119-40b5-a421-74eabeb2971a\") " pod="openstack/keystone-db-create-x5mrn" Dec 03 13:17:21 crc kubenswrapper[4986]: I1203 13:17:21.498149 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39b99d6d-488c-4eba-ae0b-fe1e7d8ec681-operator-scripts\") pod \"keystone-3f59-account-create-update-x5h2k\" (UID: \"39b99d6d-488c-4eba-ae0b-fe1e7d8ec681\") " pod="openstack/keystone-3f59-account-create-update-x5h2k" Dec 03 13:17:21 crc kubenswrapper[4986]: I1203 13:17:21.498191 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht86h\" (UniqueName: 
\"kubernetes.io/projected/47e9face-2119-40b5-a421-74eabeb2971a-kube-api-access-ht86h\") pod \"keystone-db-create-x5mrn\" (UID: \"47e9face-2119-40b5-a421-74eabeb2971a\") " pod="openstack/keystone-db-create-x5mrn" Dec 03 13:17:21 crc kubenswrapper[4986]: I1203 13:17:21.498241 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn7rg\" (UniqueName: \"kubernetes.io/projected/39b99d6d-488c-4eba-ae0b-fe1e7d8ec681-kube-api-access-fn7rg\") pod \"keystone-3f59-account-create-update-x5h2k\" (UID: \"39b99d6d-488c-4eba-ae0b-fe1e7d8ec681\") " pod="openstack/keystone-3f59-account-create-update-x5h2k" Dec 03 13:17:21 crc kubenswrapper[4986]: I1203 13:17:21.599986 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ht86h\" (UniqueName: \"kubernetes.io/projected/47e9face-2119-40b5-a421-74eabeb2971a-kube-api-access-ht86h\") pod \"keystone-db-create-x5mrn\" (UID: \"47e9face-2119-40b5-a421-74eabeb2971a\") " pod="openstack/keystone-db-create-x5mrn" Dec 03 13:17:21 crc kubenswrapper[4986]: I1203 13:17:21.600083 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn7rg\" (UniqueName: \"kubernetes.io/projected/39b99d6d-488c-4eba-ae0b-fe1e7d8ec681-kube-api-access-fn7rg\") pod \"keystone-3f59-account-create-update-x5h2k\" (UID: \"39b99d6d-488c-4eba-ae0b-fe1e7d8ec681\") " pod="openstack/keystone-3f59-account-create-update-x5h2k" Dec 03 13:17:21 crc kubenswrapper[4986]: I1203 13:17:21.600116 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47e9face-2119-40b5-a421-74eabeb2971a-operator-scripts\") pod \"keystone-db-create-x5mrn\" (UID: \"47e9face-2119-40b5-a421-74eabeb2971a\") " pod="openstack/keystone-db-create-x5mrn" Dec 03 13:17:21 crc kubenswrapper[4986]: I1203 13:17:21.600246 4986 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39b99d6d-488c-4eba-ae0b-fe1e7d8ec681-operator-scripts\") pod \"keystone-3f59-account-create-update-x5h2k\" (UID: \"39b99d6d-488c-4eba-ae0b-fe1e7d8ec681\") " pod="openstack/keystone-3f59-account-create-update-x5h2k" Dec 03 13:17:21 crc kubenswrapper[4986]: I1203 13:17:21.600853 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47e9face-2119-40b5-a421-74eabeb2971a-operator-scripts\") pod \"keystone-db-create-x5mrn\" (UID: \"47e9face-2119-40b5-a421-74eabeb2971a\") " pod="openstack/keystone-db-create-x5mrn" Dec 03 13:17:21 crc kubenswrapper[4986]: I1203 13:17:21.600914 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39b99d6d-488c-4eba-ae0b-fe1e7d8ec681-operator-scripts\") pod \"keystone-3f59-account-create-update-x5h2k\" (UID: \"39b99d6d-488c-4eba-ae0b-fe1e7d8ec681\") " pod="openstack/keystone-3f59-account-create-update-x5h2k" Dec 03 13:17:21 crc kubenswrapper[4986]: I1203 13:17:21.605684 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-8k4ww"] Dec 03 13:17:21 crc kubenswrapper[4986]: I1203 13:17:21.606880 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-8k4ww" Dec 03 13:17:21 crc kubenswrapper[4986]: I1203 13:17:21.622575 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-8k4ww"] Dec 03 13:17:21 crc kubenswrapper[4986]: I1203 13:17:21.630582 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ht86h\" (UniqueName: \"kubernetes.io/projected/47e9face-2119-40b5-a421-74eabeb2971a-kube-api-access-ht86h\") pod \"keystone-db-create-x5mrn\" (UID: \"47e9face-2119-40b5-a421-74eabeb2971a\") " pod="openstack/keystone-db-create-x5mrn" Dec 03 13:17:21 crc kubenswrapper[4986]: I1203 13:17:21.630591 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fn7rg\" (UniqueName: \"kubernetes.io/projected/39b99d6d-488c-4eba-ae0b-fe1e7d8ec681-kube-api-access-fn7rg\") pod \"keystone-3f59-account-create-update-x5h2k\" (UID: \"39b99d6d-488c-4eba-ae0b-fe1e7d8ec681\") " pod="openstack/keystone-3f59-account-create-update-x5h2k" Dec 03 13:17:21 crc kubenswrapper[4986]: I1203 13:17:21.659908 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3f59-account-create-update-x5h2k" Dec 03 13:17:21 crc kubenswrapper[4986]: I1203 13:17:21.716102 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-b707-account-create-update-k5hp4"] Dec 03 13:17:21 crc kubenswrapper[4986]: I1203 13:17:21.717039 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-b707-account-create-update-k5hp4" Dec 03 13:17:21 crc kubenswrapper[4986]: I1203 13:17:21.721067 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-x5mrn" Dec 03 13:17:21 crc kubenswrapper[4986]: I1203 13:17:21.730552 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Dec 03 13:17:21 crc kubenswrapper[4986]: I1203 13:17:21.801613 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-b707-account-create-update-k5hp4"] Dec 03 13:17:21 crc kubenswrapper[4986]: I1203 13:17:21.839136 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb8cc\" (UniqueName: \"kubernetes.io/projected/7bdae7ad-2192-4b77-a71a-075565088c9b-kube-api-access-zb8cc\") pod \"placement-b707-account-create-update-k5hp4\" (UID: \"7bdae7ad-2192-4b77-a71a-075565088c9b\") " pod="openstack/placement-b707-account-create-update-k5hp4" Dec 03 13:17:21 crc kubenswrapper[4986]: I1203 13:17:21.839176 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e65336fe-a982-408e-8858-894e6b336af0-operator-scripts\") pod \"placement-db-create-8k4ww\" (UID: \"e65336fe-a982-408e-8858-894e6b336af0\") " pod="openstack/placement-db-create-8k4ww" Dec 03 13:17:21 crc kubenswrapper[4986]: I1203 13:17:21.839204 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7bdae7ad-2192-4b77-a71a-075565088c9b-operator-scripts\") pod \"placement-b707-account-create-update-k5hp4\" (UID: \"7bdae7ad-2192-4b77-a71a-075565088c9b\") " pod="openstack/placement-b707-account-create-update-k5hp4" Dec 03 13:17:21 crc kubenswrapper[4986]: I1203 13:17:21.839254 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lk2k\" (UniqueName: \"kubernetes.io/projected/e65336fe-a982-408e-8858-894e6b336af0-kube-api-access-9lk2k\") pod 
\"placement-db-create-8k4ww\" (UID: \"e65336fe-a982-408e-8858-894e6b336af0\") " pod="openstack/placement-db-create-8k4ww" Dec 03 13:17:21 crc kubenswrapper[4986]: I1203 13:17:21.924221 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-28x7q"] Dec 03 13:17:21 crc kubenswrapper[4986]: I1203 13:17:21.925384 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-28x7q" Dec 03 13:17:21 crc kubenswrapper[4986]: I1203 13:17:21.940550 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lk2k\" (UniqueName: \"kubernetes.io/projected/e65336fe-a982-408e-8858-894e6b336af0-kube-api-access-9lk2k\") pod \"placement-db-create-8k4ww\" (UID: \"e65336fe-a982-408e-8858-894e6b336af0\") " pod="openstack/placement-db-create-8k4ww" Dec 03 13:17:21 crc kubenswrapper[4986]: I1203 13:17:21.940611 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lsrb\" (UniqueName: \"kubernetes.io/projected/ee3e34f6-cd96-4584-a59f-27e2e0d13ddd-kube-api-access-6lsrb\") pod \"glance-db-create-28x7q\" (UID: \"ee3e34f6-cd96-4584-a59f-27e2e0d13ddd\") " pod="openstack/glance-db-create-28x7q" Dec 03 13:17:21 crc kubenswrapper[4986]: I1203 13:17:21.940652 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee3e34f6-cd96-4584-a59f-27e2e0d13ddd-operator-scripts\") pod \"glance-db-create-28x7q\" (UID: \"ee3e34f6-cd96-4584-a59f-27e2e0d13ddd\") " pod="openstack/glance-db-create-28x7q" Dec 03 13:17:21 crc kubenswrapper[4986]: I1203 13:17:21.940741 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zb8cc\" (UniqueName: \"kubernetes.io/projected/7bdae7ad-2192-4b77-a71a-075565088c9b-kube-api-access-zb8cc\") pod \"placement-b707-account-create-update-k5hp4\" (UID: 
\"7bdae7ad-2192-4b77-a71a-075565088c9b\") " pod="openstack/placement-b707-account-create-update-k5hp4" Dec 03 13:17:21 crc kubenswrapper[4986]: I1203 13:17:21.940767 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e65336fe-a982-408e-8858-894e6b336af0-operator-scripts\") pod \"placement-db-create-8k4ww\" (UID: \"e65336fe-a982-408e-8858-894e6b336af0\") " pod="openstack/placement-db-create-8k4ww" Dec 03 13:17:21 crc kubenswrapper[4986]: I1203 13:17:21.940801 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7bdae7ad-2192-4b77-a71a-075565088c9b-operator-scripts\") pod \"placement-b707-account-create-update-k5hp4\" (UID: \"7bdae7ad-2192-4b77-a71a-075565088c9b\") " pod="openstack/placement-b707-account-create-update-k5hp4" Dec 03 13:17:21 crc kubenswrapper[4986]: I1203 13:17:21.941753 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7bdae7ad-2192-4b77-a71a-075565088c9b-operator-scripts\") pod \"placement-b707-account-create-update-k5hp4\" (UID: \"7bdae7ad-2192-4b77-a71a-075565088c9b\") " pod="openstack/placement-b707-account-create-update-k5hp4" Dec 03 13:17:21 crc kubenswrapper[4986]: I1203 13:17:21.942407 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e65336fe-a982-408e-8858-894e6b336af0-operator-scripts\") pod \"placement-db-create-8k4ww\" (UID: \"e65336fe-a982-408e-8858-894e6b336af0\") " pod="openstack/placement-db-create-8k4ww" Dec 03 13:17:21 crc kubenswrapper[4986]: I1203 13:17:21.981639 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lk2k\" (UniqueName: \"kubernetes.io/projected/e65336fe-a982-408e-8858-894e6b336af0-kube-api-access-9lk2k\") pod \"placement-db-create-8k4ww\" 
(UID: \"e65336fe-a982-408e-8858-894e6b336af0\") " pod="openstack/placement-db-create-8k4ww" Dec 03 13:17:22 crc kubenswrapper[4986]: I1203 13:17:22.006460 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zb8cc\" (UniqueName: \"kubernetes.io/projected/7bdae7ad-2192-4b77-a71a-075565088c9b-kube-api-access-zb8cc\") pod \"placement-b707-account-create-update-k5hp4\" (UID: \"7bdae7ad-2192-4b77-a71a-075565088c9b\") " pod="openstack/placement-b707-account-create-update-k5hp4" Dec 03 13:17:22 crc kubenswrapper[4986]: I1203 13:17:22.021398 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-28x7q"] Dec 03 13:17:22 crc kubenswrapper[4986]: I1203 13:17:22.042121 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee3e34f6-cd96-4584-a59f-27e2e0d13ddd-operator-scripts\") pod \"glance-db-create-28x7q\" (UID: \"ee3e34f6-cd96-4584-a59f-27e2e0d13ddd\") " pod="openstack/glance-db-create-28x7q" Dec 03 13:17:22 crc kubenswrapper[4986]: I1203 13:17:22.043120 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee3e34f6-cd96-4584-a59f-27e2e0d13ddd-operator-scripts\") pod \"glance-db-create-28x7q\" (UID: \"ee3e34f6-cd96-4584-a59f-27e2e0d13ddd\") " pod="openstack/glance-db-create-28x7q" Dec 03 13:17:22 crc kubenswrapper[4986]: I1203 13:17:22.043590 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lsrb\" (UniqueName: \"kubernetes.io/projected/ee3e34f6-cd96-4584-a59f-27e2e0d13ddd-kube-api-access-6lsrb\") pod \"glance-db-create-28x7q\" (UID: \"ee3e34f6-cd96-4584-a59f-27e2e0d13ddd\") " pod="openstack/glance-db-create-28x7q" Dec 03 13:17:22 crc kubenswrapper[4986]: I1203 13:17:22.054026 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-fabf-account-create-update-hhmms"] Dec 03 
13:17:22 crc kubenswrapper[4986]: I1203 13:17:22.055240 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-fabf-account-create-update-hhmms" Dec 03 13:17:22 crc kubenswrapper[4986]: I1203 13:17:22.069599 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lsrb\" (UniqueName: \"kubernetes.io/projected/ee3e34f6-cd96-4584-a59f-27e2e0d13ddd-kube-api-access-6lsrb\") pod \"glance-db-create-28x7q\" (UID: \"ee3e34f6-cd96-4584-a59f-27e2e0d13ddd\") " pod="openstack/glance-db-create-28x7q" Dec 03 13:17:22 crc kubenswrapper[4986]: I1203 13:17:22.071220 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Dec 03 13:17:22 crc kubenswrapper[4986]: I1203 13:17:22.095644 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-fabf-account-create-update-hhmms"] Dec 03 13:17:22 crc kubenswrapper[4986]: I1203 13:17:22.144862 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50f31d81-05b7-43b6-b74a-07cd1a9f90b4-operator-scripts\") pod \"glance-fabf-account-create-update-hhmms\" (UID: \"50f31d81-05b7-43b6-b74a-07cd1a9f90b4\") " pod="openstack/glance-fabf-account-create-update-hhmms" Dec 03 13:17:22 crc kubenswrapper[4986]: I1203 13:17:22.144943 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwxtv\" (UniqueName: \"kubernetes.io/projected/50f31d81-05b7-43b6-b74a-07cd1a9f90b4-kube-api-access-zwxtv\") pod \"glance-fabf-account-create-update-hhmms\" (UID: \"50f31d81-05b7-43b6-b74a-07cd1a9f90b4\") " pod="openstack/glance-fabf-account-create-update-hhmms" Dec 03 13:17:22 crc kubenswrapper[4986]: I1203 13:17:22.163948 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-b707-account-create-update-k5hp4" Dec 03 13:17:22 crc kubenswrapper[4986]: I1203 13:17:22.222877 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-8k4ww" Dec 03 13:17:22 crc kubenswrapper[4986]: I1203 13:17:22.246094 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50f31d81-05b7-43b6-b74a-07cd1a9f90b4-operator-scripts\") pod \"glance-fabf-account-create-update-hhmms\" (UID: \"50f31d81-05b7-43b6-b74a-07cd1a9f90b4\") " pod="openstack/glance-fabf-account-create-update-hhmms" Dec 03 13:17:22 crc kubenswrapper[4986]: I1203 13:17:22.246178 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwxtv\" (UniqueName: \"kubernetes.io/projected/50f31d81-05b7-43b6-b74a-07cd1a9f90b4-kube-api-access-zwxtv\") pod \"glance-fabf-account-create-update-hhmms\" (UID: \"50f31d81-05b7-43b6-b74a-07cd1a9f90b4\") " pod="openstack/glance-fabf-account-create-update-hhmms" Dec 03 13:17:22 crc kubenswrapper[4986]: I1203 13:17:22.246992 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50f31d81-05b7-43b6-b74a-07cd1a9f90b4-operator-scripts\") pod \"glance-fabf-account-create-update-hhmms\" (UID: \"50f31d81-05b7-43b6-b74a-07cd1a9f90b4\") " pod="openstack/glance-fabf-account-create-update-hhmms" Dec 03 13:17:22 crc kubenswrapper[4986]: I1203 13:17:22.252222 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-28x7q" Dec 03 13:17:22 crc kubenswrapper[4986]: I1203 13:17:22.264171 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwxtv\" (UniqueName: \"kubernetes.io/projected/50f31d81-05b7-43b6-b74a-07cd1a9f90b4-kube-api-access-zwxtv\") pod \"glance-fabf-account-create-update-hhmms\" (UID: \"50f31d81-05b7-43b6-b74a-07cd1a9f90b4\") " pod="openstack/glance-fabf-account-create-update-hhmms" Dec 03 13:17:22 crc kubenswrapper[4986]: I1203 13:17:22.365658 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-x5mrn"] Dec 03 13:17:22 crc kubenswrapper[4986]: W1203 13:17:22.372897 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47e9face_2119_40b5_a421_74eabeb2971a.slice/crio-ec77b3b7d07bb410db6521ce74fa81619a30b77e1520f901b572f7dc2547db22 WatchSource:0}: Error finding container ec77b3b7d07bb410db6521ce74fa81619a30b77e1520f901b572f7dc2547db22: Status 404 returned error can't find the container with id ec77b3b7d07bb410db6521ce74fa81619a30b77e1520f901b572f7dc2547db22 Dec 03 13:17:22 crc kubenswrapper[4986]: I1203 13:17:22.386901 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-fabf-account-create-update-hhmms" Dec 03 13:17:22 crc kubenswrapper[4986]: I1203 13:17:22.453021 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-3f59-account-create-update-x5h2k"] Dec 03 13:17:22 crc kubenswrapper[4986]: W1203 13:17:22.674398 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bdae7ad_2192_4b77_a71a_075565088c9b.slice/crio-064d18a6a6d960812fb7f2c1fd74999b6354654d9e2826e38ceb0967d4ec920a WatchSource:0}: Error finding container 064d18a6a6d960812fb7f2c1fd74999b6354654d9e2826e38ceb0967d4ec920a: Status 404 returned error can't find the container with id 064d18a6a6d960812fb7f2c1fd74999b6354654d9e2826e38ceb0967d4ec920a Dec 03 13:17:22 crc kubenswrapper[4986]: I1203 13:17:22.694041 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-b707-account-create-update-k5hp4"] Dec 03 13:17:22 crc kubenswrapper[4986]: I1203 13:17:22.741980 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-8k4ww"] Dec 03 13:17:22 crc kubenswrapper[4986]: I1203 13:17:22.828335 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-28x7q"] Dec 03 13:17:22 crc kubenswrapper[4986]: W1203 13:17:22.837642 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee3e34f6_cd96_4584_a59f_27e2e0d13ddd.slice/crio-d4455847775f7c071371555c84a3e5709f2db8fa8596c3c05f74033a2f1b8ad5 WatchSource:0}: Error finding container d4455847775f7c071371555c84a3e5709f2db8fa8596c3c05f74033a2f1b8ad5: Status 404 returned error can't find the container with id d4455847775f7c071371555c84a3e5709f2db8fa8596c3c05f74033a2f1b8ad5 Dec 03 13:17:22 crc kubenswrapper[4986]: I1203 13:17:22.909406 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/glance-fabf-account-create-update-hhmms"] Dec 03 13:17:22 crc kubenswrapper[4986]: W1203 13:17:22.999791 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50f31d81_05b7_43b6_b74a_07cd1a9f90b4.slice/crio-1eef3d80747543762b2ad7e248e25c8e38bdd8b80768657b4aaf9d5d8ccdfd78 WatchSource:0}: Error finding container 1eef3d80747543762b2ad7e248e25c8e38bdd8b80768657b4aaf9d5d8ccdfd78: Status 404 returned error can't find the container with id 1eef3d80747543762b2ad7e248e25c8e38bdd8b80768657b4aaf9d5d8ccdfd78 Dec 03 13:17:23 crc kubenswrapper[4986]: I1203 13:17:23.226563 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-b707-account-create-update-k5hp4" event={"ID":"7bdae7ad-2192-4b77-a71a-075565088c9b","Type":"ContainerStarted","Data":"1ab9d00e0ff7b5c63de8c4b1518e1354faed78b2b09c9f3b82deb2a2bf8ba4d9"} Dec 03 13:17:23 crc kubenswrapper[4986]: I1203 13:17:23.226667 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-b707-account-create-update-k5hp4" event={"ID":"7bdae7ad-2192-4b77-a71a-075565088c9b","Type":"ContainerStarted","Data":"064d18a6a6d960812fb7f2c1fd74999b6354654d9e2826e38ceb0967d4ec920a"} Dec 03 13:17:23 crc kubenswrapper[4986]: I1203 13:17:23.228448 4986 generic.go:334] "Generic (PLEG): container finished" podID="39b99d6d-488c-4eba-ae0b-fe1e7d8ec681" containerID="f21d8c8045fad323b71f79041de6c44cb4ab826c5d615d90ebebe53fdba0c4a5" exitCode=0 Dec 03 13:17:23 crc kubenswrapper[4986]: I1203 13:17:23.228548 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3f59-account-create-update-x5h2k" event={"ID":"39b99d6d-488c-4eba-ae0b-fe1e7d8ec681","Type":"ContainerDied","Data":"f21d8c8045fad323b71f79041de6c44cb4ab826c5d615d90ebebe53fdba0c4a5"} Dec 03 13:17:23 crc kubenswrapper[4986]: I1203 13:17:23.228592 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-3f59-account-create-update-x5h2k" event={"ID":"39b99d6d-488c-4eba-ae0b-fe1e7d8ec681","Type":"ContainerStarted","Data":"34604aebe634d304a7a48b941a70677919d3c83e7901316b4d16c98f4ab537a1"} Dec 03 13:17:23 crc kubenswrapper[4986]: I1203 13:17:23.230433 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-fabf-account-create-update-hhmms" event={"ID":"50f31d81-05b7-43b6-b74a-07cd1a9f90b4","Type":"ContainerStarted","Data":"6de88ecfba6dbb362b821f8765bc09066efc3682b0a90efe23287b844163c29b"} Dec 03 13:17:23 crc kubenswrapper[4986]: I1203 13:17:23.230471 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-fabf-account-create-update-hhmms" event={"ID":"50f31d81-05b7-43b6-b74a-07cd1a9f90b4","Type":"ContainerStarted","Data":"1eef3d80747543762b2ad7e248e25c8e38bdd8b80768657b4aaf9d5d8ccdfd78"} Dec 03 13:17:23 crc kubenswrapper[4986]: I1203 13:17:23.232741 4986 generic.go:334] "Generic (PLEG): container finished" podID="47e9face-2119-40b5-a421-74eabeb2971a" containerID="0f944103ad3896b9d879413f5384fc2fa00d73b5882cf87ae2aa7a25f3069320" exitCode=0 Dec 03 13:17:23 crc kubenswrapper[4986]: I1203 13:17:23.232847 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-x5mrn" event={"ID":"47e9face-2119-40b5-a421-74eabeb2971a","Type":"ContainerDied","Data":"0f944103ad3896b9d879413f5384fc2fa00d73b5882cf87ae2aa7a25f3069320"} Dec 03 13:17:23 crc kubenswrapper[4986]: I1203 13:17:23.232891 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-x5mrn" event={"ID":"47e9face-2119-40b5-a421-74eabeb2971a","Type":"ContainerStarted","Data":"ec77b3b7d07bb410db6521ce74fa81619a30b77e1520f901b572f7dc2547db22"} Dec 03 13:17:23 crc kubenswrapper[4986]: I1203 13:17:23.234586 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-8k4ww" 
event={"ID":"e65336fe-a982-408e-8858-894e6b336af0","Type":"ContainerStarted","Data":"6ed6e0cec26c36bf59dd08dc66d9114352fac63214e60adde6faa248467decea"} Dec 03 13:17:23 crc kubenswrapper[4986]: I1203 13:17:23.234635 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-8k4ww" event={"ID":"e65336fe-a982-408e-8858-894e6b336af0","Type":"ContainerStarted","Data":"e573d9e1a46a96d0ddfa98d4a093f013fe0deeaa5e47dc569399d9361907bce7"} Dec 03 13:17:23 crc kubenswrapper[4986]: I1203 13:17:23.236965 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-28x7q" event={"ID":"ee3e34f6-cd96-4584-a59f-27e2e0d13ddd","Type":"ContainerStarted","Data":"8440bfded328d55648f43303fa627aa731e69eba9b5e5f8bec2d88552b71ba8b"} Dec 03 13:17:23 crc kubenswrapper[4986]: I1203 13:17:23.236996 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-28x7q" event={"ID":"ee3e34f6-cd96-4584-a59f-27e2e0d13ddd","Type":"ContainerStarted","Data":"d4455847775f7c071371555c84a3e5709f2db8fa8596c3c05f74033a2f1b8ad5"} Dec 03 13:17:23 crc kubenswrapper[4986]: I1203 13:17:23.250817 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-b707-account-create-update-k5hp4" podStartSLOduration=2.250798935 podStartE2EDuration="2.250798935s" podCreationTimestamp="2025-12-03 13:17:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:17:23.24761888 +0000 UTC m=+1302.714050081" watchObservedRunningTime="2025-12-03 13:17:23.250798935 +0000 UTC m=+1302.717230126" Dec 03 13:17:23 crc kubenswrapper[4986]: I1203 13:17:23.290433 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-fabf-account-create-update-hhmms" podStartSLOduration=1.290413854 podStartE2EDuration="1.290413854s" podCreationTimestamp="2025-12-03 13:17:22 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:17:23.284097053 +0000 UTC m=+1302.750528244" watchObservedRunningTime="2025-12-03 13:17:23.290413854 +0000 UTC m=+1302.756845045" Dec 03 13:17:23 crc kubenswrapper[4986]: I1203 13:17:23.305528 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-28x7q" podStartSLOduration=2.3055065900000002 podStartE2EDuration="2.30550659s" podCreationTimestamp="2025-12-03 13:17:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:17:23.30288307 +0000 UTC m=+1302.769314261" watchObservedRunningTime="2025-12-03 13:17:23.30550659 +0000 UTC m=+1302.771937781" Dec 03 13:17:23 crc kubenswrapper[4986]: I1203 13:17:23.331547 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-8k4ww" podStartSLOduration=2.331529782 podStartE2EDuration="2.331529782s" podCreationTimestamp="2025-12-03 13:17:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:17:23.328392738 +0000 UTC m=+1302.794823949" watchObservedRunningTime="2025-12-03 13:17:23.331529782 +0000 UTC m=+1302.797960973" Dec 03 13:17:24 crc kubenswrapper[4986]: I1203 13:17:24.246547 4986 generic.go:334] "Generic (PLEG): container finished" podID="e65336fe-a982-408e-8858-894e6b336af0" containerID="6ed6e0cec26c36bf59dd08dc66d9114352fac63214e60adde6faa248467decea" exitCode=0 Dec 03 13:17:24 crc kubenswrapper[4986]: I1203 13:17:24.246673 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-8k4ww" event={"ID":"e65336fe-a982-408e-8858-894e6b336af0","Type":"ContainerDied","Data":"6ed6e0cec26c36bf59dd08dc66d9114352fac63214e60adde6faa248467decea"} Dec 03 13:17:24 crc 
kubenswrapper[4986]: I1203 13:17:24.249894 4986 generic.go:334] "Generic (PLEG): container finished" podID="ee3e34f6-cd96-4584-a59f-27e2e0d13ddd" containerID="8440bfded328d55648f43303fa627aa731e69eba9b5e5f8bec2d88552b71ba8b" exitCode=0 Dec 03 13:17:24 crc kubenswrapper[4986]: I1203 13:17:24.249959 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-28x7q" event={"ID":"ee3e34f6-cd96-4584-a59f-27e2e0d13ddd","Type":"ContainerDied","Data":"8440bfded328d55648f43303fa627aa731e69eba9b5e5f8bec2d88552b71ba8b"} Dec 03 13:17:24 crc kubenswrapper[4986]: I1203 13:17:24.252342 4986 generic.go:334] "Generic (PLEG): container finished" podID="7bdae7ad-2192-4b77-a71a-075565088c9b" containerID="1ab9d00e0ff7b5c63de8c4b1518e1354faed78b2b09c9f3b82deb2a2bf8ba4d9" exitCode=0 Dec 03 13:17:24 crc kubenswrapper[4986]: I1203 13:17:24.252405 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-b707-account-create-update-k5hp4" event={"ID":"7bdae7ad-2192-4b77-a71a-075565088c9b","Type":"ContainerDied","Data":"1ab9d00e0ff7b5c63de8c4b1518e1354faed78b2b09c9f3b82deb2a2bf8ba4d9"} Dec 03 13:17:24 crc kubenswrapper[4986]: I1203 13:17:24.258120 4986 generic.go:334] "Generic (PLEG): container finished" podID="50f31d81-05b7-43b6-b74a-07cd1a9f90b4" containerID="6de88ecfba6dbb362b821f8765bc09066efc3682b0a90efe23287b844163c29b" exitCode=0 Dec 03 13:17:24 crc kubenswrapper[4986]: I1203 13:17:24.258204 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-fabf-account-create-update-hhmms" event={"ID":"50f31d81-05b7-43b6-b74a-07cd1a9f90b4","Type":"ContainerDied","Data":"6de88ecfba6dbb362b821f8765bc09066efc3682b0a90efe23287b844163c29b"} Dec 03 13:17:24 crc kubenswrapper[4986]: I1203 13:17:24.697642 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-3f59-account-create-update-x5h2k" Dec 03 13:17:24 crc kubenswrapper[4986]: I1203 13:17:24.701900 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-x5mrn" Dec 03 13:17:24 crc kubenswrapper[4986]: I1203 13:17:24.896042 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47e9face-2119-40b5-a421-74eabeb2971a-operator-scripts\") pod \"47e9face-2119-40b5-a421-74eabeb2971a\" (UID: \"47e9face-2119-40b5-a421-74eabeb2971a\") " Dec 03 13:17:24 crc kubenswrapper[4986]: I1203 13:17:24.896129 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ht86h\" (UniqueName: \"kubernetes.io/projected/47e9face-2119-40b5-a421-74eabeb2971a-kube-api-access-ht86h\") pod \"47e9face-2119-40b5-a421-74eabeb2971a\" (UID: \"47e9face-2119-40b5-a421-74eabeb2971a\") " Dec 03 13:17:24 crc kubenswrapper[4986]: I1203 13:17:24.896184 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fn7rg\" (UniqueName: \"kubernetes.io/projected/39b99d6d-488c-4eba-ae0b-fe1e7d8ec681-kube-api-access-fn7rg\") pod \"39b99d6d-488c-4eba-ae0b-fe1e7d8ec681\" (UID: \"39b99d6d-488c-4eba-ae0b-fe1e7d8ec681\") " Dec 03 13:17:24 crc kubenswrapper[4986]: I1203 13:17:24.896264 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39b99d6d-488c-4eba-ae0b-fe1e7d8ec681-operator-scripts\") pod \"39b99d6d-488c-4eba-ae0b-fe1e7d8ec681\" (UID: \"39b99d6d-488c-4eba-ae0b-fe1e7d8ec681\") " Dec 03 13:17:24 crc kubenswrapper[4986]: I1203 13:17:24.897156 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39b99d6d-488c-4eba-ae0b-fe1e7d8ec681-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"39b99d6d-488c-4eba-ae0b-fe1e7d8ec681" (UID: "39b99d6d-488c-4eba-ae0b-fe1e7d8ec681"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:17:24 crc kubenswrapper[4986]: I1203 13:17:24.897180 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47e9face-2119-40b5-a421-74eabeb2971a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "47e9face-2119-40b5-a421-74eabeb2971a" (UID: "47e9face-2119-40b5-a421-74eabeb2971a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:17:24 crc kubenswrapper[4986]: I1203 13:17:24.901720 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47e9face-2119-40b5-a421-74eabeb2971a-kube-api-access-ht86h" (OuterVolumeSpecName: "kube-api-access-ht86h") pod "47e9face-2119-40b5-a421-74eabeb2971a" (UID: "47e9face-2119-40b5-a421-74eabeb2971a"). InnerVolumeSpecName "kube-api-access-ht86h". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:17:24 crc kubenswrapper[4986]: I1203 13:17:24.903564 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39b99d6d-488c-4eba-ae0b-fe1e7d8ec681-kube-api-access-fn7rg" (OuterVolumeSpecName: "kube-api-access-fn7rg") pod "39b99d6d-488c-4eba-ae0b-fe1e7d8ec681" (UID: "39b99d6d-488c-4eba-ae0b-fe1e7d8ec681"). InnerVolumeSpecName "kube-api-access-fn7rg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:17:24 crc kubenswrapper[4986]: I1203 13:17:24.998543 4986 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47e9face-2119-40b5-a421-74eabeb2971a-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 13:17:24 crc kubenswrapper[4986]: I1203 13:17:24.998861 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ht86h\" (UniqueName: \"kubernetes.io/projected/47e9face-2119-40b5-a421-74eabeb2971a-kube-api-access-ht86h\") on node \"crc\" DevicePath \"\"" Dec 03 13:17:24 crc kubenswrapper[4986]: I1203 13:17:24.998878 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fn7rg\" (UniqueName: \"kubernetes.io/projected/39b99d6d-488c-4eba-ae0b-fe1e7d8ec681-kube-api-access-fn7rg\") on node \"crc\" DevicePath \"\"" Dec 03 13:17:24 crc kubenswrapper[4986]: I1203 13:17:24.998891 4986 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39b99d6d-488c-4eba-ae0b-fe1e7d8ec681-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 13:17:25 crc kubenswrapper[4986]: I1203 13:17:25.268047 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-3f59-account-create-update-x5h2k" Dec 03 13:17:25 crc kubenswrapper[4986]: I1203 13:17:25.268065 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3f59-account-create-update-x5h2k" event={"ID":"39b99d6d-488c-4eba-ae0b-fe1e7d8ec681","Type":"ContainerDied","Data":"34604aebe634d304a7a48b941a70677919d3c83e7901316b4d16c98f4ab537a1"} Dec 03 13:17:25 crc kubenswrapper[4986]: I1203 13:17:25.268105 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34604aebe634d304a7a48b941a70677919d3c83e7901316b4d16c98f4ab537a1" Dec 03 13:17:25 crc kubenswrapper[4986]: I1203 13:17:25.271612 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-x5mrn" Dec 03 13:17:25 crc kubenswrapper[4986]: I1203 13:17:25.272200 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-x5mrn" event={"ID":"47e9face-2119-40b5-a421-74eabeb2971a","Type":"ContainerDied","Data":"ec77b3b7d07bb410db6521ce74fa81619a30b77e1520f901b572f7dc2547db22"} Dec 03 13:17:25 crc kubenswrapper[4986]: I1203 13:17:25.272260 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec77b3b7d07bb410db6521ce74fa81619a30b77e1520f901b572f7dc2547db22" Dec 03 13:17:25 crc kubenswrapper[4986]: I1203 13:17:25.770050 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-fabf-account-create-update-hhmms" Dec 03 13:17:25 crc kubenswrapper[4986]: I1203 13:17:25.775371 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-b707-account-create-update-k5hp4" Dec 03 13:17:25 crc kubenswrapper[4986]: I1203 13:17:25.780179 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-8k4ww" Dec 03 13:17:25 crc kubenswrapper[4986]: I1203 13:17:25.786713 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-28x7q" Dec 03 13:17:25 crc kubenswrapper[4986]: I1203 13:17:25.912934 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee3e34f6-cd96-4584-a59f-27e2e0d13ddd-operator-scripts\") pod \"ee3e34f6-cd96-4584-a59f-27e2e0d13ddd\" (UID: \"ee3e34f6-cd96-4584-a59f-27e2e0d13ddd\") " Dec 03 13:17:25 crc kubenswrapper[4986]: I1203 13:17:25.912971 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50f31d81-05b7-43b6-b74a-07cd1a9f90b4-operator-scripts\") pod \"50f31d81-05b7-43b6-b74a-07cd1a9f90b4\" (UID: \"50f31d81-05b7-43b6-b74a-07cd1a9f90b4\") " Dec 03 13:17:25 crc kubenswrapper[4986]: I1203 13:17:25.913021 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zb8cc\" (UniqueName: \"kubernetes.io/projected/7bdae7ad-2192-4b77-a71a-075565088c9b-kube-api-access-zb8cc\") pod \"7bdae7ad-2192-4b77-a71a-075565088c9b\" (UID: \"7bdae7ad-2192-4b77-a71a-075565088c9b\") " Dec 03 13:17:25 crc kubenswrapper[4986]: I1203 13:17:25.913664 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee3e34f6-cd96-4584-a59f-27e2e0d13ddd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ee3e34f6-cd96-4584-a59f-27e2e0d13ddd" (UID: "ee3e34f6-cd96-4584-a59f-27e2e0d13ddd"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:17:25 crc kubenswrapper[4986]: I1203 13:17:25.913716 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50f31d81-05b7-43b6-b74a-07cd1a9f90b4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "50f31d81-05b7-43b6-b74a-07cd1a9f90b4" (UID: "50f31d81-05b7-43b6-b74a-07cd1a9f90b4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:17:25 crc kubenswrapper[4986]: I1203 13:17:25.913748 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e65336fe-a982-408e-8858-894e6b336af0-operator-scripts\") pod \"e65336fe-a982-408e-8858-894e6b336af0\" (UID: \"e65336fe-a982-408e-8858-894e6b336af0\") " Dec 03 13:17:25 crc kubenswrapper[4986]: I1203 13:17:25.914228 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e65336fe-a982-408e-8858-894e6b336af0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e65336fe-a982-408e-8858-894e6b336af0" (UID: "e65336fe-a982-408e-8858-894e6b336af0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:17:25 crc kubenswrapper[4986]: I1203 13:17:25.914265 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lk2k\" (UniqueName: \"kubernetes.io/projected/e65336fe-a982-408e-8858-894e6b336af0-kube-api-access-9lk2k\") pod \"e65336fe-a982-408e-8858-894e6b336af0\" (UID: \"e65336fe-a982-408e-8858-894e6b336af0\") " Dec 03 13:17:25 crc kubenswrapper[4986]: I1203 13:17:25.914326 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7bdae7ad-2192-4b77-a71a-075565088c9b-operator-scripts\") pod \"7bdae7ad-2192-4b77-a71a-075565088c9b\" (UID: \"7bdae7ad-2192-4b77-a71a-075565088c9b\") " Dec 03 13:17:25 crc kubenswrapper[4986]: I1203 13:17:25.914636 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bdae7ad-2192-4b77-a71a-075565088c9b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7bdae7ad-2192-4b77-a71a-075565088c9b" (UID: "7bdae7ad-2192-4b77-a71a-075565088c9b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:17:25 crc kubenswrapper[4986]: I1203 13:17:25.914671 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lsrb\" (UniqueName: \"kubernetes.io/projected/ee3e34f6-cd96-4584-a59f-27e2e0d13ddd-kube-api-access-6lsrb\") pod \"ee3e34f6-cd96-4584-a59f-27e2e0d13ddd\" (UID: \"ee3e34f6-cd96-4584-a59f-27e2e0d13ddd\") " Dec 03 13:17:25 crc kubenswrapper[4986]: I1203 13:17:25.915029 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwxtv\" (UniqueName: \"kubernetes.io/projected/50f31d81-05b7-43b6-b74a-07cd1a9f90b4-kube-api-access-zwxtv\") pod \"50f31d81-05b7-43b6-b74a-07cd1a9f90b4\" (UID: \"50f31d81-05b7-43b6-b74a-07cd1a9f90b4\") " Dec 03 13:17:25 crc kubenswrapper[4986]: I1203 13:17:25.915375 4986 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50f31d81-05b7-43b6-b74a-07cd1a9f90b4-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 13:17:25 crc kubenswrapper[4986]: I1203 13:17:25.915396 4986 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee3e34f6-cd96-4584-a59f-27e2e0d13ddd-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 13:17:25 crc kubenswrapper[4986]: I1203 13:17:25.915411 4986 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e65336fe-a982-408e-8858-894e6b336af0-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 13:17:25 crc kubenswrapper[4986]: I1203 13:17:25.915421 4986 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7bdae7ad-2192-4b77-a71a-075565088c9b-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 13:17:25 crc kubenswrapper[4986]: I1203 13:17:25.918489 4986 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/projected/7bdae7ad-2192-4b77-a71a-075565088c9b-kube-api-access-zb8cc" (OuterVolumeSpecName: "kube-api-access-zb8cc") pod "7bdae7ad-2192-4b77-a71a-075565088c9b" (UID: "7bdae7ad-2192-4b77-a71a-075565088c9b"). InnerVolumeSpecName "kube-api-access-zb8cc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:17:25 crc kubenswrapper[4986]: I1203 13:17:25.918537 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e65336fe-a982-408e-8858-894e6b336af0-kube-api-access-9lk2k" (OuterVolumeSpecName: "kube-api-access-9lk2k") pod "e65336fe-a982-408e-8858-894e6b336af0" (UID: "e65336fe-a982-408e-8858-894e6b336af0"). InnerVolumeSpecName "kube-api-access-9lk2k". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:17:25 crc kubenswrapper[4986]: I1203 13:17:25.918612 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee3e34f6-cd96-4584-a59f-27e2e0d13ddd-kube-api-access-6lsrb" (OuterVolumeSpecName: "kube-api-access-6lsrb") pod "ee3e34f6-cd96-4584-a59f-27e2e0d13ddd" (UID: "ee3e34f6-cd96-4584-a59f-27e2e0d13ddd"). InnerVolumeSpecName "kube-api-access-6lsrb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:17:25 crc kubenswrapper[4986]: I1203 13:17:25.919391 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50f31d81-05b7-43b6-b74a-07cd1a9f90b4-kube-api-access-zwxtv" (OuterVolumeSpecName: "kube-api-access-zwxtv") pod "50f31d81-05b7-43b6-b74a-07cd1a9f90b4" (UID: "50f31d81-05b7-43b6-b74a-07cd1a9f90b4"). InnerVolumeSpecName "kube-api-access-zwxtv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:17:26 crc kubenswrapper[4986]: I1203 13:17:26.017244 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lsrb\" (UniqueName: \"kubernetes.io/projected/ee3e34f6-cd96-4584-a59f-27e2e0d13ddd-kube-api-access-6lsrb\") on node \"crc\" DevicePath \"\"" Dec 03 13:17:26 crc kubenswrapper[4986]: I1203 13:17:26.017279 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwxtv\" (UniqueName: \"kubernetes.io/projected/50f31d81-05b7-43b6-b74a-07cd1a9f90b4-kube-api-access-zwxtv\") on node \"crc\" DevicePath \"\"" Dec 03 13:17:26 crc kubenswrapper[4986]: I1203 13:17:26.017308 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zb8cc\" (UniqueName: \"kubernetes.io/projected/7bdae7ad-2192-4b77-a71a-075565088c9b-kube-api-access-zb8cc\") on node \"crc\" DevicePath \"\"" Dec 03 13:17:26 crc kubenswrapper[4986]: I1203 13:17:26.017318 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lk2k\" (UniqueName: \"kubernetes.io/projected/e65336fe-a982-408e-8858-894e6b336af0-kube-api-access-9lk2k\") on node \"crc\" DevicePath \"\"" Dec 03 13:17:26 crc kubenswrapper[4986]: I1203 13:17:26.286786 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-8k4ww" event={"ID":"e65336fe-a982-408e-8858-894e6b336af0","Type":"ContainerDied","Data":"e573d9e1a46a96d0ddfa98d4a093f013fe0deeaa5e47dc569399d9361907bce7"} Dec 03 13:17:26 crc kubenswrapper[4986]: I1203 13:17:26.286837 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e573d9e1a46a96d0ddfa98d4a093f013fe0deeaa5e47dc569399d9361907bce7" Dec 03 13:17:26 crc kubenswrapper[4986]: I1203 13:17:26.286930 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-8k4ww" Dec 03 13:17:26 crc kubenswrapper[4986]: I1203 13:17:26.295247 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-28x7q" Dec 03 13:17:26 crc kubenswrapper[4986]: I1203 13:17:26.295321 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-28x7q" event={"ID":"ee3e34f6-cd96-4584-a59f-27e2e0d13ddd","Type":"ContainerDied","Data":"d4455847775f7c071371555c84a3e5709f2db8fa8596c3c05f74033a2f1b8ad5"} Dec 03 13:17:26 crc kubenswrapper[4986]: I1203 13:17:26.295394 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4455847775f7c071371555c84a3e5709f2db8fa8596c3c05f74033a2f1b8ad5" Dec 03 13:17:26 crc kubenswrapper[4986]: I1203 13:17:26.300240 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-b707-account-create-update-k5hp4" event={"ID":"7bdae7ad-2192-4b77-a71a-075565088c9b","Type":"ContainerDied","Data":"064d18a6a6d960812fb7f2c1fd74999b6354654d9e2826e38ceb0967d4ec920a"} Dec 03 13:17:26 crc kubenswrapper[4986]: I1203 13:17:26.300353 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="064d18a6a6d960812fb7f2c1fd74999b6354654d9e2826e38ceb0967d4ec920a" Dec 03 13:17:26 crc kubenswrapper[4986]: I1203 13:17:26.300265 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-b707-account-create-update-k5hp4" Dec 03 13:17:26 crc kubenswrapper[4986]: I1203 13:17:26.307723 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-fabf-account-create-update-hhmms" event={"ID":"50f31d81-05b7-43b6-b74a-07cd1a9f90b4","Type":"ContainerDied","Data":"1eef3d80747543762b2ad7e248e25c8e38bdd8b80768657b4aaf9d5d8ccdfd78"} Dec 03 13:17:26 crc kubenswrapper[4986]: I1203 13:17:26.307787 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1eef3d80747543762b2ad7e248e25c8e38bdd8b80768657b4aaf9d5d8ccdfd78" Dec 03 13:17:26 crc kubenswrapper[4986]: I1203 13:17:26.307917 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-fabf-account-create-update-hhmms" Dec 03 13:17:26 crc kubenswrapper[4986]: I1203 13:17:26.526300 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe-etc-swift\") pod \"swift-storage-0\" (UID: \"cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe\") " pod="openstack/swift-storage-0" Dec 03 13:17:26 crc kubenswrapper[4986]: I1203 13:17:26.532057 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe-etc-swift\") pod \"swift-storage-0\" (UID: \"cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe\") " pod="openstack/swift-storage-0" Dec 03 13:17:26 crc kubenswrapper[4986]: I1203 13:17:26.743424 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Dec 03 13:17:27 crc kubenswrapper[4986]: I1203 13:17:27.270092 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-8f5bt"] Dec 03 13:17:27 crc kubenswrapper[4986]: E1203 13:17:27.270796 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bdae7ad-2192-4b77-a71a-075565088c9b" containerName="mariadb-account-create-update" Dec 03 13:17:27 crc kubenswrapper[4986]: I1203 13:17:27.270811 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bdae7ad-2192-4b77-a71a-075565088c9b" containerName="mariadb-account-create-update" Dec 03 13:17:27 crc kubenswrapper[4986]: E1203 13:17:27.270827 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39b99d6d-488c-4eba-ae0b-fe1e7d8ec681" containerName="mariadb-account-create-update" Dec 03 13:17:27 crc kubenswrapper[4986]: I1203 13:17:27.270836 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="39b99d6d-488c-4eba-ae0b-fe1e7d8ec681" containerName="mariadb-account-create-update" Dec 03 13:17:27 crc kubenswrapper[4986]: E1203 13:17:27.270878 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e65336fe-a982-408e-8858-894e6b336af0" containerName="mariadb-database-create" Dec 03 13:17:27 crc kubenswrapper[4986]: I1203 13:17:27.270887 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="e65336fe-a982-408e-8858-894e6b336af0" containerName="mariadb-database-create" Dec 03 13:17:27 crc kubenswrapper[4986]: E1203 13:17:27.270902 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50f31d81-05b7-43b6-b74a-07cd1a9f90b4" containerName="mariadb-account-create-update" Dec 03 13:17:27 crc kubenswrapper[4986]: I1203 13:17:27.270909 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="50f31d81-05b7-43b6-b74a-07cd1a9f90b4" containerName="mariadb-account-create-update" Dec 03 13:17:27 crc kubenswrapper[4986]: E1203 13:17:27.270922 4986 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="ee3e34f6-cd96-4584-a59f-27e2e0d13ddd" containerName="mariadb-database-create" Dec 03 13:17:27 crc kubenswrapper[4986]: I1203 13:17:27.270930 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee3e34f6-cd96-4584-a59f-27e2e0d13ddd" containerName="mariadb-database-create" Dec 03 13:17:27 crc kubenswrapper[4986]: E1203 13:17:27.270950 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47e9face-2119-40b5-a421-74eabeb2971a" containerName="mariadb-database-create" Dec 03 13:17:27 crc kubenswrapper[4986]: I1203 13:17:27.270958 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="47e9face-2119-40b5-a421-74eabeb2971a" containerName="mariadb-database-create" Dec 03 13:17:27 crc kubenswrapper[4986]: I1203 13:17:27.271135 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="39b99d6d-488c-4eba-ae0b-fe1e7d8ec681" containerName="mariadb-account-create-update" Dec 03 13:17:27 crc kubenswrapper[4986]: I1203 13:17:27.271148 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee3e34f6-cd96-4584-a59f-27e2e0d13ddd" containerName="mariadb-database-create" Dec 03 13:17:27 crc kubenswrapper[4986]: I1203 13:17:27.271166 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="e65336fe-a982-408e-8858-894e6b336af0" containerName="mariadb-database-create" Dec 03 13:17:27 crc kubenswrapper[4986]: I1203 13:17:27.271175 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bdae7ad-2192-4b77-a71a-075565088c9b" containerName="mariadb-account-create-update" Dec 03 13:17:27 crc kubenswrapper[4986]: I1203 13:17:27.271186 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="50f31d81-05b7-43b6-b74a-07cd1a9f90b4" containerName="mariadb-account-create-update" Dec 03 13:17:27 crc kubenswrapper[4986]: I1203 13:17:27.271196 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="47e9face-2119-40b5-a421-74eabeb2971a" 
containerName="mariadb-database-create" Dec 03 13:17:27 crc kubenswrapper[4986]: I1203 13:17:27.271887 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-8f5bt" Dec 03 13:17:27 crc kubenswrapper[4986]: I1203 13:17:27.275027 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-kj54w" Dec 03 13:17:27 crc kubenswrapper[4986]: I1203 13:17:27.275249 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Dec 03 13:17:27 crc kubenswrapper[4986]: I1203 13:17:27.300733 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-8f5bt"] Dec 03 13:17:27 crc kubenswrapper[4986]: I1203 13:17:27.345489 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0874137d-06da-450a-9e93-ad53257c5115-config-data\") pod \"glance-db-sync-8f5bt\" (UID: \"0874137d-06da-450a-9e93-ad53257c5115\") " pod="openstack/glance-db-sync-8f5bt" Dec 03 13:17:27 crc kubenswrapper[4986]: I1203 13:17:27.345677 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0874137d-06da-450a-9e93-ad53257c5115-combined-ca-bundle\") pod \"glance-db-sync-8f5bt\" (UID: \"0874137d-06da-450a-9e93-ad53257c5115\") " pod="openstack/glance-db-sync-8f5bt" Dec 03 13:17:27 crc kubenswrapper[4986]: I1203 13:17:27.345715 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qph6n\" (UniqueName: \"kubernetes.io/projected/0874137d-06da-450a-9e93-ad53257c5115-kube-api-access-qph6n\") pod \"glance-db-sync-8f5bt\" (UID: \"0874137d-06da-450a-9e93-ad53257c5115\") " pod="openstack/glance-db-sync-8f5bt" Dec 03 13:17:27 crc kubenswrapper[4986]: I1203 13:17:27.345781 4986 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0874137d-06da-450a-9e93-ad53257c5115-db-sync-config-data\") pod \"glance-db-sync-8f5bt\" (UID: \"0874137d-06da-450a-9e93-ad53257c5115\") " pod="openstack/glance-db-sync-8f5bt" Dec 03 13:17:27 crc kubenswrapper[4986]: I1203 13:17:27.363467 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 03 13:17:27 crc kubenswrapper[4986]: I1203 13:17:27.446744 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0874137d-06da-450a-9e93-ad53257c5115-db-sync-config-data\") pod \"glance-db-sync-8f5bt\" (UID: \"0874137d-06da-450a-9e93-ad53257c5115\") " pod="openstack/glance-db-sync-8f5bt" Dec 03 13:17:27 crc kubenswrapper[4986]: I1203 13:17:27.447084 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0874137d-06da-450a-9e93-ad53257c5115-config-data\") pod \"glance-db-sync-8f5bt\" (UID: \"0874137d-06da-450a-9e93-ad53257c5115\") " pod="openstack/glance-db-sync-8f5bt" Dec 03 13:17:27 crc kubenswrapper[4986]: I1203 13:17:27.447161 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0874137d-06da-450a-9e93-ad53257c5115-combined-ca-bundle\") pod \"glance-db-sync-8f5bt\" (UID: \"0874137d-06da-450a-9e93-ad53257c5115\") " pod="openstack/glance-db-sync-8f5bt" Dec 03 13:17:27 crc kubenswrapper[4986]: I1203 13:17:27.447203 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qph6n\" (UniqueName: \"kubernetes.io/projected/0874137d-06da-450a-9e93-ad53257c5115-kube-api-access-qph6n\") pod \"glance-db-sync-8f5bt\" (UID: \"0874137d-06da-450a-9e93-ad53257c5115\") " pod="openstack/glance-db-sync-8f5bt" Dec 03 13:17:27 crc 
kubenswrapper[4986]: I1203 13:17:27.453326 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0874137d-06da-450a-9e93-ad53257c5115-config-data\") pod \"glance-db-sync-8f5bt\" (UID: \"0874137d-06da-450a-9e93-ad53257c5115\") " pod="openstack/glance-db-sync-8f5bt" Dec 03 13:17:27 crc kubenswrapper[4986]: I1203 13:17:27.453424 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0874137d-06da-450a-9e93-ad53257c5115-combined-ca-bundle\") pod \"glance-db-sync-8f5bt\" (UID: \"0874137d-06da-450a-9e93-ad53257c5115\") " pod="openstack/glance-db-sync-8f5bt" Dec 03 13:17:27 crc kubenswrapper[4986]: I1203 13:17:27.453714 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0874137d-06da-450a-9e93-ad53257c5115-db-sync-config-data\") pod \"glance-db-sync-8f5bt\" (UID: \"0874137d-06da-450a-9e93-ad53257c5115\") " pod="openstack/glance-db-sync-8f5bt" Dec 03 13:17:27 crc kubenswrapper[4986]: I1203 13:17:27.467629 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qph6n\" (UniqueName: \"kubernetes.io/projected/0874137d-06da-450a-9e93-ad53257c5115-kube-api-access-qph6n\") pod \"glance-db-sync-8f5bt\" (UID: \"0874137d-06da-450a-9e93-ad53257c5115\") " pod="openstack/glance-db-sync-8f5bt" Dec 03 13:17:27 crc kubenswrapper[4986]: I1203 13:17:27.605607 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-8f5bt" Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.135661 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-8f5bt"] Dec 03 13:17:28 crc kubenswrapper[4986]: W1203 13:17:28.143465 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0874137d_06da_450a_9e93_ad53257c5115.slice/crio-88889fbfd0cbfe77bf4a959c1e3815b4fe66102110e77687108f934a453f8112 WatchSource:0}: Error finding container 88889fbfd0cbfe77bf4a959c1e3815b4fe66102110e77687108f934a453f8112: Status 404 returned error can't find the container with id 88889fbfd0cbfe77bf4a959c1e3815b4fe66102110e77687108f934a453f8112 Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.210252 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.367632 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-8f5bt" event={"ID":"0874137d-06da-450a-9e93-ad53257c5115","Type":"ContainerStarted","Data":"88889fbfd0cbfe77bf4a959c1e3815b4fe66102110e77687108f934a453f8112"} Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.371050 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe","Type":"ContainerStarted","Data":"c136aae2d2f7dbeb21899064c580a1da5b5cdf307a5e5656b4eb5315936e3872"} Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.503033 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.564257 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-b742-account-create-update-h8phq"] Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.565363 4986 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/cinder-b742-account-create-update-h8phq" Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.573359 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-75vv8"] Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.574417 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-75vv8" Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.574682 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.576716 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-75vv8"] Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.598741 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b742-account-create-update-h8phq"] Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.652413 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-r7nf5"] Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.653470 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-r7nf5" Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.666126 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-0669-account-create-update-58qxr"] Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.667094 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-0669-account-create-update-58qxr" Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.670088 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.678086 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjhtg\" (UniqueName: \"kubernetes.io/projected/97bbae24-c2a8-4d37-bcec-2d934bdf4cea-kube-api-access-jjhtg\") pod \"cinder-db-create-75vv8\" (UID: \"97bbae24-c2a8-4d37-bcec-2d934bdf4cea\") " pod="openstack/cinder-db-create-75vv8" Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.678623 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0559f774-91cb-40f6-b047-15591ea39ebe-operator-scripts\") pod \"cinder-b742-account-create-update-h8phq\" (UID: \"0559f774-91cb-40f6-b047-15591ea39ebe\") " pod="openstack/cinder-b742-account-create-update-h8phq" Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.678655 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqncf\" (UniqueName: \"kubernetes.io/projected/0559f774-91cb-40f6-b047-15591ea39ebe-kube-api-access-nqncf\") pod \"cinder-b742-account-create-update-h8phq\" (UID: \"0559f774-91cb-40f6-b047-15591ea39ebe\") " pod="openstack/cinder-b742-account-create-update-h8phq" Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.678726 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97bbae24-c2a8-4d37-bcec-2d934bdf4cea-operator-scripts\") pod \"cinder-db-create-75vv8\" (UID: \"97bbae24-c2a8-4d37-bcec-2d934bdf4cea\") " pod="openstack/cinder-db-create-75vv8" Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.685360 
4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-r7nf5"] Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.691461 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-0669-account-create-update-58qxr"] Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.782383 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97bbae24-c2a8-4d37-bcec-2d934bdf4cea-operator-scripts\") pod \"cinder-db-create-75vv8\" (UID: \"97bbae24-c2a8-4d37-bcec-2d934bdf4cea\") " pod="openstack/cinder-db-create-75vv8" Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.782436 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtjt6\" (UniqueName: \"kubernetes.io/projected/bb71e9c0-6c77-4e47-8b4d-8e77a05cca59-kube-api-access-qtjt6\") pod \"barbican-db-create-r7nf5\" (UID: \"bb71e9c0-6c77-4e47-8b4d-8e77a05cca59\") " pod="openstack/barbican-db-create-r7nf5" Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.782473 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjhtg\" (UniqueName: \"kubernetes.io/projected/97bbae24-c2a8-4d37-bcec-2d934bdf4cea-kube-api-access-jjhtg\") pod \"cinder-db-create-75vv8\" (UID: \"97bbae24-c2a8-4d37-bcec-2d934bdf4cea\") " pod="openstack/cinder-db-create-75vv8" Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.782610 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6fe773b-71aa-4762-ad97-bfb9636906c2-operator-scripts\") pod \"barbican-0669-account-create-update-58qxr\" (UID: \"f6fe773b-71aa-4762-ad97-bfb9636906c2\") " pod="openstack/barbican-0669-account-create-update-58qxr" Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.782861 4986 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0559f774-91cb-40f6-b047-15591ea39ebe-operator-scripts\") pod \"cinder-b742-account-create-update-h8phq\" (UID: \"0559f774-91cb-40f6-b047-15591ea39ebe\") " pod="openstack/cinder-b742-account-create-update-h8phq" Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.782935 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqncf\" (UniqueName: \"kubernetes.io/projected/0559f774-91cb-40f6-b047-15591ea39ebe-kube-api-access-nqncf\") pod \"cinder-b742-account-create-update-h8phq\" (UID: \"0559f774-91cb-40f6-b047-15591ea39ebe\") " pod="openstack/cinder-b742-account-create-update-h8phq" Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.782960 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc9f8\" (UniqueName: \"kubernetes.io/projected/f6fe773b-71aa-4762-ad97-bfb9636906c2-kube-api-access-dc9f8\") pod \"barbican-0669-account-create-update-58qxr\" (UID: \"f6fe773b-71aa-4762-ad97-bfb9636906c2\") " pod="openstack/barbican-0669-account-create-update-58qxr" Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.782991 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb71e9c0-6c77-4e47-8b4d-8e77a05cca59-operator-scripts\") pod \"barbican-db-create-r7nf5\" (UID: \"bb71e9c0-6c77-4e47-8b4d-8e77a05cca59\") " pod="openstack/barbican-db-create-r7nf5" Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.783105 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97bbae24-c2a8-4d37-bcec-2d934bdf4cea-operator-scripts\") pod \"cinder-db-create-75vv8\" (UID: \"97bbae24-c2a8-4d37-bcec-2d934bdf4cea\") " pod="openstack/cinder-db-create-75vv8" Dec 03 13:17:28 crc 
kubenswrapper[4986]: I1203 13:17:28.783706 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0559f774-91cb-40f6-b047-15591ea39ebe-operator-scripts\") pod \"cinder-b742-account-create-update-h8phq\" (UID: \"0559f774-91cb-40f6-b047-15591ea39ebe\") " pod="openstack/cinder-b742-account-create-update-h8phq" Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.816139 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjhtg\" (UniqueName: \"kubernetes.io/projected/97bbae24-c2a8-4d37-bcec-2d934bdf4cea-kube-api-access-jjhtg\") pod \"cinder-db-create-75vv8\" (UID: \"97bbae24-c2a8-4d37-bcec-2d934bdf4cea\") " pod="openstack/cinder-db-create-75vv8" Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.819847 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqncf\" (UniqueName: \"kubernetes.io/projected/0559f774-91cb-40f6-b047-15591ea39ebe-kube-api-access-nqncf\") pod \"cinder-b742-account-create-update-h8phq\" (UID: \"0559f774-91cb-40f6-b047-15591ea39ebe\") " pod="openstack/cinder-b742-account-create-update-h8phq" Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.858127 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-7sp42"] Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.859705 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-7sp42" Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.874353 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-7sp42"] Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.886712 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dc9f8\" (UniqueName: \"kubernetes.io/projected/f6fe773b-71aa-4762-ad97-bfb9636906c2-kube-api-access-dc9f8\") pod \"barbican-0669-account-create-update-58qxr\" (UID: \"f6fe773b-71aa-4762-ad97-bfb9636906c2\") " pod="openstack/barbican-0669-account-create-update-58qxr" Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.886764 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb71e9c0-6c77-4e47-8b4d-8e77a05cca59-operator-scripts\") pod \"barbican-db-create-r7nf5\" (UID: \"bb71e9c0-6c77-4e47-8b4d-8e77a05cca59\") " pod="openstack/barbican-db-create-r7nf5" Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.886814 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtjt6\" (UniqueName: \"kubernetes.io/projected/bb71e9c0-6c77-4e47-8b4d-8e77a05cca59-kube-api-access-qtjt6\") pod \"barbican-db-create-r7nf5\" (UID: \"bb71e9c0-6c77-4e47-8b4d-8e77a05cca59\") " pod="openstack/barbican-db-create-r7nf5" Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.886869 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6fe773b-71aa-4762-ad97-bfb9636906c2-operator-scripts\") pod \"barbican-0669-account-create-update-58qxr\" (UID: \"f6fe773b-71aa-4762-ad97-bfb9636906c2\") " pod="openstack/barbican-0669-account-create-update-58qxr" Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.887587 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6fe773b-71aa-4762-ad97-bfb9636906c2-operator-scripts\") pod \"barbican-0669-account-create-update-58qxr\" (UID: \"f6fe773b-71aa-4762-ad97-bfb9636906c2\") " pod="openstack/barbican-0669-account-create-update-58qxr" Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.888343 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb71e9c0-6c77-4e47-8b4d-8e77a05cca59-operator-scripts\") pod \"barbican-db-create-r7nf5\" (UID: \"bb71e9c0-6c77-4e47-8b4d-8e77a05cca59\") " pod="openstack/barbican-db-create-r7nf5" Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.907150 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b742-account-create-update-h8phq" Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.931605 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-75vv8" Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.932083 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtjt6\" (UniqueName: \"kubernetes.io/projected/bb71e9c0-6c77-4e47-8b4d-8e77a05cca59-kube-api-access-qtjt6\") pod \"barbican-db-create-r7nf5\" (UID: \"bb71e9c0-6c77-4e47-8b4d-8e77a05cca59\") " pod="openstack/barbican-db-create-r7nf5" Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.932325 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-zbccr"] Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.933525 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-zbccr" Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.940725 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.940913 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.941057 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.951275 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-mtjcz" Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.955519 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dc9f8\" (UniqueName: \"kubernetes.io/projected/f6fe773b-71aa-4762-ad97-bfb9636906c2-kube-api-access-dc9f8\") pod \"barbican-0669-account-create-update-58qxr\" (UID: \"f6fe773b-71aa-4762-ad97-bfb9636906c2\") " pod="openstack/barbican-0669-account-create-update-58qxr" Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.968272 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-zbccr"] Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.968725 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-3a26-account-create-update-6d5g6"] Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.969906 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-3a26-account-create-update-6d5g6" Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.971628 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.981599 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-r7nf5" Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.983433 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-3a26-account-create-update-6d5g6"] Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.988025 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f130f3db-61e7-49d0-ab61-3f3d16349860-operator-scripts\") pod \"neutron-db-create-7sp42\" (UID: \"f130f3db-61e7-49d0-ab61-3f3d16349860\") " pod="openstack/neutron-db-create-7sp42" Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.988088 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9sdc\" (UniqueName: \"kubernetes.io/projected/f130f3db-61e7-49d0-ab61-3f3d16349860-kube-api-access-q9sdc\") pod \"neutron-db-create-7sp42\" (UID: \"f130f3db-61e7-49d0-ab61-3f3d16349860\") " pod="openstack/neutron-db-create-7sp42" Dec 03 13:17:28 crc kubenswrapper[4986]: I1203 13:17:28.991755 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-0669-account-create-update-58qxr" Dec 03 13:17:29 crc kubenswrapper[4986]: I1203 13:17:29.090360 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6np6x\" (UniqueName: \"kubernetes.io/projected/cbde4185-c1c7-47e8-b369-d23ff4d4092c-kube-api-access-6np6x\") pod \"neutron-3a26-account-create-update-6d5g6\" (UID: \"cbde4185-c1c7-47e8-b369-d23ff4d4092c\") " pod="openstack/neutron-3a26-account-create-update-6d5g6" Dec 03 13:17:29 crc kubenswrapper[4986]: I1203 13:17:29.090467 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3efb58d2-ea87-4349-a581-519e1b458a37-combined-ca-bundle\") pod \"keystone-db-sync-zbccr\" (UID: \"3efb58d2-ea87-4349-a581-519e1b458a37\") " pod="openstack/keystone-db-sync-zbccr" Dec 03 13:17:29 crc kubenswrapper[4986]: I1203 13:17:29.090616 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5ntn\" (UniqueName: \"kubernetes.io/projected/3efb58d2-ea87-4349-a581-519e1b458a37-kube-api-access-x5ntn\") pod \"keystone-db-sync-zbccr\" (UID: \"3efb58d2-ea87-4349-a581-519e1b458a37\") " pod="openstack/keystone-db-sync-zbccr" Dec 03 13:17:29 crc kubenswrapper[4986]: I1203 13:17:29.090694 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3efb58d2-ea87-4349-a581-519e1b458a37-config-data\") pod \"keystone-db-sync-zbccr\" (UID: \"3efb58d2-ea87-4349-a581-519e1b458a37\") " pod="openstack/keystone-db-sync-zbccr" Dec 03 13:17:29 crc kubenswrapper[4986]: I1203 13:17:29.090752 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/cbde4185-c1c7-47e8-b369-d23ff4d4092c-operator-scripts\") pod \"neutron-3a26-account-create-update-6d5g6\" (UID: \"cbde4185-c1c7-47e8-b369-d23ff4d4092c\") " pod="openstack/neutron-3a26-account-create-update-6d5g6" Dec 03 13:17:29 crc kubenswrapper[4986]: I1203 13:17:29.090782 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f130f3db-61e7-49d0-ab61-3f3d16349860-operator-scripts\") pod \"neutron-db-create-7sp42\" (UID: \"f130f3db-61e7-49d0-ab61-3f3d16349860\") " pod="openstack/neutron-db-create-7sp42" Dec 03 13:17:29 crc kubenswrapper[4986]: I1203 13:17:29.090855 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9sdc\" (UniqueName: \"kubernetes.io/projected/f130f3db-61e7-49d0-ab61-3f3d16349860-kube-api-access-q9sdc\") pod \"neutron-db-create-7sp42\" (UID: \"f130f3db-61e7-49d0-ab61-3f3d16349860\") " pod="openstack/neutron-db-create-7sp42" Dec 03 13:17:29 crc kubenswrapper[4986]: I1203 13:17:29.091989 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f130f3db-61e7-49d0-ab61-3f3d16349860-operator-scripts\") pod \"neutron-db-create-7sp42\" (UID: \"f130f3db-61e7-49d0-ab61-3f3d16349860\") " pod="openstack/neutron-db-create-7sp42" Dec 03 13:17:29 crc kubenswrapper[4986]: I1203 13:17:29.113665 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9sdc\" (UniqueName: \"kubernetes.io/projected/f130f3db-61e7-49d0-ab61-3f3d16349860-kube-api-access-q9sdc\") pod \"neutron-db-create-7sp42\" (UID: \"f130f3db-61e7-49d0-ab61-3f3d16349860\") " pod="openstack/neutron-db-create-7sp42" Dec 03 13:17:29 crc kubenswrapper[4986]: I1203 13:17:29.119853 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Dec 03 13:17:29 crc kubenswrapper[4986]: I1203 
13:17:29.192838 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6np6x\" (UniqueName: \"kubernetes.io/projected/cbde4185-c1c7-47e8-b369-d23ff4d4092c-kube-api-access-6np6x\") pod \"neutron-3a26-account-create-update-6d5g6\" (UID: \"cbde4185-c1c7-47e8-b369-d23ff4d4092c\") " pod="openstack/neutron-3a26-account-create-update-6d5g6" Dec 03 13:17:29 crc kubenswrapper[4986]: I1203 13:17:29.192954 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3efb58d2-ea87-4349-a581-519e1b458a37-combined-ca-bundle\") pod \"keystone-db-sync-zbccr\" (UID: \"3efb58d2-ea87-4349-a581-519e1b458a37\") " pod="openstack/keystone-db-sync-zbccr" Dec 03 13:17:29 crc kubenswrapper[4986]: I1203 13:17:29.193139 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5ntn\" (UniqueName: \"kubernetes.io/projected/3efb58d2-ea87-4349-a581-519e1b458a37-kube-api-access-x5ntn\") pod \"keystone-db-sync-zbccr\" (UID: \"3efb58d2-ea87-4349-a581-519e1b458a37\") " pod="openstack/keystone-db-sync-zbccr" Dec 03 13:17:29 crc kubenswrapper[4986]: I1203 13:17:29.193200 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3efb58d2-ea87-4349-a581-519e1b458a37-config-data\") pod \"keystone-db-sync-zbccr\" (UID: \"3efb58d2-ea87-4349-a581-519e1b458a37\") " pod="openstack/keystone-db-sync-zbccr" Dec 03 13:17:29 crc kubenswrapper[4986]: I1203 13:17:29.193243 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbde4185-c1c7-47e8-b369-d23ff4d4092c-operator-scripts\") pod \"neutron-3a26-account-create-update-6d5g6\" (UID: \"cbde4185-c1c7-47e8-b369-d23ff4d4092c\") " pod="openstack/neutron-3a26-account-create-update-6d5g6" Dec 03 13:17:29 crc kubenswrapper[4986]: I1203 
13:17:29.197204 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbde4185-c1c7-47e8-b369-d23ff4d4092c-operator-scripts\") pod \"neutron-3a26-account-create-update-6d5g6\" (UID: \"cbde4185-c1c7-47e8-b369-d23ff4d4092c\") " pod="openstack/neutron-3a26-account-create-update-6d5g6" Dec 03 13:17:29 crc kubenswrapper[4986]: I1203 13:17:29.207307 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3efb58d2-ea87-4349-a581-519e1b458a37-combined-ca-bundle\") pod \"keystone-db-sync-zbccr\" (UID: \"3efb58d2-ea87-4349-a581-519e1b458a37\") " pod="openstack/keystone-db-sync-zbccr" Dec 03 13:17:29 crc kubenswrapper[4986]: I1203 13:17:29.220141 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6np6x\" (UniqueName: \"kubernetes.io/projected/cbde4185-c1c7-47e8-b369-d23ff4d4092c-kube-api-access-6np6x\") pod \"neutron-3a26-account-create-update-6d5g6\" (UID: \"cbde4185-c1c7-47e8-b369-d23ff4d4092c\") " pod="openstack/neutron-3a26-account-create-update-6d5g6" Dec 03 13:17:29 crc kubenswrapper[4986]: I1203 13:17:29.233185 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-7sp42" Dec 03 13:17:29 crc kubenswrapper[4986]: I1203 13:17:29.242659 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5ntn\" (UniqueName: \"kubernetes.io/projected/3efb58d2-ea87-4349-a581-519e1b458a37-kube-api-access-x5ntn\") pod \"keystone-db-sync-zbccr\" (UID: \"3efb58d2-ea87-4349-a581-519e1b458a37\") " pod="openstack/keystone-db-sync-zbccr" Dec 03 13:17:29 crc kubenswrapper[4986]: I1203 13:17:29.243747 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3efb58d2-ea87-4349-a581-519e1b458a37-config-data\") pod \"keystone-db-sync-zbccr\" (UID: \"3efb58d2-ea87-4349-a581-519e1b458a37\") " pod="openstack/keystone-db-sync-zbccr" Dec 03 13:17:29 crc kubenswrapper[4986]: I1203 13:17:29.341431 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-zbccr" Dec 03 13:17:29 crc kubenswrapper[4986]: I1203 13:17:29.349598 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-3a26-account-create-update-6d5g6" Dec 03 13:17:29 crc kubenswrapper[4986]: I1203 13:17:29.851742 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b742-account-create-update-h8phq"] Dec 03 13:17:29 crc kubenswrapper[4986]: I1203 13:17:29.933435 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-75vv8"] Dec 03 13:17:29 crc kubenswrapper[4986]: W1203 13:17:29.963688 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97bbae24_c2a8_4d37_bcec_2d934bdf4cea.slice/crio-4f452e39ecfe2355673f5a4608eb5b0fec3282451db04c04df60a28908e9fa92 WatchSource:0}: Error finding container 4f452e39ecfe2355673f5a4608eb5b0fec3282451db04c04df60a28908e9fa92: Status 404 returned error can't find the container with id 4f452e39ecfe2355673f5a4608eb5b0fec3282451db04c04df60a28908e9fa92 Dec 03 13:17:29 crc kubenswrapper[4986]: I1203 13:17:29.965238 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-0669-account-create-update-58qxr"] Dec 03 13:17:30 crc kubenswrapper[4986]: I1203 13:17:30.046508 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-7sp42"] Dec 03 13:17:30 crc kubenswrapper[4986]: I1203 13:17:30.060335 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-r7nf5"] Dec 03 13:17:30 crc kubenswrapper[4986]: W1203 13:17:30.070542 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb71e9c0_6c77_4e47_8b4d_8e77a05cca59.slice/crio-ca410a8f5105d5e91ea47ac9714af4118782fd42bc4f501be34c22ab2a139635 WatchSource:0}: Error finding container ca410a8f5105d5e91ea47ac9714af4118782fd42bc4f501be34c22ab2a139635: Status 404 returned error can't find the container with id ca410a8f5105d5e91ea47ac9714af4118782fd42bc4f501be34c22ab2a139635 Dec 03 
13:17:30 crc kubenswrapper[4986]: W1203 13:17:30.087870 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf130f3db_61e7_49d0_ab61_3f3d16349860.slice/crio-755527d29c5b8aeab2ce46c39d4eaff2949359cd2f138dc643b86da3ddc6a79f WatchSource:0}: Error finding container 755527d29c5b8aeab2ce46c39d4eaff2949359cd2f138dc643b86da3ddc6a79f: Status 404 returned error can't find the container with id 755527d29c5b8aeab2ce46c39d4eaff2949359cd2f138dc643b86da3ddc6a79f Dec 03 13:17:30 crc kubenswrapper[4986]: I1203 13:17:30.134888 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-zbccr"] Dec 03 13:17:30 crc kubenswrapper[4986]: I1203 13:17:30.155806 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-3a26-account-create-update-6d5g6"] Dec 03 13:17:30 crc kubenswrapper[4986]: I1203 13:17:30.390155 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-7sp42" event={"ID":"f130f3db-61e7-49d0-ab61-3f3d16349860","Type":"ContainerStarted","Data":"0c010c527909e71a1090188eadc464bde7d31041e303ca47265e912741e72735"} Dec 03 13:17:30 crc kubenswrapper[4986]: I1203 13:17:30.390204 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-7sp42" event={"ID":"f130f3db-61e7-49d0-ab61-3f3d16349860","Type":"ContainerStarted","Data":"755527d29c5b8aeab2ce46c39d4eaff2949359cd2f138dc643b86da3ddc6a79f"} Dec 03 13:17:30 crc kubenswrapper[4986]: I1203 13:17:30.395437 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0669-account-create-update-58qxr" event={"ID":"f6fe773b-71aa-4762-ad97-bfb9636906c2","Type":"ContainerStarted","Data":"1276216af52852b74b7b994a366629d36e4ba71525209a2cba12d6ae3626304d"} Dec 03 13:17:30 crc kubenswrapper[4986]: I1203 13:17:30.395479 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0669-account-create-update-58qxr" 
event={"ID":"f6fe773b-71aa-4762-ad97-bfb9636906c2","Type":"ContainerStarted","Data":"4b74437796c09f0032eaf4be3eb8a6bf563e8247d2233bc07b2f3c6eac38ae96"} Dec 03 13:17:30 crc kubenswrapper[4986]: I1203 13:17:30.406156 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe","Type":"ContainerStarted","Data":"4fcdf311f14a9633747b5983140488be0ca969d27fa49dc3cbb32baf2a44a513"} Dec 03 13:17:30 crc kubenswrapper[4986]: I1203 13:17:30.406205 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe","Type":"ContainerStarted","Data":"298193abd22f9fd98075df9df21a0479bb510fc09c2726e539dad78db2cb3317"} Dec 03 13:17:30 crc kubenswrapper[4986]: I1203 13:17:30.406219 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe","Type":"ContainerStarted","Data":"c39e1c11ffb3097d88968bc500000240422380c536ffad836743321bc4abce52"} Dec 03 13:17:30 crc kubenswrapper[4986]: I1203 13:17:30.407621 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-zbccr" event={"ID":"3efb58d2-ea87-4349-a581-519e1b458a37","Type":"ContainerStarted","Data":"b7f90d14a477c17bd20426c5ea96d130fd63848dfafad64027b0049901671feb"} Dec 03 13:17:30 crc kubenswrapper[4986]: I1203 13:17:30.413222 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-75vv8" event={"ID":"97bbae24-c2a8-4d37-bcec-2d934bdf4cea","Type":"ContainerStarted","Data":"004d8f697ae1fd369046140c2c327428b2686d0bce755968a4b4e5e959ffbcbd"} Dec 03 13:17:30 crc kubenswrapper[4986]: I1203 13:17:30.413276 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-75vv8" event={"ID":"97bbae24-c2a8-4d37-bcec-2d934bdf4cea","Type":"ContainerStarted","Data":"4f452e39ecfe2355673f5a4608eb5b0fec3282451db04c04df60a28908e9fa92"} Dec 03 
13:17:30 crc kubenswrapper[4986]: I1203 13:17:30.415874 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-7sp42" podStartSLOduration=2.415857867 podStartE2EDuration="2.415857867s" podCreationTimestamp="2025-12-03 13:17:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:17:30.412858896 +0000 UTC m=+1309.879290097" watchObservedRunningTime="2025-12-03 13:17:30.415857867 +0000 UTC m=+1309.882289048" Dec 03 13:17:30 crc kubenswrapper[4986]: I1203 13:17:30.425587 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b742-account-create-update-h8phq" event={"ID":"0559f774-91cb-40f6-b047-15591ea39ebe","Type":"ContainerStarted","Data":"fe6fb5a581d4b359f7e4eaf250175528ac0986041fb221f2d4c3bb36e410199f"} Dec 03 13:17:30 crc kubenswrapper[4986]: I1203 13:17:30.425638 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b742-account-create-update-h8phq" event={"ID":"0559f774-91cb-40f6-b047-15591ea39ebe","Type":"ContainerStarted","Data":"dd4bac78b1310d849af4ef064b317214a41820e0a7aecf58cbf678e3974ee2c1"} Dec 03 13:17:30 crc kubenswrapper[4986]: I1203 13:17:30.432531 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-0669-account-create-update-58qxr" podStartSLOduration=2.432514216 podStartE2EDuration="2.432514216s" podCreationTimestamp="2025-12-03 13:17:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:17:30.424709285 +0000 UTC m=+1309.891140496" watchObservedRunningTime="2025-12-03 13:17:30.432514216 +0000 UTC m=+1309.898945407" Dec 03 13:17:30 crc kubenswrapper[4986]: I1203 13:17:30.433437 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3a26-account-create-update-6d5g6" 
event={"ID":"cbde4185-c1c7-47e8-b369-d23ff4d4092c","Type":"ContainerStarted","Data":"376ba46cf5f0f7760f3797706f290391df66835ba85410f149ebb49c44568674"} Dec 03 13:17:30 crc kubenswrapper[4986]: I1203 13:17:30.436563 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-r7nf5" event={"ID":"bb71e9c0-6c77-4e47-8b4d-8e77a05cca59","Type":"ContainerStarted","Data":"20b07599d1311ccc0995729e3fcdb9087c9da6a6db9c10453163a3bb48f919ee"} Dec 03 13:17:30 crc kubenswrapper[4986]: I1203 13:17:30.436591 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-r7nf5" event={"ID":"bb71e9c0-6c77-4e47-8b4d-8e77a05cca59","Type":"ContainerStarted","Data":"ca410a8f5105d5e91ea47ac9714af4118782fd42bc4f501be34c22ab2a139635"} Dec 03 13:17:30 crc kubenswrapper[4986]: I1203 13:17:30.449618 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-75vv8" podStartSLOduration=2.4496038159999998 podStartE2EDuration="2.449603816s" podCreationTimestamp="2025-12-03 13:17:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:17:30.443638926 +0000 UTC m=+1309.910070117" watchObservedRunningTime="2025-12-03 13:17:30.449603816 +0000 UTC m=+1309.916035007" Dec 03 13:17:30 crc kubenswrapper[4986]: I1203 13:17:30.464883 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-b742-account-create-update-h8phq" podStartSLOduration=2.464865028 podStartE2EDuration="2.464865028s" podCreationTimestamp="2025-12-03 13:17:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:17:30.463932062 +0000 UTC m=+1309.930363253" watchObservedRunningTime="2025-12-03 13:17:30.464865028 +0000 UTC m=+1309.931296219" Dec 03 13:17:30 crc kubenswrapper[4986]: I1203 13:17:30.485521 4986 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-r7nf5" podStartSLOduration=2.485499113 podStartE2EDuration="2.485499113s" podCreationTimestamp="2025-12-03 13:17:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:17:30.480265173 +0000 UTC m=+1309.946696364" watchObservedRunningTime="2025-12-03 13:17:30.485499113 +0000 UTC m=+1309.951930304" Dec 03 13:17:31 crc kubenswrapper[4986]: I1203 13:17:31.420858 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-kqn7t" Dec 03 13:17:31 crc kubenswrapper[4986]: I1203 13:17:31.471784 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe","Type":"ContainerStarted","Data":"f32816baf444dd824703a53dbb281d0bac6dc380644e62aede22e395eb344d5c"} Dec 03 13:17:31 crc kubenswrapper[4986]: I1203 13:17:31.475628 4986 generic.go:334] "Generic (PLEG): container finished" podID="97bbae24-c2a8-4d37-bcec-2d934bdf4cea" containerID="004d8f697ae1fd369046140c2c327428b2686d0bce755968a4b4e5e959ffbcbd" exitCode=0 Dec 03 13:17:31 crc kubenswrapper[4986]: I1203 13:17:31.475703 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-75vv8" event={"ID":"97bbae24-c2a8-4d37-bcec-2d934bdf4cea","Type":"ContainerDied","Data":"004d8f697ae1fd369046140c2c327428b2686d0bce755968a4b4e5e959ffbcbd"} Dec 03 13:17:31 crc kubenswrapper[4986]: I1203 13:17:31.488883 4986 generic.go:334] "Generic (PLEG): container finished" podID="0559f774-91cb-40f6-b047-15591ea39ebe" containerID="fe6fb5a581d4b359f7e4eaf250175528ac0986041fb221f2d4c3bb36e410199f" exitCode=0 Dec 03 13:17:31 crc kubenswrapper[4986]: I1203 13:17:31.488972 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b742-account-create-update-h8phq" 
event={"ID":"0559f774-91cb-40f6-b047-15591ea39ebe","Type":"ContainerDied","Data":"fe6fb5a581d4b359f7e4eaf250175528ac0986041fb221f2d4c3bb36e410199f"} Dec 03 13:17:31 crc kubenswrapper[4986]: I1203 13:17:31.496768 4986 generic.go:334] "Generic (PLEG): container finished" podID="cbde4185-c1c7-47e8-b369-d23ff4d4092c" containerID="5bd2c846230b30451b644e278628c3e0410fd3d3eecf52bf7d01336263a49969" exitCode=0 Dec 03 13:17:31 crc kubenswrapper[4986]: I1203 13:17:31.496824 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3a26-account-create-update-6d5g6" event={"ID":"cbde4185-c1c7-47e8-b369-d23ff4d4092c","Type":"ContainerDied","Data":"5bd2c846230b30451b644e278628c3e0410fd3d3eecf52bf7d01336263a49969"} Dec 03 13:17:31 crc kubenswrapper[4986]: I1203 13:17:31.498178 4986 generic.go:334] "Generic (PLEG): container finished" podID="bb71e9c0-6c77-4e47-8b4d-8e77a05cca59" containerID="20b07599d1311ccc0995729e3fcdb9087c9da6a6db9c10453163a3bb48f919ee" exitCode=0 Dec 03 13:17:31 crc kubenswrapper[4986]: I1203 13:17:31.498217 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-r7nf5" event={"ID":"bb71e9c0-6c77-4e47-8b4d-8e77a05cca59","Type":"ContainerDied","Data":"20b07599d1311ccc0995729e3fcdb9087c9da6a6db9c10453163a3bb48f919ee"} Dec 03 13:17:31 crc kubenswrapper[4986]: I1203 13:17:31.499622 4986 generic.go:334] "Generic (PLEG): container finished" podID="f130f3db-61e7-49d0-ab61-3f3d16349860" containerID="0c010c527909e71a1090188eadc464bde7d31041e303ca47265e912741e72735" exitCode=0 Dec 03 13:17:31 crc kubenswrapper[4986]: I1203 13:17:31.499684 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-7sp42" event={"ID":"f130f3db-61e7-49d0-ab61-3f3d16349860","Type":"ContainerDied","Data":"0c010c527909e71a1090188eadc464bde7d31041e303ca47265e912741e72735"} Dec 03 13:17:31 crc kubenswrapper[4986]: I1203 13:17:31.501548 4986 generic.go:334] "Generic (PLEG): container finished" 
podID="f6fe773b-71aa-4762-ad97-bfb9636906c2" containerID="1276216af52852b74b7b994a366629d36e4ba71525209a2cba12d6ae3626304d" exitCode=0 Dec 03 13:17:31 crc kubenswrapper[4986]: I1203 13:17:31.501571 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0669-account-create-update-58qxr" event={"ID":"f6fe773b-71aa-4762-ad97-bfb9636906c2","Type":"ContainerDied","Data":"1276216af52852b74b7b994a366629d36e4ba71525209a2cba12d6ae3626304d"} Dec 03 13:17:32 crc kubenswrapper[4986]: I1203 13:17:32.512178 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe","Type":"ContainerStarted","Data":"c6830c2d246ea748c60b0de44d1e5c39246e6bb0d7070f999849de97f37bfad5"} Dec 03 13:17:32 crc kubenswrapper[4986]: I1203 13:17:32.512492 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe","Type":"ContainerStarted","Data":"101c172aaa3ae9b006282d421b6397408863e990c3ecac5eeeab8d308bcd973b"} Dec 03 13:17:33 crc kubenswrapper[4986]: I1203 13:17:33.491956 4986 patch_prober.go:28] interesting pod/machine-config-daemon-xggpj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 13:17:33 crc kubenswrapper[4986]: I1203 13:17:33.492250 4986 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 13:17:33 crc kubenswrapper[4986]: I1203 13:17:33.525083 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe","Type":"ContainerStarted","Data":"e4856307e34f45729b13ae367640f729880480a5741ea1db6b2901d950d98425"} Dec 03 13:17:36 crc kubenswrapper[4986]: I1203 13:17:36.983050 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-7sp42" Dec 03 13:17:36 crc kubenswrapper[4986]: I1203 13:17:36.984346 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f130f3db-61e7-49d0-ab61-3f3d16349860-operator-scripts\") pod \"f130f3db-61e7-49d0-ab61-3f3d16349860\" (UID: \"f130f3db-61e7-49d0-ab61-3f3d16349860\") " Dec 03 13:17:36 crc kubenswrapper[4986]: I1203 13:17:36.984594 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9sdc\" (UniqueName: \"kubernetes.io/projected/f130f3db-61e7-49d0-ab61-3f3d16349860-kube-api-access-q9sdc\") pod \"f130f3db-61e7-49d0-ab61-3f3d16349860\" (UID: \"f130f3db-61e7-49d0-ab61-3f3d16349860\") " Dec 03 13:17:36 crc kubenswrapper[4986]: I1203 13:17:36.986483 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f130f3db-61e7-49d0-ab61-3f3d16349860-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f130f3db-61e7-49d0-ab61-3f3d16349860" (UID: "f130f3db-61e7-49d0-ab61-3f3d16349860"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:17:36 crc kubenswrapper[4986]: I1203 13:17:36.992944 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f130f3db-61e7-49d0-ab61-3f3d16349860-kube-api-access-q9sdc" (OuterVolumeSpecName: "kube-api-access-q9sdc") pod "f130f3db-61e7-49d0-ab61-3f3d16349860" (UID: "f130f3db-61e7-49d0-ab61-3f3d16349860"). InnerVolumeSpecName "kube-api-access-q9sdc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:17:36 crc kubenswrapper[4986]: I1203 13:17:36.995574 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-75vv8" Dec 03 13:17:37 crc kubenswrapper[4986]: I1203 13:17:37.045188 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b742-account-create-update-h8phq" Dec 03 13:17:37 crc kubenswrapper[4986]: I1203 13:17:37.085298 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97bbae24-c2a8-4d37-bcec-2d934bdf4cea-operator-scripts\") pod \"97bbae24-c2a8-4d37-bcec-2d934bdf4cea\" (UID: \"97bbae24-c2a8-4d37-bcec-2d934bdf4cea\") " Dec 03 13:17:37 crc kubenswrapper[4986]: I1203 13:17:37.085348 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjhtg\" (UniqueName: \"kubernetes.io/projected/97bbae24-c2a8-4d37-bcec-2d934bdf4cea-kube-api-access-jjhtg\") pod \"97bbae24-c2a8-4d37-bcec-2d934bdf4cea\" (UID: \"97bbae24-c2a8-4d37-bcec-2d934bdf4cea\") " Dec 03 13:17:37 crc kubenswrapper[4986]: I1203 13:17:37.085372 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0559f774-91cb-40f6-b047-15591ea39ebe-operator-scripts\") pod \"0559f774-91cb-40f6-b047-15591ea39ebe\" (UID: \"0559f774-91cb-40f6-b047-15591ea39ebe\") " Dec 03 13:17:37 crc kubenswrapper[4986]: I1203 13:17:37.085443 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqncf\" (UniqueName: \"kubernetes.io/projected/0559f774-91cb-40f6-b047-15591ea39ebe-kube-api-access-nqncf\") pod \"0559f774-91cb-40f6-b047-15591ea39ebe\" (UID: \"0559f774-91cb-40f6-b047-15591ea39ebe\") " Dec 03 13:17:37 crc kubenswrapper[4986]: I1203 13:17:37.085693 4986 reconciler_common.go:293] "Volume 
detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f130f3db-61e7-49d0-ab61-3f3d16349860-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 13:17:37 crc kubenswrapper[4986]: I1203 13:17:37.085710 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9sdc\" (UniqueName: \"kubernetes.io/projected/f130f3db-61e7-49d0-ab61-3f3d16349860-kube-api-access-q9sdc\") on node \"crc\" DevicePath \"\"" Dec 03 13:17:37 crc kubenswrapper[4986]: I1203 13:17:37.086847 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97bbae24-c2a8-4d37-bcec-2d934bdf4cea-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "97bbae24-c2a8-4d37-bcec-2d934bdf4cea" (UID: "97bbae24-c2a8-4d37-bcec-2d934bdf4cea"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:17:37 crc kubenswrapper[4986]: I1203 13:17:37.086850 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0559f774-91cb-40f6-b047-15591ea39ebe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0559f774-91cb-40f6-b047-15591ea39ebe" (UID: "0559f774-91cb-40f6-b047-15591ea39ebe"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:17:37 crc kubenswrapper[4986]: I1203 13:17:37.089881 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97bbae24-c2a8-4d37-bcec-2d934bdf4cea-kube-api-access-jjhtg" (OuterVolumeSpecName: "kube-api-access-jjhtg") pod "97bbae24-c2a8-4d37-bcec-2d934bdf4cea" (UID: "97bbae24-c2a8-4d37-bcec-2d934bdf4cea"). InnerVolumeSpecName "kube-api-access-jjhtg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:17:37 crc kubenswrapper[4986]: I1203 13:17:37.090710 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0559f774-91cb-40f6-b047-15591ea39ebe-kube-api-access-nqncf" (OuterVolumeSpecName: "kube-api-access-nqncf") pod "0559f774-91cb-40f6-b047-15591ea39ebe" (UID: "0559f774-91cb-40f6-b047-15591ea39ebe"). InnerVolumeSpecName "kube-api-access-nqncf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:17:37 crc kubenswrapper[4986]: I1203 13:17:37.186834 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjhtg\" (UniqueName: \"kubernetes.io/projected/97bbae24-c2a8-4d37-bcec-2d934bdf4cea-kube-api-access-jjhtg\") on node \"crc\" DevicePath \"\"" Dec 03 13:17:37 crc kubenswrapper[4986]: I1203 13:17:37.186869 4986 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0559f774-91cb-40f6-b047-15591ea39ebe-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 13:17:37 crc kubenswrapper[4986]: I1203 13:17:37.186880 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqncf\" (UniqueName: \"kubernetes.io/projected/0559f774-91cb-40f6-b047-15591ea39ebe-kube-api-access-nqncf\") on node \"crc\" DevicePath \"\"" Dec 03 13:17:37 crc kubenswrapper[4986]: I1203 13:17:37.186892 4986 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97bbae24-c2a8-4d37-bcec-2d934bdf4cea-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 13:17:37 crc kubenswrapper[4986]: I1203 13:17:37.562106 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-7sp42" event={"ID":"f130f3db-61e7-49d0-ab61-3f3d16349860","Type":"ContainerDied","Data":"755527d29c5b8aeab2ce46c39d4eaff2949359cd2f138dc643b86da3ddc6a79f"} Dec 03 13:17:37 crc kubenswrapper[4986]: I1203 13:17:37.562152 
4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="755527d29c5b8aeab2ce46c39d4eaff2949359cd2f138dc643b86da3ddc6a79f" Dec 03 13:17:37 crc kubenswrapper[4986]: I1203 13:17:37.562201 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-7sp42" Dec 03 13:17:37 crc kubenswrapper[4986]: I1203 13:17:37.568847 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-75vv8" event={"ID":"97bbae24-c2a8-4d37-bcec-2d934bdf4cea","Type":"ContainerDied","Data":"4f452e39ecfe2355673f5a4608eb5b0fec3282451db04c04df60a28908e9fa92"} Dec 03 13:17:37 crc kubenswrapper[4986]: I1203 13:17:37.568896 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f452e39ecfe2355673f5a4608eb5b0fec3282451db04c04df60a28908e9fa92" Dec 03 13:17:37 crc kubenswrapper[4986]: I1203 13:17:37.568869 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-75vv8" Dec 03 13:17:37 crc kubenswrapper[4986]: I1203 13:17:37.569866 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b742-account-create-update-h8phq" event={"ID":"0559f774-91cb-40f6-b047-15591ea39ebe","Type":"ContainerDied","Data":"dd4bac78b1310d849af4ef064b317214a41820e0a7aecf58cbf678e3974ee2c1"} Dec 03 13:17:37 crc kubenswrapper[4986]: I1203 13:17:37.569898 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd4bac78b1310d849af4ef064b317214a41820e0a7aecf58cbf678e3974ee2c1" Dec 03 13:17:37 crc kubenswrapper[4986]: I1203 13:17:37.569936 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b742-account-create-update-h8phq" Dec 03 13:17:51 crc kubenswrapper[4986]: E1203 13:17:51.445625 4986 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Dec 03 13:17:51 crc kubenswrapper[4986]: E1203 13:17:51.446673 4986 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qph6n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],}
,Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-8f5bt_openstack(0874137d-06da-450a-9e93-ad53257c5115): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 13:17:51 crc kubenswrapper[4986]: E1203 13:17:51.451383 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-8f5bt" podUID="0874137d-06da-450a-9e93-ad53257c5115" Dec 03 13:17:51 crc kubenswrapper[4986]: I1203 13:17:51.667833 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-0669-account-create-update-58qxr" Dec 03 13:17:51 crc kubenswrapper[4986]: I1203 13:17:51.732089 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3a26-account-create-update-6d5g6" event={"ID":"cbde4185-c1c7-47e8-b369-d23ff4d4092c","Type":"ContainerDied","Data":"376ba46cf5f0f7760f3797706f290391df66835ba85410f149ebb49c44568674"} Dec 03 13:17:51 crc kubenswrapper[4986]: I1203 13:17:51.732250 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="376ba46cf5f0f7760f3797706f290391df66835ba85410f149ebb49c44568674" Dec 03 13:17:51 crc kubenswrapper[4986]: I1203 13:17:51.735724 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-3a26-account-create-update-6d5g6" Dec 03 13:17:51 crc kubenswrapper[4986]: I1203 13:17:51.741161 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-r7nf5" event={"ID":"bb71e9c0-6c77-4e47-8b4d-8e77a05cca59","Type":"ContainerDied","Data":"ca410a8f5105d5e91ea47ac9714af4118782fd42bc4f501be34c22ab2a139635"} Dec 03 13:17:51 crc kubenswrapper[4986]: I1203 13:17:51.741204 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca410a8f5105d5e91ea47ac9714af4118782fd42bc4f501be34c22ab2a139635" Dec 03 13:17:51 crc kubenswrapper[4986]: I1203 13:17:51.743274 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0669-account-create-update-58qxr" event={"ID":"f6fe773b-71aa-4762-ad97-bfb9636906c2","Type":"ContainerDied","Data":"4b74437796c09f0032eaf4be3eb8a6bf563e8247d2233bc07b2f3c6eac38ae96"} Dec 03 13:17:51 crc kubenswrapper[4986]: I1203 13:17:51.743396 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-0669-account-create-update-58qxr" Dec 03 13:17:51 crc kubenswrapper[4986]: I1203 13:17:51.743384 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b74437796c09f0032eaf4be3eb8a6bf563e8247d2233bc07b2f3c6eac38ae96" Dec 03 13:17:51 crc kubenswrapper[4986]: E1203 13:17:51.745356 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-8f5bt" podUID="0874137d-06da-450a-9e93-ad53257c5115" Dec 03 13:17:51 crc kubenswrapper[4986]: I1203 13:17:51.745415 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-r7nf5" Dec 03 13:17:51 crc kubenswrapper[4986]: I1203 13:17:51.768873 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtjt6\" (UniqueName: \"kubernetes.io/projected/bb71e9c0-6c77-4e47-8b4d-8e77a05cca59-kube-api-access-qtjt6\") pod \"bb71e9c0-6c77-4e47-8b4d-8e77a05cca59\" (UID: \"bb71e9c0-6c77-4e47-8b4d-8e77a05cca59\") " Dec 03 13:17:51 crc kubenswrapper[4986]: I1203 13:17:51.768928 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbde4185-c1c7-47e8-b369-d23ff4d4092c-operator-scripts\") pod \"cbde4185-c1c7-47e8-b369-d23ff4d4092c\" (UID: \"cbde4185-c1c7-47e8-b369-d23ff4d4092c\") " Dec 03 13:17:51 crc kubenswrapper[4986]: I1203 13:17:51.768987 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6fe773b-71aa-4762-ad97-bfb9636906c2-operator-scripts\") pod \"f6fe773b-71aa-4762-ad97-bfb9636906c2\" (UID: \"f6fe773b-71aa-4762-ad97-bfb9636906c2\") " Dec 03 13:17:51 crc kubenswrapper[4986]: I1203 13:17:51.769030 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dc9f8\" (UniqueName: \"kubernetes.io/projected/f6fe773b-71aa-4762-ad97-bfb9636906c2-kube-api-access-dc9f8\") pod \"f6fe773b-71aa-4762-ad97-bfb9636906c2\" (UID: \"f6fe773b-71aa-4762-ad97-bfb9636906c2\") " Dec 03 13:17:51 crc kubenswrapper[4986]: I1203 13:17:51.769104 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6np6x\" (UniqueName: \"kubernetes.io/projected/cbde4185-c1c7-47e8-b369-d23ff4d4092c-kube-api-access-6np6x\") pod \"cbde4185-c1c7-47e8-b369-d23ff4d4092c\" (UID: \"cbde4185-c1c7-47e8-b369-d23ff4d4092c\") " Dec 03 13:17:51 crc kubenswrapper[4986]: I1203 13:17:51.769124 4986 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb71e9c0-6c77-4e47-8b4d-8e77a05cca59-operator-scripts\") pod \"bb71e9c0-6c77-4e47-8b4d-8e77a05cca59\" (UID: \"bb71e9c0-6c77-4e47-8b4d-8e77a05cca59\") " Dec 03 13:17:51 crc kubenswrapper[4986]: I1203 13:17:51.769728 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbde4185-c1c7-47e8-b369-d23ff4d4092c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cbde4185-c1c7-47e8-b369-d23ff4d4092c" (UID: "cbde4185-c1c7-47e8-b369-d23ff4d4092c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:17:51 crc kubenswrapper[4986]: I1203 13:17:51.769891 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6fe773b-71aa-4762-ad97-bfb9636906c2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f6fe773b-71aa-4762-ad97-bfb9636906c2" (UID: "f6fe773b-71aa-4762-ad97-bfb9636906c2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:17:51 crc kubenswrapper[4986]: I1203 13:17:51.770221 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb71e9c0-6c77-4e47-8b4d-8e77a05cca59-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bb71e9c0-6c77-4e47-8b4d-8e77a05cca59" (UID: "bb71e9c0-6c77-4e47-8b4d-8e77a05cca59"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:17:51 crc kubenswrapper[4986]: I1203 13:17:51.778610 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6fe773b-71aa-4762-ad97-bfb9636906c2-kube-api-access-dc9f8" (OuterVolumeSpecName: "kube-api-access-dc9f8") pod "f6fe773b-71aa-4762-ad97-bfb9636906c2" (UID: "f6fe773b-71aa-4762-ad97-bfb9636906c2"). 
InnerVolumeSpecName "kube-api-access-dc9f8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:17:51 crc kubenswrapper[4986]: I1203 13:17:51.789960 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb71e9c0-6c77-4e47-8b4d-8e77a05cca59-kube-api-access-qtjt6" (OuterVolumeSpecName: "kube-api-access-qtjt6") pod "bb71e9c0-6c77-4e47-8b4d-8e77a05cca59" (UID: "bb71e9c0-6c77-4e47-8b4d-8e77a05cca59"). InnerVolumeSpecName "kube-api-access-qtjt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:17:51 crc kubenswrapper[4986]: I1203 13:17:51.790066 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbde4185-c1c7-47e8-b369-d23ff4d4092c-kube-api-access-6np6x" (OuterVolumeSpecName: "kube-api-access-6np6x") pod "cbde4185-c1c7-47e8-b369-d23ff4d4092c" (UID: "cbde4185-c1c7-47e8-b369-d23ff4d4092c"). InnerVolumeSpecName "kube-api-access-6np6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:17:51 crc kubenswrapper[4986]: I1203 13:17:51.870555 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtjt6\" (UniqueName: \"kubernetes.io/projected/bb71e9c0-6c77-4e47-8b4d-8e77a05cca59-kube-api-access-qtjt6\") on node \"crc\" DevicePath \"\"" Dec 03 13:17:51 crc kubenswrapper[4986]: I1203 13:17:51.870590 4986 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbde4185-c1c7-47e8-b369-d23ff4d4092c-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 13:17:51 crc kubenswrapper[4986]: I1203 13:17:51.870603 4986 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6fe773b-71aa-4762-ad97-bfb9636906c2-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 13:17:51 crc kubenswrapper[4986]: I1203 13:17:51.870614 4986 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-dc9f8\" (UniqueName: \"kubernetes.io/projected/f6fe773b-71aa-4762-ad97-bfb9636906c2-kube-api-access-dc9f8\") on node \"crc\" DevicePath \"\"" Dec 03 13:17:51 crc kubenswrapper[4986]: I1203 13:17:51.870629 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6np6x\" (UniqueName: \"kubernetes.io/projected/cbde4185-c1c7-47e8-b369-d23ff4d4092c-kube-api-access-6np6x\") on node \"crc\" DevicePath \"\"" Dec 03 13:17:51 crc kubenswrapper[4986]: I1203 13:17:51.870641 4986 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb71e9c0-6c77-4e47-8b4d-8e77a05cca59-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 13:17:52 crc kubenswrapper[4986]: I1203 13:17:52.758759 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe","Type":"ContainerStarted","Data":"d01d92855bacef0205f69732e864f71750ec758af99237ea15f8395453d94078"} Dec 03 13:17:52 crc kubenswrapper[4986]: I1203 13:17:52.762599 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-3a26-account-create-update-6d5g6" Dec 03 13:17:52 crc kubenswrapper[4986]: I1203 13:17:52.762640 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-r7nf5" Dec 03 13:17:52 crc kubenswrapper[4986]: I1203 13:17:52.762589 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-zbccr" event={"ID":"3efb58d2-ea87-4349-a581-519e1b458a37","Type":"ContainerStarted","Data":"af08236fb2a6423e2ffea78204750043b885a47204fb99a7e8a9956c7d5d0a68"} Dec 03 13:17:52 crc kubenswrapper[4986]: I1203 13:17:52.787807 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-zbccr" podStartSLOduration=3.508598856 podStartE2EDuration="24.7877883s" podCreationTimestamp="2025-12-03 13:17:28 +0000 UTC" firstStartedPulling="2025-12-03 13:17:30.186424431 +0000 UTC m=+1309.652855622" lastFinishedPulling="2025-12-03 13:17:51.465613865 +0000 UTC m=+1330.932045066" observedRunningTime="2025-12-03 13:17:52.78332117 +0000 UTC m=+1332.249752361" watchObservedRunningTime="2025-12-03 13:17:52.7877883 +0000 UTC m=+1332.254219491" Dec 03 13:17:53 crc kubenswrapper[4986]: I1203 13:17:53.777932 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe","Type":"ContainerStarted","Data":"4f5c84282e7004784e7493778cf714736610e892b8a949aa7d92d46609edc6da"} Dec 03 13:17:53 crc kubenswrapper[4986]: I1203 13:17:53.778347 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe","Type":"ContainerStarted","Data":"53793c5af53a23cec3fdbfb6feaeef577db7f8fae25f42168c3b4dfaeebd8096"} Dec 03 13:17:53 crc kubenswrapper[4986]: I1203 13:17:53.778365 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe","Type":"ContainerStarted","Data":"7423583ff407373033de9a1e4df21edf8cae84c4aaf5d9b45e34f174023ddf48"} Dec 03 13:17:54 crc kubenswrapper[4986]: I1203 13:17:54.792520 4986 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/swift-storage-0" event={"ID":"cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe","Type":"ContainerStarted","Data":"89755f9735adf1e26e2ca0bad1b9897ffc2383181f9ecbc0b33d56a9ae3f82d0"} Dec 03 13:17:54 crc kubenswrapper[4986]: I1203 13:17:54.792576 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe","Type":"ContainerStarted","Data":"25dd097baf43f139e8b16ba6911d6b7937df62fea7f10237fda08a52ff711d31"} Dec 03 13:17:55 crc kubenswrapper[4986]: I1203 13:17:55.807150 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe","Type":"ContainerStarted","Data":"92a4e7feb3180a776fd96a6164f8a877dd1747cc83b53472b4da0b033267314a"} Dec 03 13:17:55 crc kubenswrapper[4986]: I1203 13:17:55.808258 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe","Type":"ContainerStarted","Data":"85a280975bcabde295fa68c72288926b7ee6d2ab8a76ac70d6d0c43e36eca89e"} Dec 03 13:17:55 crc kubenswrapper[4986]: I1203 13:17:55.847424 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=37.032403617 podStartE2EDuration="1m2.847403627s" podCreationTimestamp="2025-12-03 13:16:53 +0000 UTC" firstStartedPulling="2025-12-03 13:17:27.374064289 +0000 UTC m=+1306.840495470" lastFinishedPulling="2025-12-03 13:17:53.189064299 +0000 UTC m=+1332.655495480" observedRunningTime="2025-12-03 13:17:55.843340687 +0000 UTC m=+1335.309771898" watchObservedRunningTime="2025-12-03 13:17:55.847403627 +0000 UTC m=+1335.313834818" Dec 03 13:17:56 crc kubenswrapper[4986]: I1203 13:17:56.268825 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-ttjvc"] Dec 03 13:17:56 crc kubenswrapper[4986]: E1203 13:17:56.269331 4986 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="f6fe773b-71aa-4762-ad97-bfb9636906c2" containerName="mariadb-account-create-update" Dec 03 13:17:56 crc kubenswrapper[4986]: I1203 13:17:56.269347 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6fe773b-71aa-4762-ad97-bfb9636906c2" containerName="mariadb-account-create-update" Dec 03 13:17:56 crc kubenswrapper[4986]: E1203 13:17:56.269367 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb71e9c0-6c77-4e47-8b4d-8e77a05cca59" containerName="mariadb-database-create" Dec 03 13:17:56 crc kubenswrapper[4986]: I1203 13:17:56.269376 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb71e9c0-6c77-4e47-8b4d-8e77a05cca59" containerName="mariadb-database-create" Dec 03 13:17:56 crc kubenswrapper[4986]: E1203 13:17:56.269388 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97bbae24-c2a8-4d37-bcec-2d934bdf4cea" containerName="mariadb-database-create" Dec 03 13:17:56 crc kubenswrapper[4986]: I1203 13:17:56.269397 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="97bbae24-c2a8-4d37-bcec-2d934bdf4cea" containerName="mariadb-database-create" Dec 03 13:17:56 crc kubenswrapper[4986]: E1203 13:17:56.269412 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f130f3db-61e7-49d0-ab61-3f3d16349860" containerName="mariadb-database-create" Dec 03 13:17:56 crc kubenswrapper[4986]: I1203 13:17:56.269420 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="f130f3db-61e7-49d0-ab61-3f3d16349860" containerName="mariadb-database-create" Dec 03 13:17:56 crc kubenswrapper[4986]: E1203 13:17:56.269446 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0559f774-91cb-40f6-b047-15591ea39ebe" containerName="mariadb-account-create-update" Dec 03 13:17:56 crc kubenswrapper[4986]: I1203 13:17:56.269454 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="0559f774-91cb-40f6-b047-15591ea39ebe" containerName="mariadb-account-create-update" Dec 03 13:17:56 crc 
kubenswrapper[4986]: E1203 13:17:56.269469 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbde4185-c1c7-47e8-b369-d23ff4d4092c" containerName="mariadb-account-create-update" Dec 03 13:17:56 crc kubenswrapper[4986]: I1203 13:17:56.269477 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbde4185-c1c7-47e8-b369-d23ff4d4092c" containerName="mariadb-account-create-update" Dec 03 13:17:56 crc kubenswrapper[4986]: I1203 13:17:56.269647 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="0559f774-91cb-40f6-b047-15591ea39ebe" containerName="mariadb-account-create-update" Dec 03 13:17:56 crc kubenswrapper[4986]: I1203 13:17:56.269678 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="f130f3db-61e7-49d0-ab61-3f3d16349860" containerName="mariadb-database-create" Dec 03 13:17:56 crc kubenswrapper[4986]: I1203 13:17:56.269690 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="97bbae24-c2a8-4d37-bcec-2d934bdf4cea" containerName="mariadb-database-create" Dec 03 13:17:56 crc kubenswrapper[4986]: I1203 13:17:56.269709 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb71e9c0-6c77-4e47-8b4d-8e77a05cca59" containerName="mariadb-database-create" Dec 03 13:17:56 crc kubenswrapper[4986]: I1203 13:17:56.269725 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbde4185-c1c7-47e8-b369-d23ff4d4092c" containerName="mariadb-account-create-update" Dec 03 13:17:56 crc kubenswrapper[4986]: I1203 13:17:56.269743 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6fe773b-71aa-4762-ad97-bfb9636906c2" containerName="mariadb-account-create-update" Dec 03 13:17:56 crc kubenswrapper[4986]: I1203 13:17:56.270698 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-ttjvc" Dec 03 13:17:56 crc kubenswrapper[4986]: I1203 13:17:56.277570 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Dec 03 13:17:56 crc kubenswrapper[4986]: I1203 13:17:56.293967 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-ttjvc"] Dec 03 13:17:56 crc kubenswrapper[4986]: I1203 13:17:56.446381 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a885abf-a7cf-4659-b2a9-3e67e924b67d-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-ttjvc\" (UID: \"3a885abf-a7cf-4659-b2a9-3e67e924b67d\") " pod="openstack/dnsmasq-dns-77585f5f8c-ttjvc" Dec 03 13:17:56 crc kubenswrapper[4986]: I1203 13:17:56.446436 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a885abf-a7cf-4659-b2a9-3e67e924b67d-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-ttjvc\" (UID: \"3a885abf-a7cf-4659-b2a9-3e67e924b67d\") " pod="openstack/dnsmasq-dns-77585f5f8c-ttjvc" Dec 03 13:17:56 crc kubenswrapper[4986]: I1203 13:17:56.446465 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3a885abf-a7cf-4659-b2a9-3e67e924b67d-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-ttjvc\" (UID: \"3a885abf-a7cf-4659-b2a9-3e67e924b67d\") " pod="openstack/dnsmasq-dns-77585f5f8c-ttjvc" Dec 03 13:17:56 crc kubenswrapper[4986]: I1203 13:17:56.446733 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzqf5\" (UniqueName: \"kubernetes.io/projected/3a885abf-a7cf-4659-b2a9-3e67e924b67d-kube-api-access-mzqf5\") pod \"dnsmasq-dns-77585f5f8c-ttjvc\" (UID: 
\"3a885abf-a7cf-4659-b2a9-3e67e924b67d\") " pod="openstack/dnsmasq-dns-77585f5f8c-ttjvc" Dec 03 13:17:56 crc kubenswrapper[4986]: I1203 13:17:56.446800 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a885abf-a7cf-4659-b2a9-3e67e924b67d-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-ttjvc\" (UID: \"3a885abf-a7cf-4659-b2a9-3e67e924b67d\") " pod="openstack/dnsmasq-dns-77585f5f8c-ttjvc" Dec 03 13:17:56 crc kubenswrapper[4986]: I1203 13:17:56.446996 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a885abf-a7cf-4659-b2a9-3e67e924b67d-config\") pod \"dnsmasq-dns-77585f5f8c-ttjvc\" (UID: \"3a885abf-a7cf-4659-b2a9-3e67e924b67d\") " pod="openstack/dnsmasq-dns-77585f5f8c-ttjvc" Dec 03 13:17:56 crc kubenswrapper[4986]: I1203 13:17:56.548559 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzqf5\" (UniqueName: \"kubernetes.io/projected/3a885abf-a7cf-4659-b2a9-3e67e924b67d-kube-api-access-mzqf5\") pod \"dnsmasq-dns-77585f5f8c-ttjvc\" (UID: \"3a885abf-a7cf-4659-b2a9-3e67e924b67d\") " pod="openstack/dnsmasq-dns-77585f5f8c-ttjvc" Dec 03 13:17:56 crc kubenswrapper[4986]: I1203 13:17:56.548956 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a885abf-a7cf-4659-b2a9-3e67e924b67d-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-ttjvc\" (UID: \"3a885abf-a7cf-4659-b2a9-3e67e924b67d\") " pod="openstack/dnsmasq-dns-77585f5f8c-ttjvc" Dec 03 13:17:56 crc kubenswrapper[4986]: I1203 13:17:56.549028 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a885abf-a7cf-4659-b2a9-3e67e924b67d-config\") pod \"dnsmasq-dns-77585f5f8c-ttjvc\" (UID: \"3a885abf-a7cf-4659-b2a9-3e67e924b67d\") " 
pod="openstack/dnsmasq-dns-77585f5f8c-ttjvc" Dec 03 13:17:56 crc kubenswrapper[4986]: I1203 13:17:56.549102 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a885abf-a7cf-4659-b2a9-3e67e924b67d-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-ttjvc\" (UID: \"3a885abf-a7cf-4659-b2a9-3e67e924b67d\") " pod="openstack/dnsmasq-dns-77585f5f8c-ttjvc" Dec 03 13:17:56 crc kubenswrapper[4986]: I1203 13:17:56.549131 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a885abf-a7cf-4659-b2a9-3e67e924b67d-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-ttjvc\" (UID: \"3a885abf-a7cf-4659-b2a9-3e67e924b67d\") " pod="openstack/dnsmasq-dns-77585f5f8c-ttjvc" Dec 03 13:17:56 crc kubenswrapper[4986]: I1203 13:17:56.549157 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3a885abf-a7cf-4659-b2a9-3e67e924b67d-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-ttjvc\" (UID: \"3a885abf-a7cf-4659-b2a9-3e67e924b67d\") " pod="openstack/dnsmasq-dns-77585f5f8c-ttjvc" Dec 03 13:17:56 crc kubenswrapper[4986]: I1203 13:17:56.550097 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3a885abf-a7cf-4659-b2a9-3e67e924b67d-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-ttjvc\" (UID: \"3a885abf-a7cf-4659-b2a9-3e67e924b67d\") " pod="openstack/dnsmasq-dns-77585f5f8c-ttjvc" Dec 03 13:17:56 crc kubenswrapper[4986]: I1203 13:17:56.550118 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a885abf-a7cf-4659-b2a9-3e67e924b67d-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-ttjvc\" (UID: \"3a885abf-a7cf-4659-b2a9-3e67e924b67d\") " pod="openstack/dnsmasq-dns-77585f5f8c-ttjvc" Dec 03 13:17:56 crc 
kubenswrapper[4986]: I1203 13:17:56.550701 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a885abf-a7cf-4659-b2a9-3e67e924b67d-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-ttjvc\" (UID: \"3a885abf-a7cf-4659-b2a9-3e67e924b67d\") " pod="openstack/dnsmasq-dns-77585f5f8c-ttjvc" Dec 03 13:17:56 crc kubenswrapper[4986]: I1203 13:17:56.551171 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a885abf-a7cf-4659-b2a9-3e67e924b67d-config\") pod \"dnsmasq-dns-77585f5f8c-ttjvc\" (UID: \"3a885abf-a7cf-4659-b2a9-3e67e924b67d\") " pod="openstack/dnsmasq-dns-77585f5f8c-ttjvc" Dec 03 13:17:56 crc kubenswrapper[4986]: I1203 13:17:56.551367 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a885abf-a7cf-4659-b2a9-3e67e924b67d-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-ttjvc\" (UID: \"3a885abf-a7cf-4659-b2a9-3e67e924b67d\") " pod="openstack/dnsmasq-dns-77585f5f8c-ttjvc" Dec 03 13:17:56 crc kubenswrapper[4986]: I1203 13:17:56.574534 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzqf5\" (UniqueName: \"kubernetes.io/projected/3a885abf-a7cf-4659-b2a9-3e67e924b67d-kube-api-access-mzqf5\") pod \"dnsmasq-dns-77585f5f8c-ttjvc\" (UID: \"3a885abf-a7cf-4659-b2a9-3e67e924b67d\") " pod="openstack/dnsmasq-dns-77585f5f8c-ttjvc" Dec 03 13:17:56 crc kubenswrapper[4986]: I1203 13:17:56.594915 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-ttjvc" Dec 03 13:17:56 crc kubenswrapper[4986]: I1203 13:17:56.823552 4986 generic.go:334] "Generic (PLEG): container finished" podID="3efb58d2-ea87-4349-a581-519e1b458a37" containerID="af08236fb2a6423e2ffea78204750043b885a47204fb99a7e8a9956c7d5d0a68" exitCode=0 Dec 03 13:17:56 crc kubenswrapper[4986]: I1203 13:17:56.823655 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-zbccr" event={"ID":"3efb58d2-ea87-4349-a581-519e1b458a37","Type":"ContainerDied","Data":"af08236fb2a6423e2ffea78204750043b885a47204fb99a7e8a9956c7d5d0a68"} Dec 03 13:17:57 crc kubenswrapper[4986]: I1203 13:17:57.086641 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-ttjvc"] Dec 03 13:17:57 crc kubenswrapper[4986]: W1203 13:17:57.098436 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a885abf_a7cf_4659_b2a9_3e67e924b67d.slice/crio-f87b5fbb3ce592211d0c11e1b02a46e486eeb207287cb939aa9eda3de08a8ab2 WatchSource:0}: Error finding container f87b5fbb3ce592211d0c11e1b02a46e486eeb207287cb939aa9eda3de08a8ab2: Status 404 returned error can't find the container with id f87b5fbb3ce592211d0c11e1b02a46e486eeb207287cb939aa9eda3de08a8ab2 Dec 03 13:17:57 crc kubenswrapper[4986]: I1203 13:17:57.832059 4986 generic.go:334] "Generic (PLEG): container finished" podID="3a885abf-a7cf-4659-b2a9-3e67e924b67d" containerID="7f5d5779fab6946efccc683d37071c8c366acfed4789b8871425a84ed54bf747" exitCode=0 Dec 03 13:17:57 crc kubenswrapper[4986]: I1203 13:17:57.832125 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-ttjvc" event={"ID":"3a885abf-a7cf-4659-b2a9-3e67e924b67d","Type":"ContainerDied","Data":"7f5d5779fab6946efccc683d37071c8c366acfed4789b8871425a84ed54bf747"} Dec 03 13:17:57 crc kubenswrapper[4986]: I1203 13:17:57.832413 4986 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-ttjvc" event={"ID":"3a885abf-a7cf-4659-b2a9-3e67e924b67d","Type":"ContainerStarted","Data":"f87b5fbb3ce592211d0c11e1b02a46e486eeb207287cb939aa9eda3de08a8ab2"} Dec 03 13:17:58 crc kubenswrapper[4986]: I1203 13:17:58.173378 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-zbccr" Dec 03 13:17:58 crc kubenswrapper[4986]: I1203 13:17:58.282840 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5ntn\" (UniqueName: \"kubernetes.io/projected/3efb58d2-ea87-4349-a581-519e1b458a37-kube-api-access-x5ntn\") pod \"3efb58d2-ea87-4349-a581-519e1b458a37\" (UID: \"3efb58d2-ea87-4349-a581-519e1b458a37\") " Dec 03 13:17:58 crc kubenswrapper[4986]: I1203 13:17:58.282951 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3efb58d2-ea87-4349-a581-519e1b458a37-combined-ca-bundle\") pod \"3efb58d2-ea87-4349-a581-519e1b458a37\" (UID: \"3efb58d2-ea87-4349-a581-519e1b458a37\") " Dec 03 13:17:58 crc kubenswrapper[4986]: I1203 13:17:58.283018 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3efb58d2-ea87-4349-a581-519e1b458a37-config-data\") pod \"3efb58d2-ea87-4349-a581-519e1b458a37\" (UID: \"3efb58d2-ea87-4349-a581-519e1b458a37\") " Dec 03 13:17:58 crc kubenswrapper[4986]: I1203 13:17:58.287571 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3efb58d2-ea87-4349-a581-519e1b458a37-kube-api-access-x5ntn" (OuterVolumeSpecName: "kube-api-access-x5ntn") pod "3efb58d2-ea87-4349-a581-519e1b458a37" (UID: "3efb58d2-ea87-4349-a581-519e1b458a37"). InnerVolumeSpecName "kube-api-access-x5ntn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:17:58 crc kubenswrapper[4986]: I1203 13:17:58.306828 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3efb58d2-ea87-4349-a581-519e1b458a37-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3efb58d2-ea87-4349-a581-519e1b458a37" (UID: "3efb58d2-ea87-4349-a581-519e1b458a37"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:17:58 crc kubenswrapper[4986]: I1203 13:17:58.325081 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3efb58d2-ea87-4349-a581-519e1b458a37-config-data" (OuterVolumeSpecName: "config-data") pod "3efb58d2-ea87-4349-a581-519e1b458a37" (UID: "3efb58d2-ea87-4349-a581-519e1b458a37"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:17:58 crc kubenswrapper[4986]: I1203 13:17:58.385038 4986 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3efb58d2-ea87-4349-a581-519e1b458a37-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 13:17:58 crc kubenswrapper[4986]: I1203 13:17:58.385917 4986 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3efb58d2-ea87-4349-a581-519e1b458a37-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 13:17:58 crc kubenswrapper[4986]: I1203 13:17:58.385930 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5ntn\" (UniqueName: \"kubernetes.io/projected/3efb58d2-ea87-4349-a581-519e1b458a37-kube-api-access-x5ntn\") on node \"crc\" DevicePath \"\"" Dec 03 13:17:58 crc kubenswrapper[4986]: I1203 13:17:58.843338 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-zbccr" 
event={"ID":"3efb58d2-ea87-4349-a581-519e1b458a37","Type":"ContainerDied","Data":"b7f90d14a477c17bd20426c5ea96d130fd63848dfafad64027b0049901671feb"} Dec 03 13:17:58 crc kubenswrapper[4986]: I1203 13:17:58.843443 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7f90d14a477c17bd20426c5ea96d130fd63848dfafad64027b0049901671feb" Dec 03 13:17:58 crc kubenswrapper[4986]: I1203 13:17:58.843366 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-zbccr" Dec 03 13:17:58 crc kubenswrapper[4986]: I1203 13:17:58.845837 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-ttjvc" event={"ID":"3a885abf-a7cf-4659-b2a9-3e67e924b67d","Type":"ContainerStarted","Data":"579623730bd74c24a30f76b117795af0b3e6d1d9cce3fc7b664a88fd24983934"} Dec 03 13:17:58 crc kubenswrapper[4986]: I1203 13:17:58.846095 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77585f5f8c-ttjvc" Dec 03 13:17:58 crc kubenswrapper[4986]: I1203 13:17:58.894856 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77585f5f8c-ttjvc" podStartSLOduration=2.894834615 podStartE2EDuration="2.894834615s" podCreationTimestamp="2025-12-03 13:17:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:17:58.888839483 +0000 UTC m=+1338.355270684" watchObservedRunningTime="2025-12-03 13:17:58.894834615 +0000 UTC m=+1338.361265816" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.174804 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-vtwjw"] Dec 03 13:17:59 crc kubenswrapper[4986]: E1203 13:17:59.175258 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3efb58d2-ea87-4349-a581-519e1b458a37" containerName="keystone-db-sync" Dec 03 13:17:59 crc 
kubenswrapper[4986]: I1203 13:17:59.175297 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="3efb58d2-ea87-4349-a581-519e1b458a37" containerName="keystone-db-sync" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.175523 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="3efb58d2-ea87-4349-a581-519e1b458a37" containerName="keystone-db-sync" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.176140 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vtwjw" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.179773 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.179846 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.180050 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.180727 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-mtjcz" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.180922 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.199368 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/88108c63-cf5e-4373-85d1-80d5803bb7c9-fernet-keys\") pod \"keystone-bootstrap-vtwjw\" (UID: \"88108c63-cf5e-4373-85d1-80d5803bb7c9\") " pod="openstack/keystone-bootstrap-vtwjw" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.199837 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/88108c63-cf5e-4373-85d1-80d5803bb7c9-scripts\") pod \"keystone-bootstrap-vtwjw\" (UID: \"88108c63-cf5e-4373-85d1-80d5803bb7c9\") " pod="openstack/keystone-bootstrap-vtwjw" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.199857 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88108c63-cf5e-4373-85d1-80d5803bb7c9-config-data\") pod \"keystone-bootstrap-vtwjw\" (UID: \"88108c63-cf5e-4373-85d1-80d5803bb7c9\") " pod="openstack/keystone-bootstrap-vtwjw" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.199883 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88108c63-cf5e-4373-85d1-80d5803bb7c9-combined-ca-bundle\") pod \"keystone-bootstrap-vtwjw\" (UID: \"88108c63-cf5e-4373-85d1-80d5803bb7c9\") " pod="openstack/keystone-bootstrap-vtwjw" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.199923 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/88108c63-cf5e-4373-85d1-80d5803bb7c9-credential-keys\") pod \"keystone-bootstrap-vtwjw\" (UID: \"88108c63-cf5e-4373-85d1-80d5803bb7c9\") " pod="openstack/keystone-bootstrap-vtwjw" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.199953 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt5rp\" (UniqueName: \"kubernetes.io/projected/88108c63-cf5e-4373-85d1-80d5803bb7c9-kube-api-access-bt5rp\") pod \"keystone-bootstrap-vtwjw\" (UID: \"88108c63-cf5e-4373-85d1-80d5803bb7c9\") " pod="openstack/keystone-bootstrap-vtwjw" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.203354 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vtwjw"] Dec 03 13:17:59 crc 
kubenswrapper[4986]: I1203 13:17:59.234669 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-ttjvc"] Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.286120 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-f5m5w"] Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.287948 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-f5m5w" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.293786 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-f5m5w"] Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.301454 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c-ovsdbserver-nb\") pod \"dnsmasq-dns-55fff446b9-f5m5w\" (UID: \"115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c\") " pod="openstack/dnsmasq-dns-55fff446b9-f5m5w" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.301521 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b85q\" (UniqueName: \"kubernetes.io/projected/115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c-kube-api-access-8b85q\") pod \"dnsmasq-dns-55fff446b9-f5m5w\" (UID: \"115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c\") " pod="openstack/dnsmasq-dns-55fff446b9-f5m5w" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.301556 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/88108c63-cf5e-4373-85d1-80d5803bb7c9-fernet-keys\") pod \"keystone-bootstrap-vtwjw\" (UID: \"88108c63-cf5e-4373-85d1-80d5803bb7c9\") " pod="openstack/keystone-bootstrap-vtwjw" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.301619 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c-dns-swift-storage-0\") pod \"dnsmasq-dns-55fff446b9-f5m5w\" (UID: \"115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c\") " pod="openstack/dnsmasq-dns-55fff446b9-f5m5w" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.301660 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88108c63-cf5e-4373-85d1-80d5803bb7c9-scripts\") pod \"keystone-bootstrap-vtwjw\" (UID: \"88108c63-cf5e-4373-85d1-80d5803bb7c9\") " pod="openstack/keystone-bootstrap-vtwjw" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.301677 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88108c63-cf5e-4373-85d1-80d5803bb7c9-config-data\") pod \"keystone-bootstrap-vtwjw\" (UID: \"88108c63-cf5e-4373-85d1-80d5803bb7c9\") " pod="openstack/keystone-bootstrap-vtwjw" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.301694 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c-config\") pod \"dnsmasq-dns-55fff446b9-f5m5w\" (UID: \"115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c\") " pod="openstack/dnsmasq-dns-55fff446b9-f5m5w" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.301715 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88108c63-cf5e-4373-85d1-80d5803bb7c9-combined-ca-bundle\") pod \"keystone-bootstrap-vtwjw\" (UID: \"88108c63-cf5e-4373-85d1-80d5803bb7c9\") " pod="openstack/keystone-bootstrap-vtwjw" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.301740 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c-ovsdbserver-sb\") pod \"dnsmasq-dns-55fff446b9-f5m5w\" (UID: \"115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c\") " pod="openstack/dnsmasq-dns-55fff446b9-f5m5w" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.301763 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c-dns-svc\") pod \"dnsmasq-dns-55fff446b9-f5m5w\" (UID: \"115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c\") " pod="openstack/dnsmasq-dns-55fff446b9-f5m5w" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.301784 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/88108c63-cf5e-4373-85d1-80d5803bb7c9-credential-keys\") pod \"keystone-bootstrap-vtwjw\" (UID: \"88108c63-cf5e-4373-85d1-80d5803bb7c9\") " pod="openstack/keystone-bootstrap-vtwjw" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.301810 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bt5rp\" (UniqueName: \"kubernetes.io/projected/88108c63-cf5e-4373-85d1-80d5803bb7c9-kube-api-access-bt5rp\") pod \"keystone-bootstrap-vtwjw\" (UID: \"88108c63-cf5e-4373-85d1-80d5803bb7c9\") " pod="openstack/keystone-bootstrap-vtwjw" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.310087 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88108c63-cf5e-4373-85d1-80d5803bb7c9-scripts\") pod \"keystone-bootstrap-vtwjw\" (UID: \"88108c63-cf5e-4373-85d1-80d5803bb7c9\") " pod="openstack/keystone-bootstrap-vtwjw" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.313884 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/88108c63-cf5e-4373-85d1-80d5803bb7c9-credential-keys\") pod 
\"keystone-bootstrap-vtwjw\" (UID: \"88108c63-cf5e-4373-85d1-80d5803bb7c9\") " pod="openstack/keystone-bootstrap-vtwjw" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.317629 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88108c63-cf5e-4373-85d1-80d5803bb7c9-combined-ca-bundle\") pod \"keystone-bootstrap-vtwjw\" (UID: \"88108c63-cf5e-4373-85d1-80d5803bb7c9\") " pod="openstack/keystone-bootstrap-vtwjw" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.319049 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88108c63-cf5e-4373-85d1-80d5803bb7c9-config-data\") pod \"keystone-bootstrap-vtwjw\" (UID: \"88108c63-cf5e-4373-85d1-80d5803bb7c9\") " pod="openstack/keystone-bootstrap-vtwjw" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.322308 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/88108c63-cf5e-4373-85d1-80d5803bb7c9-fernet-keys\") pod \"keystone-bootstrap-vtwjw\" (UID: \"88108c63-cf5e-4373-85d1-80d5803bb7c9\") " pod="openstack/keystone-bootstrap-vtwjw" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.334431 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bt5rp\" (UniqueName: \"kubernetes.io/projected/88108c63-cf5e-4373-85d1-80d5803bb7c9-kube-api-access-bt5rp\") pod \"keystone-bootstrap-vtwjw\" (UID: \"88108c63-cf5e-4373-85d1-80d5803bb7c9\") " pod="openstack/keystone-bootstrap-vtwjw" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.401345 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-4s25c"] Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.402603 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c-ovsdbserver-sb\") pod \"dnsmasq-dns-55fff446b9-f5m5w\" (UID: \"115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c\") " pod="openstack/dnsmasq-dns-55fff446b9-f5m5w" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.404007 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c-dns-svc\") pod \"dnsmasq-dns-55fff446b9-f5m5w\" (UID: \"115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c\") " pod="openstack/dnsmasq-dns-55fff446b9-f5m5w" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.403533 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c-ovsdbserver-sb\") pod \"dnsmasq-dns-55fff446b9-f5m5w\" (UID: \"115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c\") " pod="openstack/dnsmasq-dns-55fff446b9-f5m5w" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.404172 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c-ovsdbserver-nb\") pod \"dnsmasq-dns-55fff446b9-f5m5w\" (UID: \"115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c\") " pod="openstack/dnsmasq-dns-55fff446b9-f5m5w" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.404223 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8b85q\" (UniqueName: \"kubernetes.io/projected/115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c-kube-api-access-8b85q\") pod \"dnsmasq-dns-55fff446b9-f5m5w\" (UID: \"115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c\") " pod="openstack/dnsmasq-dns-55fff446b9-f5m5w" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.404597 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-4s25c" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.404812 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c-dns-svc\") pod \"dnsmasq-dns-55fff446b9-f5m5w\" (UID: \"115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c\") " pod="openstack/dnsmasq-dns-55fff446b9-f5m5w" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.405037 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c-dns-swift-storage-0\") pod \"dnsmasq-dns-55fff446b9-f5m5w\" (UID: \"115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c\") " pod="openstack/dnsmasq-dns-55fff446b9-f5m5w" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.405146 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c-config\") pod \"dnsmasq-dns-55fff446b9-f5m5w\" (UID: \"115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c\") " pod="openstack/dnsmasq-dns-55fff446b9-f5m5w" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.405754 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c-ovsdbserver-nb\") pod \"dnsmasq-dns-55fff446b9-f5m5w\" (UID: \"115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c\") " pod="openstack/dnsmasq-dns-55fff446b9-f5m5w" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.405926 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c-config\") pod \"dnsmasq-dns-55fff446b9-f5m5w\" (UID: \"115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c\") " pod="openstack/dnsmasq-dns-55fff446b9-f5m5w" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 
13:17:59.409664 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-4s25c"] Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.411003 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c-dns-swift-storage-0\") pod \"dnsmasq-dns-55fff446b9-f5m5w\" (UID: \"115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c\") " pod="openstack/dnsmasq-dns-55fff446b9-f5m5w" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.415717 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6568745857-wp84p"] Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.416669 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.416991 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6568745857-wp84p" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.418453 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-gclqm" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.418548 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.427123 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.427631 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.427781 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-gnrv9" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.427974 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Dec 
03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.442180 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6568745857-wp84p"] Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.465972 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-qpv6c"] Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.467071 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b85q\" (UniqueName: \"kubernetes.io/projected/115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c-kube-api-access-8b85q\") pod \"dnsmasq-dns-55fff446b9-f5m5w\" (UID: \"115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c\") " pod="openstack/dnsmasq-dns-55fff446b9-f5m5w" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.481102 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-qpv6c" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.498957 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vtwjw" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.506252 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/86290538-808d-4761-a2c6-d9b9509ec364-horizon-secret-key\") pod \"horizon-6568745857-wp84p\" (UID: \"86290538-808d-4761-a2c6-d9b9509ec364\") " pod="openstack/horizon-6568745857-wp84p" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.506323 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nxn8\" (UniqueName: \"kubernetes.io/projected/7b8ed42a-89ce-4098-9489-5291e678bf18-kube-api-access-7nxn8\") pod \"cinder-db-sync-qpv6c\" (UID: \"7b8ed42a-89ce-4098-9489-5291e678bf18\") " pod="openstack/cinder-db-sync-qpv6c" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.506359 4986 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/86290538-808d-4761-a2c6-d9b9509ec364-scripts\") pod \"horizon-6568745857-wp84p\" (UID: \"86290538-808d-4761-a2c6-d9b9509ec364\") " pod="openstack/horizon-6568745857-wp84p" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.506386 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfnrd\" (UniqueName: \"kubernetes.io/projected/86290538-808d-4761-a2c6-d9b9509ec364-kube-api-access-wfnrd\") pod \"horizon-6568745857-wp84p\" (UID: \"86290538-808d-4761-a2c6-d9b9509ec364\") " pod="openstack/horizon-6568745857-wp84p" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.506403 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/14924968-6b9f-4a33-b504-dbfd64956b30-config\") pod \"neutron-db-sync-4s25c\" (UID: \"14924968-6b9f-4a33-b504-dbfd64956b30\") " pod="openstack/neutron-db-sync-4s25c" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.506421 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7b8ed42a-89ce-4098-9489-5291e678bf18-db-sync-config-data\") pod \"cinder-db-sync-qpv6c\" (UID: \"7b8ed42a-89ce-4098-9489-5291e678bf18\") " pod="openstack/cinder-db-sync-qpv6c" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.506442 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7b8ed42a-89ce-4098-9489-5291e678bf18-etc-machine-id\") pod \"cinder-db-sync-qpv6c\" (UID: \"7b8ed42a-89ce-4098-9489-5291e678bf18\") " pod="openstack/cinder-db-sync-qpv6c" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.506464 4986 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86290538-808d-4761-a2c6-d9b9509ec364-logs\") pod \"horizon-6568745857-wp84p\" (UID: \"86290538-808d-4761-a2c6-d9b9509ec364\") " pod="openstack/horizon-6568745857-wp84p" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.506502 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/86290538-808d-4761-a2c6-d9b9509ec364-config-data\") pod \"horizon-6568745857-wp84p\" (UID: \"86290538-808d-4761-a2c6-d9b9509ec364\") " pod="openstack/horizon-6568745857-wp84p" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.506532 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b8ed42a-89ce-4098-9489-5291e678bf18-combined-ca-bundle\") pod \"cinder-db-sync-qpv6c\" (UID: \"7b8ed42a-89ce-4098-9489-5291e678bf18\") " pod="openstack/cinder-db-sync-qpv6c" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.506549 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plvns\" (UniqueName: \"kubernetes.io/projected/14924968-6b9f-4a33-b504-dbfd64956b30-kube-api-access-plvns\") pod \"neutron-db-sync-4s25c\" (UID: \"14924968-6b9f-4a33-b504-dbfd64956b30\") " pod="openstack/neutron-db-sync-4s25c" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.506564 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b8ed42a-89ce-4098-9489-5291e678bf18-config-data\") pod \"cinder-db-sync-qpv6c\" (UID: \"7b8ed42a-89ce-4098-9489-5291e678bf18\") " pod="openstack/cinder-db-sync-qpv6c" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.506598 4986 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14924968-6b9f-4a33-b504-dbfd64956b30-combined-ca-bundle\") pod \"neutron-db-sync-4s25c\" (UID: \"14924968-6b9f-4a33-b504-dbfd64956b30\") " pod="openstack/neutron-db-sync-4s25c" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.506619 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b8ed42a-89ce-4098-9489-5291e678bf18-scripts\") pod \"cinder-db-sync-qpv6c\" (UID: \"7b8ed42a-89ce-4098-9489-5291e678bf18\") " pod="openstack/cinder-db-sync-qpv6c" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.536622 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.536808 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-trqh5" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.537802 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.538029 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-qpv6c"] Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.607553 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b8ed42a-89ce-4098-9489-5291e678bf18-combined-ca-bundle\") pod \"cinder-db-sync-qpv6c\" (UID: \"7b8ed42a-89ce-4098-9489-5291e678bf18\") " pod="openstack/cinder-db-sync-qpv6c" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.607604 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plvns\" (UniqueName: \"kubernetes.io/projected/14924968-6b9f-4a33-b504-dbfd64956b30-kube-api-access-plvns\") pod 
\"neutron-db-sync-4s25c\" (UID: \"14924968-6b9f-4a33-b504-dbfd64956b30\") " pod="openstack/neutron-db-sync-4s25c" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.607623 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b8ed42a-89ce-4098-9489-5291e678bf18-config-data\") pod \"cinder-db-sync-qpv6c\" (UID: \"7b8ed42a-89ce-4098-9489-5291e678bf18\") " pod="openstack/cinder-db-sync-qpv6c" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.607655 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14924968-6b9f-4a33-b504-dbfd64956b30-combined-ca-bundle\") pod \"neutron-db-sync-4s25c\" (UID: \"14924968-6b9f-4a33-b504-dbfd64956b30\") " pod="openstack/neutron-db-sync-4s25c" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.607678 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b8ed42a-89ce-4098-9489-5291e678bf18-scripts\") pod \"cinder-db-sync-qpv6c\" (UID: \"7b8ed42a-89ce-4098-9489-5291e678bf18\") " pod="openstack/cinder-db-sync-qpv6c" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.607710 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/86290538-808d-4761-a2c6-d9b9509ec364-horizon-secret-key\") pod \"horizon-6568745857-wp84p\" (UID: \"86290538-808d-4761-a2c6-d9b9509ec364\") " pod="openstack/horizon-6568745857-wp84p" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.607742 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nxn8\" (UniqueName: \"kubernetes.io/projected/7b8ed42a-89ce-4098-9489-5291e678bf18-kube-api-access-7nxn8\") pod \"cinder-db-sync-qpv6c\" (UID: \"7b8ed42a-89ce-4098-9489-5291e678bf18\") " pod="openstack/cinder-db-sync-qpv6c" Dec 03 
13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.607777 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/86290538-808d-4761-a2c6-d9b9509ec364-scripts\") pod \"horizon-6568745857-wp84p\" (UID: \"86290538-808d-4761-a2c6-d9b9509ec364\") " pod="openstack/horizon-6568745857-wp84p" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.607797 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfnrd\" (UniqueName: \"kubernetes.io/projected/86290538-808d-4761-a2c6-d9b9509ec364-kube-api-access-wfnrd\") pod \"horizon-6568745857-wp84p\" (UID: \"86290538-808d-4761-a2c6-d9b9509ec364\") " pod="openstack/horizon-6568745857-wp84p" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.607816 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/14924968-6b9f-4a33-b504-dbfd64956b30-config\") pod \"neutron-db-sync-4s25c\" (UID: \"14924968-6b9f-4a33-b504-dbfd64956b30\") " pod="openstack/neutron-db-sync-4s25c" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.607833 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7b8ed42a-89ce-4098-9489-5291e678bf18-db-sync-config-data\") pod \"cinder-db-sync-qpv6c\" (UID: \"7b8ed42a-89ce-4098-9489-5291e678bf18\") " pod="openstack/cinder-db-sync-qpv6c" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.607851 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7b8ed42a-89ce-4098-9489-5291e678bf18-etc-machine-id\") pod \"cinder-db-sync-qpv6c\" (UID: \"7b8ed42a-89ce-4098-9489-5291e678bf18\") " pod="openstack/cinder-db-sync-qpv6c" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.607874 4986 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86290538-808d-4761-a2c6-d9b9509ec364-logs\") pod \"horizon-6568745857-wp84p\" (UID: \"86290538-808d-4761-a2c6-d9b9509ec364\") " pod="openstack/horizon-6568745857-wp84p" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.607901 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/86290538-808d-4761-a2c6-d9b9509ec364-config-data\") pod \"horizon-6568745857-wp84p\" (UID: \"86290538-808d-4761-a2c6-d9b9509ec364\") " pod="openstack/horizon-6568745857-wp84p" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.609817 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/86290538-808d-4761-a2c6-d9b9509ec364-config-data\") pod \"horizon-6568745857-wp84p\" (UID: \"86290538-808d-4761-a2c6-d9b9509ec364\") " pod="openstack/horizon-6568745857-wp84p" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.610302 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/86290538-808d-4761-a2c6-d9b9509ec364-scripts\") pod \"horizon-6568745857-wp84p\" (UID: \"86290538-808d-4761-a2c6-d9b9509ec364\") " pod="openstack/horizon-6568745857-wp84p" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.617395 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7b8ed42a-89ce-4098-9489-5291e678bf18-etc-machine-id\") pod \"cinder-db-sync-qpv6c\" (UID: \"7b8ed42a-89ce-4098-9489-5291e678bf18\") " pod="openstack/cinder-db-sync-qpv6c" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.617740 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86290538-808d-4761-a2c6-d9b9509ec364-logs\") pod \"horizon-6568745857-wp84p\" (UID: 
\"86290538-808d-4761-a2c6-d9b9509ec364\") " pod="openstack/horizon-6568745857-wp84p" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.622584 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/14924968-6b9f-4a33-b504-dbfd64956b30-config\") pod \"neutron-db-sync-4s25c\" (UID: \"14924968-6b9f-4a33-b504-dbfd64956b30\") " pod="openstack/neutron-db-sync-4s25c" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.625772 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14924968-6b9f-4a33-b504-dbfd64956b30-combined-ca-bundle\") pod \"neutron-db-sync-4s25c\" (UID: \"14924968-6b9f-4a33-b504-dbfd64956b30\") " pod="openstack/neutron-db-sync-4s25c" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.626070 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7b8ed42a-89ce-4098-9489-5291e678bf18-db-sync-config-data\") pod \"cinder-db-sync-qpv6c\" (UID: \"7b8ed42a-89ce-4098-9489-5291e678bf18\") " pod="openstack/cinder-db-sync-qpv6c" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.626081 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b8ed42a-89ce-4098-9489-5291e678bf18-scripts\") pod \"cinder-db-sync-qpv6c\" (UID: \"7b8ed42a-89ce-4098-9489-5291e678bf18\") " pod="openstack/cinder-db-sync-qpv6c" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.627537 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/86290538-808d-4761-a2c6-d9b9509ec364-horizon-secret-key\") pod \"horizon-6568745857-wp84p\" (UID: \"86290538-808d-4761-a2c6-d9b9509ec364\") " pod="openstack/horizon-6568745857-wp84p" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.631921 4986 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b8ed42a-89ce-4098-9489-5291e678bf18-config-data\") pod \"cinder-db-sync-qpv6c\" (UID: \"7b8ed42a-89ce-4098-9489-5291e678bf18\") " pod="openstack/cinder-db-sync-qpv6c" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.638659 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b8ed42a-89ce-4098-9489-5291e678bf18-combined-ca-bundle\") pod \"cinder-db-sync-qpv6c\" (UID: \"7b8ed42a-89ce-4098-9489-5291e678bf18\") " pod="openstack/cinder-db-sync-qpv6c" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.641312 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nxn8\" (UniqueName: \"kubernetes.io/projected/7b8ed42a-89ce-4098-9489-5291e678bf18-kube-api-access-7nxn8\") pod \"cinder-db-sync-qpv6c\" (UID: \"7b8ed42a-89ce-4098-9489-5291e678bf18\") " pod="openstack/cinder-db-sync-qpv6c" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.703353 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.709664 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-f5m5w" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.744646 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfnrd\" (UniqueName: \"kubernetes.io/projected/86290538-808d-4761-a2c6-d9b9509ec364-kube-api-access-wfnrd\") pod \"horizon-6568745857-wp84p\" (UID: \"86290538-808d-4761-a2c6-d9b9509ec364\") " pod="openstack/horizon-6568745857-wp84p" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.749730 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.755613 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plvns\" (UniqueName: \"kubernetes.io/projected/14924968-6b9f-4a33-b504-dbfd64956b30-kube-api-access-plvns\") pod \"neutron-db-sync-4s25c\" (UID: \"14924968-6b9f-4a33-b504-dbfd64956b30\") " pod="openstack/neutron-db-sync-4s25c" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.758334 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.758588 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.779388 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6568745857-wp84p" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.804723 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-8v24s"] Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.848200 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-8v24s" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.873268 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.877714 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.877948 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.878474 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-gbnrj" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.902569 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/143eb6df-5711-4616-baaa-42417113bfed-run-httpd\") pod \"ceilometer-0\" (UID: \"143eb6df-5711-4616-baaa-42417113bfed\") " pod="openstack/ceilometer-0" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.902685 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/143eb6df-5711-4616-baaa-42417113bfed-log-httpd\") pod \"ceilometer-0\" (UID: \"143eb6df-5711-4616-baaa-42417113bfed\") " pod="openstack/ceilometer-0" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.902737 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/143eb6df-5711-4616-baaa-42417113bfed-scripts\") pod \"ceilometer-0\" (UID: \"143eb6df-5711-4616-baaa-42417113bfed\") " pod="openstack/ceilometer-0" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.902764 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/143eb6df-5711-4616-baaa-42417113bfed-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"143eb6df-5711-4616-baaa-42417113bfed\") " pod="openstack/ceilometer-0" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.902774 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-qpv6c" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.902826 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpz5g\" (UniqueName: \"kubernetes.io/projected/143eb6df-5711-4616-baaa-42417113bfed-kube-api-access-gpz5g\") pod \"ceilometer-0\" (UID: \"143eb6df-5711-4616-baaa-42417113bfed\") " pod="openstack/ceilometer-0" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.902849 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/143eb6df-5711-4616-baaa-42417113bfed-config-data\") pod \"ceilometer-0\" (UID: \"143eb6df-5711-4616-baaa-42417113bfed\") " pod="openstack/ceilometer-0" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.903359 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/143eb6df-5711-4616-baaa-42417113bfed-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"143eb6df-5711-4616-baaa-42417113bfed\") " pod="openstack/ceilometer-0" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.933325 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-8v24s"] Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.964720 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-2k7gk"] Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.967303 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-2k7gk" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.974997 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.981939 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-9675l" Dec 03 13:17:59 crc kubenswrapper[4986]: I1203 13:17:59.989181 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-2k7gk"] Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.002907 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-f5m5w"] Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.004771 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpz5g\" (UniqueName: \"kubernetes.io/projected/143eb6df-5711-4616-baaa-42417113bfed-kube-api-access-gpz5g\") pod \"ceilometer-0\" (UID: \"143eb6df-5711-4616-baaa-42417113bfed\") " pod="openstack/ceilometer-0" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.004823 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/143eb6df-5711-4616-baaa-42417113bfed-config-data\") pod \"ceilometer-0\" (UID: \"143eb6df-5711-4616-baaa-42417113bfed\") " pod="openstack/ceilometer-0" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.004893 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/143eb6df-5711-4616-baaa-42417113bfed-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"143eb6df-5711-4616-baaa-42417113bfed\") " pod="openstack/ceilometer-0" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.005025 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/143eb6df-5711-4616-baaa-42417113bfed-run-httpd\") pod \"ceilometer-0\" (UID: \"143eb6df-5711-4616-baaa-42417113bfed\") " pod="openstack/ceilometer-0" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.005109 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ca26c94-4546-4d5d-8600-30257c724198-scripts\") pod \"placement-db-sync-8v24s\" (UID: \"9ca26c94-4546-4d5d-8600-30257c724198\") " pod="openstack/placement-db-sync-8v24s" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.005137 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47cfw\" (UniqueName: \"kubernetes.io/projected/9ca26c94-4546-4d5d-8600-30257c724198-kube-api-access-47cfw\") pod \"placement-db-sync-8v24s\" (UID: \"9ca26c94-4546-4d5d-8600-30257c724198\") " pod="openstack/placement-db-sync-8v24s" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.005609 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/143eb6df-5711-4616-baaa-42417113bfed-run-httpd\") pod \"ceilometer-0\" (UID: \"143eb6df-5711-4616-baaa-42417113bfed\") " pod="openstack/ceilometer-0" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.006138 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/143eb6df-5711-4616-baaa-42417113bfed-log-httpd\") pod \"ceilometer-0\" (UID: \"143eb6df-5711-4616-baaa-42417113bfed\") " pod="openstack/ceilometer-0" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.006162 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ca26c94-4546-4d5d-8600-30257c724198-logs\") pod \"placement-db-sync-8v24s\" (UID: \"9ca26c94-4546-4d5d-8600-30257c724198\") " 
pod="openstack/placement-db-sync-8v24s" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.006189 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ca26c94-4546-4d5d-8600-30257c724198-config-data\") pod \"placement-db-sync-8v24s\" (UID: \"9ca26c94-4546-4d5d-8600-30257c724198\") " pod="openstack/placement-db-sync-8v24s" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.006233 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/143eb6df-5711-4616-baaa-42417113bfed-scripts\") pod \"ceilometer-0\" (UID: \"143eb6df-5711-4616-baaa-42417113bfed\") " pod="openstack/ceilometer-0" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.006248 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ca26c94-4546-4d5d-8600-30257c724198-combined-ca-bundle\") pod \"placement-db-sync-8v24s\" (UID: \"9ca26c94-4546-4d5d-8600-30257c724198\") " pod="openstack/placement-db-sync-8v24s" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.006267 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/143eb6df-5711-4616-baaa-42417113bfed-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"143eb6df-5711-4616-baaa-42417113bfed\") " pod="openstack/ceilometer-0" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.006707 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/143eb6df-5711-4616-baaa-42417113bfed-log-httpd\") pod \"ceilometer-0\" (UID: \"143eb6df-5711-4616-baaa-42417113bfed\") " pod="openstack/ceilometer-0" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.019561 4986 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/143eb6df-5711-4616-baaa-42417113bfed-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"143eb6df-5711-4616-baaa-42417113bfed\") " pod="openstack/ceilometer-0" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.021092 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/143eb6df-5711-4616-baaa-42417113bfed-config-data\") pod \"ceilometer-0\" (UID: \"143eb6df-5711-4616-baaa-42417113bfed\") " pod="openstack/ceilometer-0" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.021398 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/143eb6df-5711-4616-baaa-42417113bfed-scripts\") pod \"ceilometer-0\" (UID: \"143eb6df-5711-4616-baaa-42417113bfed\") " pod="openstack/ceilometer-0" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.031017 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/143eb6df-5711-4616-baaa-42417113bfed-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"143eb6df-5711-4616-baaa-42417113bfed\") " pod="openstack/ceilometer-0" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.039083 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-4s25c" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.039460 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpz5g\" (UniqueName: \"kubernetes.io/projected/143eb6df-5711-4616-baaa-42417113bfed-kube-api-access-gpz5g\") pod \"ceilometer-0\" (UID: \"143eb6df-5711-4616-baaa-42417113bfed\") " pod="openstack/ceilometer-0" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.046188 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-765d896c9f-95hgf"] Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.047875 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-765d896c9f-95hgf" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.071794 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-sxknq"] Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.073584 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76fcf4b695-sxknq" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.086450 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-765d896c9f-95hgf"] Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.110563 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ca26c94-4546-4d5d-8600-30257c724198-scripts\") pod \"placement-db-sync-8v24s\" (UID: \"9ca26c94-4546-4d5d-8600-30257c724198\") " pod="openstack/placement-db-sync-8v24s" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.110607 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47cfw\" (UniqueName: \"kubernetes.io/projected/9ca26c94-4546-4d5d-8600-30257c724198-kube-api-access-47cfw\") pod \"placement-db-sync-8v24s\" (UID: \"9ca26c94-4546-4d5d-8600-30257c724198\") " pod="openstack/placement-db-sync-8v24s" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.110641 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d560e389-543d-4341-a450-e6cb0f2a3057-combined-ca-bundle\") pod \"barbican-db-sync-2k7gk\" (UID: \"d560e389-543d-4341-a450-e6cb0f2a3057\") " pod="openstack/barbican-db-sync-2k7gk" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.110665 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ca26c94-4546-4d5d-8600-30257c724198-logs\") pod \"placement-db-sync-8v24s\" (UID: \"9ca26c94-4546-4d5d-8600-30257c724198\") " pod="openstack/placement-db-sync-8v24s" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.110688 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ca26c94-4546-4d5d-8600-30257c724198-config-data\") pod 
\"placement-db-sync-8v24s\" (UID: \"9ca26c94-4546-4d5d-8600-30257c724198\") " pod="openstack/placement-db-sync-8v24s" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.110724 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ca26c94-4546-4d5d-8600-30257c724198-combined-ca-bundle\") pod \"placement-db-sync-8v24s\" (UID: \"9ca26c94-4546-4d5d-8600-30257c724198\") " pod="openstack/placement-db-sync-8v24s" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.110761 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d560e389-543d-4341-a450-e6cb0f2a3057-db-sync-config-data\") pod \"barbican-db-sync-2k7gk\" (UID: \"d560e389-543d-4341-a450-e6cb0f2a3057\") " pod="openstack/barbican-db-sync-2k7gk" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.110805 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r7v6\" (UniqueName: \"kubernetes.io/projected/d560e389-543d-4341-a450-e6cb0f2a3057-kube-api-access-8r7v6\") pod \"barbican-db-sync-2k7gk\" (UID: \"d560e389-543d-4341-a450-e6cb0f2a3057\") " pod="openstack/barbican-db-sync-2k7gk" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.111133 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ca26c94-4546-4d5d-8600-30257c724198-logs\") pod \"placement-db-sync-8v24s\" (UID: \"9ca26c94-4546-4d5d-8600-30257c724198\") " pod="openstack/placement-db-sync-8v24s" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.123971 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ca26c94-4546-4d5d-8600-30257c724198-config-data\") pod \"placement-db-sync-8v24s\" (UID: \"9ca26c94-4546-4d5d-8600-30257c724198\") " 
pod="openstack/placement-db-sync-8v24s" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.124274 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-sxknq"] Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.125366 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ca26c94-4546-4d5d-8600-30257c724198-combined-ca-bundle\") pod \"placement-db-sync-8v24s\" (UID: \"9ca26c94-4546-4d5d-8600-30257c724198\") " pod="openstack/placement-db-sync-8v24s" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.125808 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ca26c94-4546-4d5d-8600-30257c724198-scripts\") pod \"placement-db-sync-8v24s\" (UID: \"9ca26c94-4546-4d5d-8600-30257c724198\") " pod="openstack/placement-db-sync-8v24s" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.130940 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47cfw\" (UniqueName: \"kubernetes.io/projected/9ca26c94-4546-4d5d-8600-30257c724198-kube-api-access-47cfw\") pod \"placement-db-sync-8v24s\" (UID: \"9ca26c94-4546-4d5d-8600-30257c724198\") " pod="openstack/placement-db-sync-8v24s" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.213993 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d09ba1cc-e27c-4ea3-9f4a-2f7791afce95-ovsdbserver-nb\") pod \"dnsmasq-dns-76fcf4b695-sxknq\" (UID: \"d09ba1cc-e27c-4ea3-9f4a-2f7791afce95\") " pod="openstack/dnsmasq-dns-76fcf4b695-sxknq" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.214041 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d560e389-543d-4341-a450-e6cb0f2a3057-db-sync-config-data\") pod 
\"barbican-db-sync-2k7gk\" (UID: \"d560e389-543d-4341-a450-e6cb0f2a3057\") " pod="openstack/barbican-db-sync-2k7gk" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.214072 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb5c4\" (UniqueName: \"kubernetes.io/projected/d09ba1cc-e27c-4ea3-9f4a-2f7791afce95-kube-api-access-jb5c4\") pod \"dnsmasq-dns-76fcf4b695-sxknq\" (UID: \"d09ba1cc-e27c-4ea3-9f4a-2f7791afce95\") " pod="openstack/dnsmasq-dns-76fcf4b695-sxknq" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.214126 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1fddf481-1472-4685-b105-65c87a3b045a-horizon-secret-key\") pod \"horizon-765d896c9f-95hgf\" (UID: \"1fddf481-1472-4685-b105-65c87a3b045a\") " pod="openstack/horizon-765d896c9f-95hgf" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.214203 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r7v6\" (UniqueName: \"kubernetes.io/projected/d560e389-543d-4341-a450-e6cb0f2a3057-kube-api-access-8r7v6\") pod \"barbican-db-sync-2k7gk\" (UID: \"d560e389-543d-4341-a450-e6cb0f2a3057\") " pod="openstack/barbican-db-sync-2k7gk" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.214225 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d09ba1cc-e27c-4ea3-9f4a-2f7791afce95-config\") pod \"dnsmasq-dns-76fcf4b695-sxknq\" (UID: \"d09ba1cc-e27c-4ea3-9f4a-2f7791afce95\") " pod="openstack/dnsmasq-dns-76fcf4b695-sxknq" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.214245 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/d09ba1cc-e27c-4ea3-9f4a-2f7791afce95-ovsdbserver-sb\") pod \"dnsmasq-dns-76fcf4b695-sxknq\" (UID: \"d09ba1cc-e27c-4ea3-9f4a-2f7791afce95\") " pod="openstack/dnsmasq-dns-76fcf4b695-sxknq" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.214260 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v52th\" (UniqueName: \"kubernetes.io/projected/1fddf481-1472-4685-b105-65c87a3b045a-kube-api-access-v52th\") pod \"horizon-765d896c9f-95hgf\" (UID: \"1fddf481-1472-4685-b105-65c87a3b045a\") " pod="openstack/horizon-765d896c9f-95hgf" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.214309 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d09ba1cc-e27c-4ea3-9f4a-2f7791afce95-dns-svc\") pod \"dnsmasq-dns-76fcf4b695-sxknq\" (UID: \"d09ba1cc-e27c-4ea3-9f4a-2f7791afce95\") " pod="openstack/dnsmasq-dns-76fcf4b695-sxknq" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.214328 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fddf481-1472-4685-b105-65c87a3b045a-logs\") pod \"horizon-765d896c9f-95hgf\" (UID: \"1fddf481-1472-4685-b105-65c87a3b045a\") " pod="openstack/horizon-765d896c9f-95hgf" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.214374 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1fddf481-1472-4685-b105-65c87a3b045a-config-data\") pod \"horizon-765d896c9f-95hgf\" (UID: \"1fddf481-1472-4685-b105-65c87a3b045a\") " pod="openstack/horizon-765d896c9f-95hgf" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.214416 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/d09ba1cc-e27c-4ea3-9f4a-2f7791afce95-dns-swift-storage-0\") pod \"dnsmasq-dns-76fcf4b695-sxknq\" (UID: \"d09ba1cc-e27c-4ea3-9f4a-2f7791afce95\") " pod="openstack/dnsmasq-dns-76fcf4b695-sxknq" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.214435 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1fddf481-1472-4685-b105-65c87a3b045a-scripts\") pod \"horizon-765d896c9f-95hgf\" (UID: \"1fddf481-1472-4685-b105-65c87a3b045a\") " pod="openstack/horizon-765d896c9f-95hgf" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.214455 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d560e389-543d-4341-a450-e6cb0f2a3057-combined-ca-bundle\") pod \"barbican-db-sync-2k7gk\" (UID: \"d560e389-543d-4341-a450-e6cb0f2a3057\") " pod="openstack/barbican-db-sync-2k7gk" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.216910 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d560e389-543d-4341-a450-e6cb0f2a3057-db-sync-config-data\") pod \"barbican-db-sync-2k7gk\" (UID: \"d560e389-543d-4341-a450-e6cb0f2a3057\") " pod="openstack/barbican-db-sync-2k7gk" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.223019 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d560e389-543d-4341-a450-e6cb0f2a3057-combined-ca-bundle\") pod \"barbican-db-sync-2k7gk\" (UID: \"d560e389-543d-4341-a450-e6cb0f2a3057\") " pod="openstack/barbican-db-sync-2k7gk" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.231431 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.231590 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r7v6\" (UniqueName: \"kubernetes.io/projected/d560e389-543d-4341-a450-e6cb0f2a3057-kube-api-access-8r7v6\") pod \"barbican-db-sync-2k7gk\" (UID: \"d560e389-543d-4341-a450-e6cb0f2a3057\") " pod="openstack/barbican-db-sync-2k7gk" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.241384 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-8v24s" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.304749 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-2k7gk" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.307015 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vtwjw"] Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.317356 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d09ba1cc-e27c-4ea3-9f4a-2f7791afce95-ovsdbserver-nb\") pod \"dnsmasq-dns-76fcf4b695-sxknq\" (UID: \"d09ba1cc-e27c-4ea3-9f4a-2f7791afce95\") " pod="openstack/dnsmasq-dns-76fcf4b695-sxknq" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.317417 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jb5c4\" (UniqueName: \"kubernetes.io/projected/d09ba1cc-e27c-4ea3-9f4a-2f7791afce95-kube-api-access-jb5c4\") pod \"dnsmasq-dns-76fcf4b695-sxknq\" (UID: \"d09ba1cc-e27c-4ea3-9f4a-2f7791afce95\") " pod="openstack/dnsmasq-dns-76fcf4b695-sxknq" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.317435 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/1fddf481-1472-4685-b105-65c87a3b045a-horizon-secret-key\") pod \"horizon-765d896c9f-95hgf\" (UID: \"1fddf481-1472-4685-b105-65c87a3b045a\") " pod="openstack/horizon-765d896c9f-95hgf" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.317460 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d09ba1cc-e27c-4ea3-9f4a-2f7791afce95-config\") pod \"dnsmasq-dns-76fcf4b695-sxknq\" (UID: \"d09ba1cc-e27c-4ea3-9f4a-2f7791afce95\") " pod="openstack/dnsmasq-dns-76fcf4b695-sxknq" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.317481 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d09ba1cc-e27c-4ea3-9f4a-2f7791afce95-ovsdbserver-sb\") pod \"dnsmasq-dns-76fcf4b695-sxknq\" (UID: \"d09ba1cc-e27c-4ea3-9f4a-2f7791afce95\") " pod="openstack/dnsmasq-dns-76fcf4b695-sxknq" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.317499 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v52th\" (UniqueName: \"kubernetes.io/projected/1fddf481-1472-4685-b105-65c87a3b045a-kube-api-access-v52th\") pod \"horizon-765d896c9f-95hgf\" (UID: \"1fddf481-1472-4685-b105-65c87a3b045a\") " pod="openstack/horizon-765d896c9f-95hgf" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.317522 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d09ba1cc-e27c-4ea3-9f4a-2f7791afce95-dns-svc\") pod \"dnsmasq-dns-76fcf4b695-sxknq\" (UID: \"d09ba1cc-e27c-4ea3-9f4a-2f7791afce95\") " pod="openstack/dnsmasq-dns-76fcf4b695-sxknq" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.319021 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fddf481-1472-4685-b105-65c87a3b045a-logs\") pod 
\"horizon-765d896c9f-95hgf\" (UID: \"1fddf481-1472-4685-b105-65c87a3b045a\") " pod="openstack/horizon-765d896c9f-95hgf" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.319102 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1fddf481-1472-4685-b105-65c87a3b045a-config-data\") pod \"horizon-765d896c9f-95hgf\" (UID: \"1fddf481-1472-4685-b105-65c87a3b045a\") " pod="openstack/horizon-765d896c9f-95hgf" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.319167 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d09ba1cc-e27c-4ea3-9f4a-2f7791afce95-dns-swift-storage-0\") pod \"dnsmasq-dns-76fcf4b695-sxknq\" (UID: \"d09ba1cc-e27c-4ea3-9f4a-2f7791afce95\") " pod="openstack/dnsmasq-dns-76fcf4b695-sxknq" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.319191 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1fddf481-1472-4685-b105-65c87a3b045a-scripts\") pod \"horizon-765d896c9f-95hgf\" (UID: \"1fddf481-1472-4685-b105-65c87a3b045a\") " pod="openstack/horizon-765d896c9f-95hgf" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.319693 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d09ba1cc-e27c-4ea3-9f4a-2f7791afce95-ovsdbserver-nb\") pod \"dnsmasq-dns-76fcf4b695-sxknq\" (UID: \"d09ba1cc-e27c-4ea3-9f4a-2f7791afce95\") " pod="openstack/dnsmasq-dns-76fcf4b695-sxknq" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.320753 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1fddf481-1472-4685-b105-65c87a3b045a-scripts\") pod \"horizon-765d896c9f-95hgf\" (UID: \"1fddf481-1472-4685-b105-65c87a3b045a\") " 
pod="openstack/horizon-765d896c9f-95hgf" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.320970 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fddf481-1472-4685-b105-65c87a3b045a-logs\") pod \"horizon-765d896c9f-95hgf\" (UID: \"1fddf481-1472-4685-b105-65c87a3b045a\") " pod="openstack/horizon-765d896c9f-95hgf" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.321295 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d09ba1cc-e27c-4ea3-9f4a-2f7791afce95-ovsdbserver-sb\") pod \"dnsmasq-dns-76fcf4b695-sxknq\" (UID: \"d09ba1cc-e27c-4ea3-9f4a-2f7791afce95\") " pod="openstack/dnsmasq-dns-76fcf4b695-sxknq" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.321316 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d09ba1cc-e27c-4ea3-9f4a-2f7791afce95-config\") pod \"dnsmasq-dns-76fcf4b695-sxknq\" (UID: \"d09ba1cc-e27c-4ea3-9f4a-2f7791afce95\") " pod="openstack/dnsmasq-dns-76fcf4b695-sxknq" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.321569 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d09ba1cc-e27c-4ea3-9f4a-2f7791afce95-dns-svc\") pod \"dnsmasq-dns-76fcf4b695-sxknq\" (UID: \"d09ba1cc-e27c-4ea3-9f4a-2f7791afce95\") " pod="openstack/dnsmasq-dns-76fcf4b695-sxknq" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.321755 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1fddf481-1472-4685-b105-65c87a3b045a-config-data\") pod \"horizon-765d896c9f-95hgf\" (UID: \"1fddf481-1472-4685-b105-65c87a3b045a\") " pod="openstack/horizon-765d896c9f-95hgf" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.321950 4986 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d09ba1cc-e27c-4ea3-9f4a-2f7791afce95-dns-swift-storage-0\") pod \"dnsmasq-dns-76fcf4b695-sxknq\" (UID: \"d09ba1cc-e27c-4ea3-9f4a-2f7791afce95\") " pod="openstack/dnsmasq-dns-76fcf4b695-sxknq" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.327747 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1fddf481-1472-4685-b105-65c87a3b045a-horizon-secret-key\") pod \"horizon-765d896c9f-95hgf\" (UID: \"1fddf481-1472-4685-b105-65c87a3b045a\") " pod="openstack/horizon-765d896c9f-95hgf" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.338911 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v52th\" (UniqueName: \"kubernetes.io/projected/1fddf481-1472-4685-b105-65c87a3b045a-kube-api-access-v52th\") pod \"horizon-765d896c9f-95hgf\" (UID: \"1fddf481-1472-4685-b105-65c87a3b045a\") " pod="openstack/horizon-765d896c9f-95hgf" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.340946 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jb5c4\" (UniqueName: \"kubernetes.io/projected/d09ba1cc-e27c-4ea3-9f4a-2f7791afce95-kube-api-access-jb5c4\") pod \"dnsmasq-dns-76fcf4b695-sxknq\" (UID: \"d09ba1cc-e27c-4ea3-9f4a-2f7791afce95\") " pod="openstack/dnsmasq-dns-76fcf4b695-sxknq" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.396322 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-765d896c9f-95hgf" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.435507 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76fcf4b695-sxknq" Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.478521 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-f5m5w"] Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.537725 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6568745857-wp84p"] Dec 03 13:18:00 crc kubenswrapper[4986]: W1203 13:18:00.555366 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod115fd2e3_2e0e_4a87_9fe2_15c1f00bcf4c.slice/crio-07de9b1223b69fb1a6a5f349f6e4e4ac15c6510a75571553f811d1dd7462f1cf WatchSource:0}: Error finding container 07de9b1223b69fb1a6a5f349f6e4e4ac15c6510a75571553f811d1dd7462f1cf: Status 404 returned error can't find the container with id 07de9b1223b69fb1a6a5f349f6e4e4ac15c6510a75571553f811d1dd7462f1cf Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.665796 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-qpv6c"] Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.680566 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-4s25c"] Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.824449 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-8v24s"] Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.831620 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-2k7gk"] Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.850899 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.930083 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2k7gk" 
event={"ID":"d560e389-543d-4341-a450-e6cb0f2a3057","Type":"ContainerStarted","Data":"23bbf32e12786750f1d781ccd316c1cdfffc74e4889c112a7e3227663ac4706b"} Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.931154 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8v24s" event={"ID":"9ca26c94-4546-4d5d-8600-30257c724198","Type":"ContainerStarted","Data":"34f83e5f4fba54c270f509beb4fd02081bec03ebc17a39ba64c58ff8a2d48e88"} Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.935240 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qpv6c" event={"ID":"7b8ed42a-89ce-4098-9489-5291e678bf18","Type":"ContainerStarted","Data":"9749150b08d5677247a9ad250d112e717b19b6a22d41a13419398fce583c11ff"} Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.942200 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"143eb6df-5711-4616-baaa-42417113bfed","Type":"ContainerStarted","Data":"cb6685371160ee4e0c3ef5afc3cfc3b242614925197e07206ed0a0bfdf43ace3"} Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.958168 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4s25c" event={"ID":"14924968-6b9f-4a33-b504-dbfd64956b30","Type":"ContainerStarted","Data":"5bc3e9d9c76461fabb0eb2dbcdf2d4f0df70792e9726568391782a93af3d1428"} Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.960799 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6568745857-wp84p" event={"ID":"86290538-808d-4761-a2c6-d9b9509ec364","Type":"ContainerStarted","Data":"062cc3ccc9913526e0e4393adbf8eba9e080ae7e5ad8c81a6966328d3ff3e189"} Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.962682 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vtwjw" event={"ID":"88108c63-cf5e-4373-85d1-80d5803bb7c9","Type":"ContainerStarted","Data":"f3484d8fa78fdc5944d47fb6394b80187caa2d7f0058ea310860c06240172979"} 
Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.964094 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-77585f5f8c-ttjvc" podUID="3a885abf-a7cf-4659-b2a9-3e67e924b67d" containerName="dnsmasq-dns" containerID="cri-o://579623730bd74c24a30f76b117795af0b3e6d1d9cce3fc7b664a88fd24983934" gracePeriod=10 Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.964161 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fff446b9-f5m5w" event={"ID":"115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c","Type":"ContainerStarted","Data":"07de9b1223b69fb1a6a5f349f6e4e4ac15c6510a75571553f811d1dd7462f1cf"} Dec 03 13:18:00 crc kubenswrapper[4986]: I1203 13:18:00.964384 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55fff446b9-f5m5w" podUID="115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c" containerName="init" containerID="cri-o://5c523452464fb3afd42d05191716a35db857e2751c5600df858f7fa28ee91e90" gracePeriod=10 Dec 03 13:18:01 crc kubenswrapper[4986]: W1203 13:18:01.088118 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd09ba1cc_e27c_4ea3_9f4a_2f7791afce95.slice/crio-9fe313453051a4a52ad6520df7954c1f9c273f1440f54886bb07673f3e5478a2 WatchSource:0}: Error finding container 9fe313453051a4a52ad6520df7954c1f9c273f1440f54886bb07673f3e5478a2: Status 404 returned error can't find the container with id 9fe313453051a4a52ad6520df7954c1f9c273f1440f54886bb07673f3e5478a2 Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.106492 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-sxknq"] Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.113383 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-765d896c9f-95hgf"] Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.374462 4986 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-ttjvc" Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.452922 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-f5m5w" Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.457882 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzqf5\" (UniqueName: \"kubernetes.io/projected/3a885abf-a7cf-4659-b2a9-3e67e924b67d-kube-api-access-mzqf5\") pod \"3a885abf-a7cf-4659-b2a9-3e67e924b67d\" (UID: \"3a885abf-a7cf-4659-b2a9-3e67e924b67d\") " Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.457925 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a885abf-a7cf-4659-b2a9-3e67e924b67d-config\") pod \"3a885abf-a7cf-4659-b2a9-3e67e924b67d\" (UID: \"3a885abf-a7cf-4659-b2a9-3e67e924b67d\") " Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.458034 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a885abf-a7cf-4659-b2a9-3e67e924b67d-ovsdbserver-nb\") pod \"3a885abf-a7cf-4659-b2a9-3e67e924b67d\" (UID: \"3a885abf-a7cf-4659-b2a9-3e67e924b67d\") " Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.458104 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a885abf-a7cf-4659-b2a9-3e67e924b67d-ovsdbserver-sb\") pod \"3a885abf-a7cf-4659-b2a9-3e67e924b67d\" (UID: \"3a885abf-a7cf-4659-b2a9-3e67e924b67d\") " Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.458190 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3a885abf-a7cf-4659-b2a9-3e67e924b67d-dns-swift-storage-0\") pod 
\"3a885abf-a7cf-4659-b2a9-3e67e924b67d\" (UID: \"3a885abf-a7cf-4659-b2a9-3e67e924b67d\") " Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.458258 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a885abf-a7cf-4659-b2a9-3e67e924b67d-dns-svc\") pod \"3a885abf-a7cf-4659-b2a9-3e67e924b67d\" (UID: \"3a885abf-a7cf-4659-b2a9-3e67e924b67d\") " Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.467006 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a885abf-a7cf-4659-b2a9-3e67e924b67d-kube-api-access-mzqf5" (OuterVolumeSpecName: "kube-api-access-mzqf5") pod "3a885abf-a7cf-4659-b2a9-3e67e924b67d" (UID: "3a885abf-a7cf-4659-b2a9-3e67e924b67d"). InnerVolumeSpecName "kube-api-access-mzqf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.536902 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a885abf-a7cf-4659-b2a9-3e67e924b67d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3a885abf-a7cf-4659-b2a9-3e67e924b67d" (UID: "3a885abf-a7cf-4659-b2a9-3e67e924b67d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.556564 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a885abf-a7cf-4659-b2a9-3e67e924b67d-config" (OuterVolumeSpecName: "config") pod "3a885abf-a7cf-4659-b2a9-3e67e924b67d" (UID: "3a885abf-a7cf-4659-b2a9-3e67e924b67d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.558998 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a885abf-a7cf-4659-b2a9-3e67e924b67d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3a885abf-a7cf-4659-b2a9-3e67e924b67d" (UID: "3a885abf-a7cf-4659-b2a9-3e67e924b67d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.559963 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8b85q\" (UniqueName: \"kubernetes.io/projected/115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c-kube-api-access-8b85q\") pod \"115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c\" (UID: \"115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c\") " Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.560023 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c-dns-swift-storage-0\") pod \"115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c\" (UID: \"115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c\") " Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.560225 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c-ovsdbserver-nb\") pod \"115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c\" (UID: \"115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c\") " Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.560260 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c-dns-svc\") pod \"115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c\" (UID: \"115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c\") " Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.560338 4986 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c-ovsdbserver-sb\") pod \"115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c\" (UID: \"115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c\") " Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.560419 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c-config\") pod \"115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c\" (UID: \"115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c\") " Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.562142 4986 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a885abf-a7cf-4659-b2a9-3e67e924b67d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.562166 4986 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a885abf-a7cf-4659-b2a9-3e67e924b67d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.562179 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzqf5\" (UniqueName: \"kubernetes.io/projected/3a885abf-a7cf-4659-b2a9-3e67e924b67d-kube-api-access-mzqf5\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.562194 4986 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a885abf-a7cf-4659-b2a9-3e67e924b67d-config\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.567540 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c-kube-api-access-8b85q" (OuterVolumeSpecName: "kube-api-access-8b85q") pod 
"115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c" (UID: "115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c"). InnerVolumeSpecName "kube-api-access-8b85q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.589443 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c-config" (OuterVolumeSpecName: "config") pod "115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c" (UID: "115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.609585 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c" (UID: "115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.631883 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c" (UID: "115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.633243 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c" (UID: "115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.638701 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c" (UID: "115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.657427 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a885abf-a7cf-4659-b2a9-3e67e924b67d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3a885abf-a7cf-4659-b2a9-3e67e924b67d" (UID: "3a885abf-a7cf-4659-b2a9-3e67e924b67d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.666357 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8b85q\" (UniqueName: \"kubernetes.io/projected/115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c-kube-api-access-8b85q\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.666394 4986 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.666404 4986 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a885abf-a7cf-4659-b2a9-3e67e924b67d-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.666413 4986 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c-ovsdbserver-nb\") on node \"crc\" DevicePath 
\"\"" Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.666424 4986 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.666431 4986 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.666439 4986 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c-config\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.767407 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6568745857-wp84p"] Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.784956 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a885abf-a7cf-4659-b2a9-3e67e924b67d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3a885abf-a7cf-4659-b2a9-3e67e924b67d" (UID: "3a885abf-a7cf-4659-b2a9-3e67e924b67d"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.796653 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-76c69b8dbf-kzlpf"] Dec 03 13:18:01 crc kubenswrapper[4986]: E1203 13:18:01.797086 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a885abf-a7cf-4659-b2a9-3e67e924b67d" containerName="dnsmasq-dns" Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.797102 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a885abf-a7cf-4659-b2a9-3e67e924b67d" containerName="dnsmasq-dns" Dec 03 13:18:01 crc kubenswrapper[4986]: E1203 13:18:01.797127 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c" containerName="init" Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.797135 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c" containerName="init" Dec 03 13:18:01 crc kubenswrapper[4986]: E1203 13:18:01.797150 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a885abf-a7cf-4659-b2a9-3e67e924b67d" containerName="init" Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.797159 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a885abf-a7cf-4659-b2a9-3e67e924b67d" containerName="init" Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.797389 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c" containerName="init" Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.797426 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a885abf-a7cf-4659-b2a9-3e67e924b67d" containerName="dnsmasq-dns" Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.801326 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-76c69b8dbf-kzlpf" Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.821705 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.843210 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-76c69b8dbf-kzlpf"] Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.870251 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dc9fa065-2e88-4181-ab5a-be64336bed7d-horizon-secret-key\") pod \"horizon-76c69b8dbf-kzlpf\" (UID: \"dc9fa065-2e88-4181-ab5a-be64336bed7d\") " pod="openstack/horizon-76c69b8dbf-kzlpf" Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.870391 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dc9fa065-2e88-4181-ab5a-be64336bed7d-config-data\") pod \"horizon-76c69b8dbf-kzlpf\" (UID: \"dc9fa065-2e88-4181-ab5a-be64336bed7d\") " pod="openstack/horizon-76c69b8dbf-kzlpf" Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.870437 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dc9fa065-2e88-4181-ab5a-be64336bed7d-scripts\") pod \"horizon-76c69b8dbf-kzlpf\" (UID: \"dc9fa065-2e88-4181-ab5a-be64336bed7d\") " pod="openstack/horizon-76c69b8dbf-kzlpf" Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.870543 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc9fa065-2e88-4181-ab5a-be64336bed7d-logs\") pod \"horizon-76c69b8dbf-kzlpf\" (UID: \"dc9fa065-2e88-4181-ab5a-be64336bed7d\") " pod="openstack/horizon-76c69b8dbf-kzlpf" Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 
13:18:01.870567 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n7mh\" (UniqueName: \"kubernetes.io/projected/dc9fa065-2e88-4181-ab5a-be64336bed7d-kube-api-access-2n7mh\") pod \"horizon-76c69b8dbf-kzlpf\" (UID: \"dc9fa065-2e88-4181-ab5a-be64336bed7d\") " pod="openstack/horizon-76c69b8dbf-kzlpf" Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.870607 4986 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3a885abf-a7cf-4659-b2a9-3e67e924b67d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.973912 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2n7mh\" (UniqueName: \"kubernetes.io/projected/dc9fa065-2e88-4181-ab5a-be64336bed7d-kube-api-access-2n7mh\") pod \"horizon-76c69b8dbf-kzlpf\" (UID: \"dc9fa065-2e88-4181-ab5a-be64336bed7d\") " pod="openstack/horizon-76c69b8dbf-kzlpf" Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.973995 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dc9fa065-2e88-4181-ab5a-be64336bed7d-horizon-secret-key\") pod \"horizon-76c69b8dbf-kzlpf\" (UID: \"dc9fa065-2e88-4181-ab5a-be64336bed7d\") " pod="openstack/horizon-76c69b8dbf-kzlpf" Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.974041 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dc9fa065-2e88-4181-ab5a-be64336bed7d-config-data\") pod \"horizon-76c69b8dbf-kzlpf\" (UID: \"dc9fa065-2e88-4181-ab5a-be64336bed7d\") " pod="openstack/horizon-76c69b8dbf-kzlpf" Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.974089 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/dc9fa065-2e88-4181-ab5a-be64336bed7d-scripts\") pod \"horizon-76c69b8dbf-kzlpf\" (UID: \"dc9fa065-2e88-4181-ab5a-be64336bed7d\") " pod="openstack/horizon-76c69b8dbf-kzlpf" Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.974150 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc9fa065-2e88-4181-ab5a-be64336bed7d-logs\") pod \"horizon-76c69b8dbf-kzlpf\" (UID: \"dc9fa065-2e88-4181-ab5a-be64336bed7d\") " pod="openstack/horizon-76c69b8dbf-kzlpf" Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.974622 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc9fa065-2e88-4181-ab5a-be64336bed7d-logs\") pod \"horizon-76c69b8dbf-kzlpf\" (UID: \"dc9fa065-2e88-4181-ab5a-be64336bed7d\") " pod="openstack/horizon-76c69b8dbf-kzlpf" Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.975227 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dc9fa065-2e88-4181-ab5a-be64336bed7d-scripts\") pod \"horizon-76c69b8dbf-kzlpf\" (UID: \"dc9fa065-2e88-4181-ab5a-be64336bed7d\") " pod="openstack/horizon-76c69b8dbf-kzlpf" Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.975802 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dc9fa065-2e88-4181-ab5a-be64336bed7d-config-data\") pod \"horizon-76c69b8dbf-kzlpf\" (UID: \"dc9fa065-2e88-4181-ab5a-be64336bed7d\") " pod="openstack/horizon-76c69b8dbf-kzlpf" Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.979506 4986 generic.go:334] "Generic (PLEG): container finished" podID="d09ba1cc-e27c-4ea3-9f4a-2f7791afce95" containerID="21c9c3e6ec0cad740667c94517148149003615b9acfbe08287bea8a0bb9806fd" exitCode=0 Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.979558 4986 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-sxknq" event={"ID":"d09ba1cc-e27c-4ea3-9f4a-2f7791afce95","Type":"ContainerDied","Data":"21c9c3e6ec0cad740667c94517148149003615b9acfbe08287bea8a0bb9806fd"} Dec 03 13:18:01 crc kubenswrapper[4986]: I1203 13:18:01.979582 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-sxknq" event={"ID":"d09ba1cc-e27c-4ea3-9f4a-2f7791afce95","Type":"ContainerStarted","Data":"9fe313453051a4a52ad6520df7954c1f9c273f1440f54886bb07673f3e5478a2"} Dec 03 13:18:02 crc kubenswrapper[4986]: I1203 13:18:01.985065 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dc9fa065-2e88-4181-ab5a-be64336bed7d-horizon-secret-key\") pod \"horizon-76c69b8dbf-kzlpf\" (UID: \"dc9fa065-2e88-4181-ab5a-be64336bed7d\") " pod="openstack/horizon-76c69b8dbf-kzlpf" Dec 03 13:18:02 crc kubenswrapper[4986]: I1203 13:18:01.986975 4986 generic.go:334] "Generic (PLEG): container finished" podID="3a885abf-a7cf-4659-b2a9-3e67e924b67d" containerID="579623730bd74c24a30f76b117795af0b3e6d1d9cce3fc7b664a88fd24983934" exitCode=0 Dec 03 13:18:02 crc kubenswrapper[4986]: I1203 13:18:01.987746 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-ttjvc" Dec 03 13:18:02 crc kubenswrapper[4986]: I1203 13:18:01.987801 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-ttjvc" event={"ID":"3a885abf-a7cf-4659-b2a9-3e67e924b67d","Type":"ContainerDied","Data":"579623730bd74c24a30f76b117795af0b3e6d1d9cce3fc7b664a88fd24983934"} Dec 03 13:18:02 crc kubenswrapper[4986]: I1203 13:18:01.987837 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-ttjvc" event={"ID":"3a885abf-a7cf-4659-b2a9-3e67e924b67d","Type":"ContainerDied","Data":"f87b5fbb3ce592211d0c11e1b02a46e486eeb207287cb939aa9eda3de08a8ab2"} Dec 03 13:18:02 crc kubenswrapper[4986]: I1203 13:18:01.987858 4986 scope.go:117] "RemoveContainer" containerID="579623730bd74c24a30f76b117795af0b3e6d1d9cce3fc7b664a88fd24983934" Dec 03 13:18:02 crc kubenswrapper[4986]: I1203 13:18:01.990471 4986 generic.go:334] "Generic (PLEG): container finished" podID="115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c" containerID="5c523452464fb3afd42d05191716a35db857e2751c5600df858f7fa28ee91e90" exitCode=0 Dec 03 13:18:02 crc kubenswrapper[4986]: I1203 13:18:01.990528 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fff446b9-f5m5w" event={"ID":"115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c","Type":"ContainerDied","Data":"07de9b1223b69fb1a6a5f349f6e4e4ac15c6510a75571553f811d1dd7462f1cf"} Dec 03 13:18:02 crc kubenswrapper[4986]: I1203 13:18:01.990562 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fff446b9-f5m5w" event={"ID":"115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c","Type":"ContainerDied","Data":"5c523452464fb3afd42d05191716a35db857e2751c5600df858f7fa28ee91e90"} Dec 03 13:18:02 crc kubenswrapper[4986]: I1203 13:18:01.990629 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-f5m5w" Dec 03 13:18:02 crc kubenswrapper[4986]: I1203 13:18:02.005224 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4s25c" event={"ID":"14924968-6b9f-4a33-b504-dbfd64956b30","Type":"ContainerStarted","Data":"b86fc780bc92a3f4d38f408567bda5883068934dad3ebc7618aec704e587797b"} Dec 03 13:18:02 crc kubenswrapper[4986]: I1203 13:18:02.011703 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-765d896c9f-95hgf" event={"ID":"1fddf481-1472-4685-b105-65c87a3b045a","Type":"ContainerStarted","Data":"c26a3b1aa1c6f556fac33f83d625ed3c82227337b1ca7583d6fc1b3a1cb2e038"} Dec 03 13:18:02 crc kubenswrapper[4986]: I1203 13:18:02.015147 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2n7mh\" (UniqueName: \"kubernetes.io/projected/dc9fa065-2e88-4181-ab5a-be64336bed7d-kube-api-access-2n7mh\") pod \"horizon-76c69b8dbf-kzlpf\" (UID: \"dc9fa065-2e88-4181-ab5a-be64336bed7d\") " pod="openstack/horizon-76c69b8dbf-kzlpf" Dec 03 13:18:02 crc kubenswrapper[4986]: I1203 13:18:02.031163 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-4s25c" podStartSLOduration=3.031145339 podStartE2EDuration="3.031145339s" podCreationTimestamp="2025-12-03 13:17:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:18:02.023273387 +0000 UTC m=+1341.489704578" watchObservedRunningTime="2025-12-03 13:18:02.031145339 +0000 UTC m=+1341.497576530" Dec 03 13:18:02 crc kubenswrapper[4986]: I1203 13:18:02.035692 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vtwjw" event={"ID":"88108c63-cf5e-4373-85d1-80d5803bb7c9","Type":"ContainerStarted","Data":"a7998adde4658342c5c5e5589478470b48450fa38842fde0977d1e5d2b60bb60"} Dec 03 13:18:02 crc kubenswrapper[4986]: I1203 
13:18:02.060777 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-vtwjw" podStartSLOduration=3.060762778 podStartE2EDuration="3.060762778s" podCreationTimestamp="2025-12-03 13:17:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:18:02.058052855 +0000 UTC m=+1341.524484056" watchObservedRunningTime="2025-12-03 13:18:02.060762778 +0000 UTC m=+1341.527193969" Dec 03 13:18:02 crc kubenswrapper[4986]: I1203 13:18:02.080153 4986 scope.go:117] "RemoveContainer" containerID="7f5d5779fab6946efccc683d37071c8c366acfed4789b8871425a84ed54bf747" Dec 03 13:18:02 crc kubenswrapper[4986]: I1203 13:18:02.129355 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-76c69b8dbf-kzlpf" Dec 03 13:18:02 crc kubenswrapper[4986]: I1203 13:18:02.134503 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-ttjvc"] Dec 03 13:18:02 crc kubenswrapper[4986]: I1203 13:18:02.144320 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-ttjvc"] Dec 03 13:18:02 crc kubenswrapper[4986]: I1203 13:18:02.167180 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-f5m5w"] Dec 03 13:18:02 crc kubenswrapper[4986]: I1203 13:18:02.176548 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-f5m5w"] Dec 03 13:18:02 crc kubenswrapper[4986]: I1203 13:18:02.311543 4986 scope.go:117] "RemoveContainer" containerID="579623730bd74c24a30f76b117795af0b3e6d1d9cce3fc7b664a88fd24983934" Dec 03 13:18:02 crc kubenswrapper[4986]: E1203 13:18:02.312511 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"579623730bd74c24a30f76b117795af0b3e6d1d9cce3fc7b664a88fd24983934\": container with ID starting 
with 579623730bd74c24a30f76b117795af0b3e6d1d9cce3fc7b664a88fd24983934 not found: ID does not exist" containerID="579623730bd74c24a30f76b117795af0b3e6d1d9cce3fc7b664a88fd24983934" Dec 03 13:18:02 crc kubenswrapper[4986]: I1203 13:18:02.312575 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"579623730bd74c24a30f76b117795af0b3e6d1d9cce3fc7b664a88fd24983934"} err="failed to get container status \"579623730bd74c24a30f76b117795af0b3e6d1d9cce3fc7b664a88fd24983934\": rpc error: code = NotFound desc = could not find container \"579623730bd74c24a30f76b117795af0b3e6d1d9cce3fc7b664a88fd24983934\": container with ID starting with 579623730bd74c24a30f76b117795af0b3e6d1d9cce3fc7b664a88fd24983934 not found: ID does not exist" Dec 03 13:18:02 crc kubenswrapper[4986]: I1203 13:18:02.312596 4986 scope.go:117] "RemoveContainer" containerID="7f5d5779fab6946efccc683d37071c8c366acfed4789b8871425a84ed54bf747" Dec 03 13:18:02 crc kubenswrapper[4986]: E1203 13:18:02.313052 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f5d5779fab6946efccc683d37071c8c366acfed4789b8871425a84ed54bf747\": container with ID starting with 7f5d5779fab6946efccc683d37071c8c366acfed4789b8871425a84ed54bf747 not found: ID does not exist" containerID="7f5d5779fab6946efccc683d37071c8c366acfed4789b8871425a84ed54bf747" Dec 03 13:18:02 crc kubenswrapper[4986]: I1203 13:18:02.313113 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f5d5779fab6946efccc683d37071c8c366acfed4789b8871425a84ed54bf747"} err="failed to get container status \"7f5d5779fab6946efccc683d37071c8c366acfed4789b8871425a84ed54bf747\": rpc error: code = NotFound desc = could not find container \"7f5d5779fab6946efccc683d37071c8c366acfed4789b8871425a84ed54bf747\": container with ID starting with 7f5d5779fab6946efccc683d37071c8c366acfed4789b8871425a84ed54bf747 not found: ID does 
not exist" Dec 03 13:18:02 crc kubenswrapper[4986]: I1203 13:18:02.313133 4986 scope.go:117] "RemoveContainer" containerID="5c523452464fb3afd42d05191716a35db857e2751c5600df858f7fa28ee91e90" Dec 03 13:18:02 crc kubenswrapper[4986]: I1203 13:18:02.719340 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-76c69b8dbf-kzlpf"] Dec 03 13:18:02 crc kubenswrapper[4986]: I1203 13:18:02.766862 4986 scope.go:117] "RemoveContainer" containerID="5c523452464fb3afd42d05191716a35db857e2751c5600df858f7fa28ee91e90" Dec 03 13:18:02 crc kubenswrapper[4986]: E1203 13:18:02.767245 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c523452464fb3afd42d05191716a35db857e2751c5600df858f7fa28ee91e90\": container with ID starting with 5c523452464fb3afd42d05191716a35db857e2751c5600df858f7fa28ee91e90 not found: ID does not exist" containerID="5c523452464fb3afd42d05191716a35db857e2751c5600df858f7fa28ee91e90" Dec 03 13:18:02 crc kubenswrapper[4986]: I1203 13:18:02.767305 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c523452464fb3afd42d05191716a35db857e2751c5600df858f7fa28ee91e90"} err="failed to get container status \"5c523452464fb3afd42d05191716a35db857e2751c5600df858f7fa28ee91e90\": rpc error: code = NotFound desc = could not find container \"5c523452464fb3afd42d05191716a35db857e2751c5600df858f7fa28ee91e90\": container with ID starting with 5c523452464fb3afd42d05191716a35db857e2751c5600df858f7fa28ee91e90 not found: ID does not exist" Dec 03 13:18:02 crc kubenswrapper[4986]: W1203 13:18:02.772459 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc9fa065_2e88_4181_ab5a_be64336bed7d.slice/crio-03befa4d5d3220caac2aee464f42be47b89fefdef2249da797bdd68538229b9f WatchSource:0}: Error finding container 
03befa4d5d3220caac2aee464f42be47b89fefdef2249da797bdd68538229b9f: Status 404 returned error can't find the container with id 03befa4d5d3220caac2aee464f42be47b89fefdef2249da797bdd68538229b9f Dec 03 13:18:02 crc kubenswrapper[4986]: I1203 13:18:02.954655 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c" path="/var/lib/kubelet/pods/115fd2e3-2e0e-4a87-9fe2-15c1f00bcf4c/volumes" Dec 03 13:18:02 crc kubenswrapper[4986]: I1203 13:18:02.955363 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a885abf-a7cf-4659-b2a9-3e67e924b67d" path="/var/lib/kubelet/pods/3a885abf-a7cf-4659-b2a9-3e67e924b67d/volumes" Dec 03 13:18:03 crc kubenswrapper[4986]: I1203 13:18:03.050642 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76c69b8dbf-kzlpf" event={"ID":"dc9fa065-2e88-4181-ab5a-be64336bed7d","Type":"ContainerStarted","Data":"03befa4d5d3220caac2aee464f42be47b89fefdef2249da797bdd68538229b9f"} Dec 03 13:18:03 crc kubenswrapper[4986]: I1203 13:18:03.054122 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-sxknq" event={"ID":"d09ba1cc-e27c-4ea3-9f4a-2f7791afce95","Type":"ContainerStarted","Data":"dd7880f3d2c0a7ebdd3f37e30bf52b79dfaa6ea479dbf58d3aa1b70b5d2a8121"} Dec 03 13:18:03 crc kubenswrapper[4986]: I1203 13:18:03.054461 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-76fcf4b695-sxknq" Dec 03 13:18:03 crc kubenswrapper[4986]: I1203 13:18:03.490839 4986 patch_prober.go:28] interesting pod/machine-config-daemon-xggpj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 13:18:03 crc kubenswrapper[4986]: I1203 13:18:03.490884 4986 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 13:18:04 crc kubenswrapper[4986]: I1203 13:18:04.968012 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-76fcf4b695-sxknq" podStartSLOduration=5.967993757 podStartE2EDuration="5.967993757s" podCreationTimestamp="2025-12-03 13:17:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:18:03.077343215 +0000 UTC m=+1342.543774416" watchObservedRunningTime="2025-12-03 13:18:04.967993757 +0000 UTC m=+1344.434424948" Dec 03 13:18:05 crc kubenswrapper[4986]: I1203 13:18:05.080706 4986 generic.go:334] "Generic (PLEG): container finished" podID="88108c63-cf5e-4373-85d1-80d5803bb7c9" containerID="a7998adde4658342c5c5e5589478470b48450fa38842fde0977d1e5d2b60bb60" exitCode=0 Dec 03 13:18:05 crc kubenswrapper[4986]: I1203 13:18:05.080768 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vtwjw" event={"ID":"88108c63-cf5e-4373-85d1-80d5803bb7c9","Type":"ContainerDied","Data":"a7998adde4658342c5c5e5589478470b48450fa38842fde0977d1e5d2b60bb60"} Dec 03 13:18:08 crc kubenswrapper[4986]: I1203 13:18:08.764652 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-765d896c9f-95hgf"] Dec 03 13:18:08 crc kubenswrapper[4986]: I1203 13:18:08.789513 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-cc774c568-phcpp"] Dec 03 13:18:08 crc kubenswrapper[4986]: I1203 13:18:08.790814 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-cc774c568-phcpp" Dec 03 13:18:08 crc kubenswrapper[4986]: I1203 13:18:08.797428 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Dec 03 13:18:08 crc kubenswrapper[4986]: I1203 13:18:08.807212 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-cc774c568-phcpp"] Dec 03 13:18:08 crc kubenswrapper[4986]: I1203 13:18:08.859346 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-76c69b8dbf-kzlpf"] Dec 03 13:18:08 crc kubenswrapper[4986]: I1203 13:18:08.881628 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7797f969d4-6c2wn"] Dec 03 13:18:08 crc kubenswrapper[4986]: I1203 13:18:08.883386 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7797f969d4-6c2wn" Dec 03 13:18:08 crc kubenswrapper[4986]: I1203 13:18:08.896997 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7797f969d4-6c2wn"] Dec 03 13:18:08 crc kubenswrapper[4986]: I1203 13:18:08.939126 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cee5d9b6-d11e-4bad-b013-196a4f401404-scripts\") pod \"horizon-cc774c568-phcpp\" (UID: \"cee5d9b6-d11e-4bad-b013-196a4f401404\") " pod="openstack/horizon-cc774c568-phcpp" Dec 03 13:18:08 crc kubenswrapper[4986]: I1203 13:18:08.939214 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cee5d9b6-d11e-4bad-b013-196a4f401404-logs\") pod \"horizon-cc774c568-phcpp\" (UID: \"cee5d9b6-d11e-4bad-b013-196a4f401404\") " pod="openstack/horizon-cc774c568-phcpp" Dec 03 13:18:08 crc kubenswrapper[4986]: I1203 13:18:08.939252 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/cee5d9b6-d11e-4bad-b013-196a4f401404-combined-ca-bundle\") pod \"horizon-cc774c568-phcpp\" (UID: \"cee5d9b6-d11e-4bad-b013-196a4f401404\") " pod="openstack/horizon-cc774c568-phcpp" Dec 03 13:18:08 crc kubenswrapper[4986]: I1203 13:18:08.939505 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67nzv\" (UniqueName: \"kubernetes.io/projected/cee5d9b6-d11e-4bad-b013-196a4f401404-kube-api-access-67nzv\") pod \"horizon-cc774c568-phcpp\" (UID: \"cee5d9b6-d11e-4bad-b013-196a4f401404\") " pod="openstack/horizon-cc774c568-phcpp" Dec 03 13:18:08 crc kubenswrapper[4986]: I1203 13:18:08.939596 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/cee5d9b6-d11e-4bad-b013-196a4f401404-horizon-tls-certs\") pod \"horizon-cc774c568-phcpp\" (UID: \"cee5d9b6-d11e-4bad-b013-196a4f401404\") " pod="openstack/horizon-cc774c568-phcpp" Dec 03 13:18:08 crc kubenswrapper[4986]: I1203 13:18:08.939628 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cee5d9b6-d11e-4bad-b013-196a4f401404-config-data\") pod \"horizon-cc774c568-phcpp\" (UID: \"cee5d9b6-d11e-4bad-b013-196a4f401404\") " pod="openstack/horizon-cc774c568-phcpp" Dec 03 13:18:08 crc kubenswrapper[4986]: I1203 13:18:08.939817 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cee5d9b6-d11e-4bad-b013-196a4f401404-horizon-secret-key\") pod \"horizon-cc774c568-phcpp\" (UID: \"cee5d9b6-d11e-4bad-b013-196a4f401404\") " pod="openstack/horizon-cc774c568-phcpp" Dec 03 13:18:09 crc kubenswrapper[4986]: I1203 13:18:09.041357 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d67e23b-bda4-42d5-81b6-be58c643861d-combined-ca-bundle\") pod \"horizon-7797f969d4-6c2wn\" (UID: \"4d67e23b-bda4-42d5-81b6-be58c643861d\") " pod="openstack/horizon-7797f969d4-6c2wn" Dec 03 13:18:09 crc kubenswrapper[4986]: I1203 13:18:09.041471 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cee5d9b6-d11e-4bad-b013-196a4f401404-horizon-secret-key\") pod \"horizon-cc774c568-phcpp\" (UID: \"cee5d9b6-d11e-4bad-b013-196a4f401404\") " pod="openstack/horizon-cc774c568-phcpp" Dec 03 13:18:09 crc kubenswrapper[4986]: I1203 13:18:09.041493 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4d67e23b-bda4-42d5-81b6-be58c643861d-scripts\") pod \"horizon-7797f969d4-6c2wn\" (UID: \"4d67e23b-bda4-42d5-81b6-be58c643861d\") " pod="openstack/horizon-7797f969d4-6c2wn" Dec 03 13:18:09 crc kubenswrapper[4986]: I1203 13:18:09.041510 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvzhx\" (UniqueName: \"kubernetes.io/projected/4d67e23b-bda4-42d5-81b6-be58c643861d-kube-api-access-zvzhx\") pod \"horizon-7797f969d4-6c2wn\" (UID: \"4d67e23b-bda4-42d5-81b6-be58c643861d\") " pod="openstack/horizon-7797f969d4-6c2wn" Dec 03 13:18:09 crc kubenswrapper[4986]: I1203 13:18:09.041552 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4d67e23b-bda4-42d5-81b6-be58c643861d-horizon-secret-key\") pod \"horizon-7797f969d4-6c2wn\" (UID: \"4d67e23b-bda4-42d5-81b6-be58c643861d\") " pod="openstack/horizon-7797f969d4-6c2wn" Dec 03 13:18:09 crc kubenswrapper[4986]: I1203 13:18:09.041616 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d67e23b-bda4-42d5-81b6-be58c643861d-logs\") pod \"horizon-7797f969d4-6c2wn\" (UID: \"4d67e23b-bda4-42d5-81b6-be58c643861d\") " pod="openstack/horizon-7797f969d4-6c2wn" Dec 03 13:18:09 crc kubenswrapper[4986]: I1203 13:18:09.041639 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cee5d9b6-d11e-4bad-b013-196a4f401404-scripts\") pod \"horizon-cc774c568-phcpp\" (UID: \"cee5d9b6-d11e-4bad-b013-196a4f401404\") " pod="openstack/horizon-cc774c568-phcpp" Dec 03 13:18:09 crc kubenswrapper[4986]: I1203 13:18:09.041671 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cee5d9b6-d11e-4bad-b013-196a4f401404-logs\") pod \"horizon-cc774c568-phcpp\" (UID: \"cee5d9b6-d11e-4bad-b013-196a4f401404\") " pod="openstack/horizon-cc774c568-phcpp" Dec 03 13:18:09 crc kubenswrapper[4986]: I1203 13:18:09.041695 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cee5d9b6-d11e-4bad-b013-196a4f401404-combined-ca-bundle\") pod \"horizon-cc774c568-phcpp\" (UID: \"cee5d9b6-d11e-4bad-b013-196a4f401404\") " pod="openstack/horizon-cc774c568-phcpp" Dec 03 13:18:09 crc kubenswrapper[4986]: I1203 13:18:09.041712 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d67e23b-bda4-42d5-81b6-be58c643861d-horizon-tls-certs\") pod \"horizon-7797f969d4-6c2wn\" (UID: \"4d67e23b-bda4-42d5-81b6-be58c643861d\") " pod="openstack/horizon-7797f969d4-6c2wn" Dec 03 13:18:09 crc kubenswrapper[4986]: I1203 13:18:09.041745 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67nzv\" (UniqueName: \"kubernetes.io/projected/cee5d9b6-d11e-4bad-b013-196a4f401404-kube-api-access-67nzv\") 
pod \"horizon-cc774c568-phcpp\" (UID: \"cee5d9b6-d11e-4bad-b013-196a4f401404\") " pod="openstack/horizon-cc774c568-phcpp" Dec 03 13:18:09 crc kubenswrapper[4986]: I1203 13:18:09.041775 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/cee5d9b6-d11e-4bad-b013-196a4f401404-horizon-tls-certs\") pod \"horizon-cc774c568-phcpp\" (UID: \"cee5d9b6-d11e-4bad-b013-196a4f401404\") " pod="openstack/horizon-cc774c568-phcpp" Dec 03 13:18:09 crc kubenswrapper[4986]: I1203 13:18:09.041791 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4d67e23b-bda4-42d5-81b6-be58c643861d-config-data\") pod \"horizon-7797f969d4-6c2wn\" (UID: \"4d67e23b-bda4-42d5-81b6-be58c643861d\") " pod="openstack/horizon-7797f969d4-6c2wn" Dec 03 13:18:09 crc kubenswrapper[4986]: I1203 13:18:09.041808 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cee5d9b6-d11e-4bad-b013-196a4f401404-config-data\") pod \"horizon-cc774c568-phcpp\" (UID: \"cee5d9b6-d11e-4bad-b013-196a4f401404\") " pod="openstack/horizon-cc774c568-phcpp" Dec 03 13:18:09 crc kubenswrapper[4986]: I1203 13:18:09.042674 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cee5d9b6-d11e-4bad-b013-196a4f401404-scripts\") pod \"horizon-cc774c568-phcpp\" (UID: \"cee5d9b6-d11e-4bad-b013-196a4f401404\") " pod="openstack/horizon-cc774c568-phcpp" Dec 03 13:18:09 crc kubenswrapper[4986]: I1203 13:18:09.043424 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cee5d9b6-d11e-4bad-b013-196a4f401404-config-data\") pod \"horizon-cc774c568-phcpp\" (UID: \"cee5d9b6-d11e-4bad-b013-196a4f401404\") " pod="openstack/horizon-cc774c568-phcpp" Dec 03 
13:18:09 crc kubenswrapper[4986]: I1203 13:18:09.044928 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cee5d9b6-d11e-4bad-b013-196a4f401404-logs\") pod \"horizon-cc774c568-phcpp\" (UID: \"cee5d9b6-d11e-4bad-b013-196a4f401404\") " pod="openstack/horizon-cc774c568-phcpp" Dec 03 13:18:09 crc kubenswrapper[4986]: I1203 13:18:09.048703 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cee5d9b6-d11e-4bad-b013-196a4f401404-horizon-secret-key\") pod \"horizon-cc774c568-phcpp\" (UID: \"cee5d9b6-d11e-4bad-b013-196a4f401404\") " pod="openstack/horizon-cc774c568-phcpp" Dec 03 13:18:09 crc kubenswrapper[4986]: I1203 13:18:09.049654 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cee5d9b6-d11e-4bad-b013-196a4f401404-combined-ca-bundle\") pod \"horizon-cc774c568-phcpp\" (UID: \"cee5d9b6-d11e-4bad-b013-196a4f401404\") " pod="openstack/horizon-cc774c568-phcpp" Dec 03 13:18:09 crc kubenswrapper[4986]: I1203 13:18:09.058791 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/cee5d9b6-d11e-4bad-b013-196a4f401404-horizon-tls-certs\") pod \"horizon-cc774c568-phcpp\" (UID: \"cee5d9b6-d11e-4bad-b013-196a4f401404\") " pod="openstack/horizon-cc774c568-phcpp" Dec 03 13:18:09 crc kubenswrapper[4986]: I1203 13:18:09.061795 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67nzv\" (UniqueName: \"kubernetes.io/projected/cee5d9b6-d11e-4bad-b013-196a4f401404-kube-api-access-67nzv\") pod \"horizon-cc774c568-phcpp\" (UID: \"cee5d9b6-d11e-4bad-b013-196a4f401404\") " pod="openstack/horizon-cc774c568-phcpp" Dec 03 13:18:09 crc kubenswrapper[4986]: I1203 13:18:09.121295 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-cc774c568-phcpp" Dec 03 13:18:09 crc kubenswrapper[4986]: I1203 13:18:09.142976 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4d67e23b-bda4-42d5-81b6-be58c643861d-config-data\") pod \"horizon-7797f969d4-6c2wn\" (UID: \"4d67e23b-bda4-42d5-81b6-be58c643861d\") " pod="openstack/horizon-7797f969d4-6c2wn" Dec 03 13:18:09 crc kubenswrapper[4986]: I1203 13:18:09.143028 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d67e23b-bda4-42d5-81b6-be58c643861d-combined-ca-bundle\") pod \"horizon-7797f969d4-6c2wn\" (UID: \"4d67e23b-bda4-42d5-81b6-be58c643861d\") " pod="openstack/horizon-7797f969d4-6c2wn" Dec 03 13:18:09 crc kubenswrapper[4986]: I1203 13:18:09.143087 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4d67e23b-bda4-42d5-81b6-be58c643861d-scripts\") pod \"horizon-7797f969d4-6c2wn\" (UID: \"4d67e23b-bda4-42d5-81b6-be58c643861d\") " pod="openstack/horizon-7797f969d4-6c2wn" Dec 03 13:18:09 crc kubenswrapper[4986]: I1203 13:18:09.143108 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvzhx\" (UniqueName: \"kubernetes.io/projected/4d67e23b-bda4-42d5-81b6-be58c643861d-kube-api-access-zvzhx\") pod \"horizon-7797f969d4-6c2wn\" (UID: \"4d67e23b-bda4-42d5-81b6-be58c643861d\") " pod="openstack/horizon-7797f969d4-6c2wn" Dec 03 13:18:09 crc kubenswrapper[4986]: I1203 13:18:09.143164 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4d67e23b-bda4-42d5-81b6-be58c643861d-horizon-secret-key\") pod \"horizon-7797f969d4-6c2wn\" (UID: \"4d67e23b-bda4-42d5-81b6-be58c643861d\") " pod="openstack/horizon-7797f969d4-6c2wn" Dec 03 13:18:09 crc 
kubenswrapper[4986]: I1203 13:18:09.143321 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d67e23b-bda4-42d5-81b6-be58c643861d-logs\") pod \"horizon-7797f969d4-6c2wn\" (UID: \"4d67e23b-bda4-42d5-81b6-be58c643861d\") " pod="openstack/horizon-7797f969d4-6c2wn" Dec 03 13:18:09 crc kubenswrapper[4986]: I1203 13:18:09.143371 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d67e23b-bda4-42d5-81b6-be58c643861d-horizon-tls-certs\") pod \"horizon-7797f969d4-6c2wn\" (UID: \"4d67e23b-bda4-42d5-81b6-be58c643861d\") " pod="openstack/horizon-7797f969d4-6c2wn" Dec 03 13:18:09 crc kubenswrapper[4986]: I1203 13:18:09.179334 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d67e23b-bda4-42d5-81b6-be58c643861d-logs\") pod \"horizon-7797f969d4-6c2wn\" (UID: \"4d67e23b-bda4-42d5-81b6-be58c643861d\") " pod="openstack/horizon-7797f969d4-6c2wn" Dec 03 13:18:09 crc kubenswrapper[4986]: I1203 13:18:09.179468 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4d67e23b-bda4-42d5-81b6-be58c643861d-scripts\") pod \"horizon-7797f969d4-6c2wn\" (UID: \"4d67e23b-bda4-42d5-81b6-be58c643861d\") " pod="openstack/horizon-7797f969d4-6c2wn" Dec 03 13:18:09 crc kubenswrapper[4986]: I1203 13:18:09.182704 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4d67e23b-bda4-42d5-81b6-be58c643861d-horizon-secret-key\") pod \"horizon-7797f969d4-6c2wn\" (UID: \"4d67e23b-bda4-42d5-81b6-be58c643861d\") " pod="openstack/horizon-7797f969d4-6c2wn" Dec 03 13:18:09 crc kubenswrapper[4986]: I1203 13:18:09.183678 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4d67e23b-bda4-42d5-81b6-be58c643861d-horizon-tls-certs\") pod \"horizon-7797f969d4-6c2wn\" (UID: \"4d67e23b-bda4-42d5-81b6-be58c643861d\") " pod="openstack/horizon-7797f969d4-6c2wn" Dec 03 13:18:09 crc kubenswrapper[4986]: I1203 13:18:09.183985 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d67e23b-bda4-42d5-81b6-be58c643861d-combined-ca-bundle\") pod \"horizon-7797f969d4-6c2wn\" (UID: \"4d67e23b-bda4-42d5-81b6-be58c643861d\") " pod="openstack/horizon-7797f969d4-6c2wn" Dec 03 13:18:09 crc kubenswrapper[4986]: I1203 13:18:09.184494 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4d67e23b-bda4-42d5-81b6-be58c643861d-config-data\") pod \"horizon-7797f969d4-6c2wn\" (UID: \"4d67e23b-bda4-42d5-81b6-be58c643861d\") " pod="openstack/horizon-7797f969d4-6c2wn" Dec 03 13:18:09 crc kubenswrapper[4986]: I1203 13:18:09.185444 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvzhx\" (UniqueName: \"kubernetes.io/projected/4d67e23b-bda4-42d5-81b6-be58c643861d-kube-api-access-zvzhx\") pod \"horizon-7797f969d4-6c2wn\" (UID: \"4d67e23b-bda4-42d5-81b6-be58c643861d\") " pod="openstack/horizon-7797f969d4-6c2wn" Dec 03 13:18:09 crc kubenswrapper[4986]: I1203 13:18:09.206420 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7797f969d4-6c2wn" Dec 03 13:18:10 crc kubenswrapper[4986]: I1203 13:18:10.439770 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-76fcf4b695-sxknq" Dec 03 13:18:10 crc kubenswrapper[4986]: I1203 13:18:10.495323 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-gfndf"] Dec 03 13:18:10 crc kubenswrapper[4986]: I1203 13:18:10.495612 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-gfndf" podUID="d9339da5-40ba-490f-8309-389ec66fd0d2" containerName="dnsmasq-dns" containerID="cri-o://e792b52c823c49082b555ed5bd65bd798bd2827b627ae09894b673d79d462ec0" gracePeriod=10 Dec 03 13:18:11 crc kubenswrapper[4986]: I1203 13:18:11.132890 4986 generic.go:334] "Generic (PLEG): container finished" podID="d9339da5-40ba-490f-8309-389ec66fd0d2" containerID="e792b52c823c49082b555ed5bd65bd798bd2827b627ae09894b673d79d462ec0" exitCode=0 Dec 03 13:18:11 crc kubenswrapper[4986]: I1203 13:18:11.133278 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-gfndf" event={"ID":"d9339da5-40ba-490f-8309-389ec66fd0d2","Type":"ContainerDied","Data":"e792b52c823c49082b555ed5bd65bd798bd2827b627ae09894b673d79d462ec0"} Dec 03 13:18:13 crc kubenswrapper[4986]: I1203 13:18:13.830778 4986 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-gfndf" podUID="d9339da5-40ba-490f-8309-389ec66fd0d2" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: connect: connection refused" Dec 03 13:18:15 crc kubenswrapper[4986]: E1203 13:18:15.431189 4986 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Dec 03 13:18:15 crc kubenswrapper[4986]: E1203 13:18:15.431496 4986 
kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n7fh575h689h8h56fhf5hc5h57fh4h67ch58dh55bh589hbfh68h5dfh667h68fh58ch68fh677h578h597h87h76h5cdhc8h559h594h54dhdbh59bq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v52th,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
horizon-765d896c9f-95hgf_openstack(1fddf481-1472-4685-b105-65c87a3b045a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 13:18:15 crc kubenswrapper[4986]: E1203 13:18:15.434360 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-765d896c9f-95hgf" podUID="1fddf481-1472-4685-b105-65c87a3b045a" Dec 03 13:18:15 crc kubenswrapper[4986]: E1203 13:18:15.471370 4986 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Dec 03 13:18:15 crc kubenswrapper[4986]: E1203 13:18:15.471819 4986 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n646h56h5bfh8fh58ch56dhb4h687h698hd8h666h688h5d8hcch5fh645h5bfh89h5ch584h667h589h5bch5fdh685h665h56dhc4h76h658h5dbh5d7q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wfnrd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-6568745857-wp84p_openstack(86290538-808d-4761-a2c6-d9b9509ec364): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 13:18:15 crc kubenswrapper[4986]: E1203 
13:18:15.476681 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-6568745857-wp84p" podUID="86290538-808d-4761-a2c6-d9b9509ec364" Dec 03 13:18:15 crc kubenswrapper[4986]: I1203 13:18:15.518466 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vtwjw" Dec 03 13:18:15 crc kubenswrapper[4986]: I1203 13:18:15.663492 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88108c63-cf5e-4373-85d1-80d5803bb7c9-config-data\") pod \"88108c63-cf5e-4373-85d1-80d5803bb7c9\" (UID: \"88108c63-cf5e-4373-85d1-80d5803bb7c9\") " Dec 03 13:18:15 crc kubenswrapper[4986]: I1203 13:18:15.663576 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88108c63-cf5e-4373-85d1-80d5803bb7c9-scripts\") pod \"88108c63-cf5e-4373-85d1-80d5803bb7c9\" (UID: \"88108c63-cf5e-4373-85d1-80d5803bb7c9\") " Dec 03 13:18:15 crc kubenswrapper[4986]: I1203 13:18:15.663613 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/88108c63-cf5e-4373-85d1-80d5803bb7c9-credential-keys\") pod \"88108c63-cf5e-4373-85d1-80d5803bb7c9\" (UID: \"88108c63-cf5e-4373-85d1-80d5803bb7c9\") " Dec 03 13:18:15 crc kubenswrapper[4986]: I1203 13:18:15.663649 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/88108c63-cf5e-4373-85d1-80d5803bb7c9-fernet-keys\") pod \"88108c63-cf5e-4373-85d1-80d5803bb7c9\" 
(UID: \"88108c63-cf5e-4373-85d1-80d5803bb7c9\") " Dec 03 13:18:15 crc kubenswrapper[4986]: I1203 13:18:15.663743 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88108c63-cf5e-4373-85d1-80d5803bb7c9-combined-ca-bundle\") pod \"88108c63-cf5e-4373-85d1-80d5803bb7c9\" (UID: \"88108c63-cf5e-4373-85d1-80d5803bb7c9\") " Dec 03 13:18:15 crc kubenswrapper[4986]: I1203 13:18:15.663807 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bt5rp\" (UniqueName: \"kubernetes.io/projected/88108c63-cf5e-4373-85d1-80d5803bb7c9-kube-api-access-bt5rp\") pod \"88108c63-cf5e-4373-85d1-80d5803bb7c9\" (UID: \"88108c63-cf5e-4373-85d1-80d5803bb7c9\") " Dec 03 13:18:15 crc kubenswrapper[4986]: I1203 13:18:15.669362 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88108c63-cf5e-4373-85d1-80d5803bb7c9-scripts" (OuterVolumeSpecName: "scripts") pod "88108c63-cf5e-4373-85d1-80d5803bb7c9" (UID: "88108c63-cf5e-4373-85d1-80d5803bb7c9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:18:15 crc kubenswrapper[4986]: I1203 13:18:15.670736 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88108c63-cf5e-4373-85d1-80d5803bb7c9-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "88108c63-cf5e-4373-85d1-80d5803bb7c9" (UID: "88108c63-cf5e-4373-85d1-80d5803bb7c9"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:18:15 crc kubenswrapper[4986]: I1203 13:18:15.683162 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88108c63-cf5e-4373-85d1-80d5803bb7c9-kube-api-access-bt5rp" (OuterVolumeSpecName: "kube-api-access-bt5rp") pod "88108c63-cf5e-4373-85d1-80d5803bb7c9" (UID: "88108c63-cf5e-4373-85d1-80d5803bb7c9"). 
InnerVolumeSpecName "kube-api-access-bt5rp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:18:15 crc kubenswrapper[4986]: I1203 13:18:15.683319 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88108c63-cf5e-4373-85d1-80d5803bb7c9-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "88108c63-cf5e-4373-85d1-80d5803bb7c9" (UID: "88108c63-cf5e-4373-85d1-80d5803bb7c9"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:18:15 crc kubenswrapper[4986]: I1203 13:18:15.697987 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88108c63-cf5e-4373-85d1-80d5803bb7c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "88108c63-cf5e-4373-85d1-80d5803bb7c9" (UID: "88108c63-cf5e-4373-85d1-80d5803bb7c9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:18:15 crc kubenswrapper[4986]: I1203 13:18:15.699456 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88108c63-cf5e-4373-85d1-80d5803bb7c9-config-data" (OuterVolumeSpecName: "config-data") pod "88108c63-cf5e-4373-85d1-80d5803bb7c9" (UID: "88108c63-cf5e-4373-85d1-80d5803bb7c9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:18:15 crc kubenswrapper[4986]: I1203 13:18:15.765418 4986 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88108c63-cf5e-4373-85d1-80d5803bb7c9-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:15 crc kubenswrapper[4986]: I1203 13:18:15.765448 4986 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/88108c63-cf5e-4373-85d1-80d5803bb7c9-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:15 crc kubenswrapper[4986]: I1203 13:18:15.765476 4986 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/88108c63-cf5e-4373-85d1-80d5803bb7c9-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:15 crc kubenswrapper[4986]: I1203 13:18:15.765485 4986 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88108c63-cf5e-4373-85d1-80d5803bb7c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:15 crc kubenswrapper[4986]: I1203 13:18:15.765495 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bt5rp\" (UniqueName: \"kubernetes.io/projected/88108c63-cf5e-4373-85d1-80d5803bb7c9-kube-api-access-bt5rp\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:15 crc kubenswrapper[4986]: I1203 13:18:15.765503 4986 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88108c63-cf5e-4373-85d1-80d5803bb7c9-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:16 crc kubenswrapper[4986]: I1203 13:18:16.178429 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vtwjw" event={"ID":"88108c63-cf5e-4373-85d1-80d5803bb7c9","Type":"ContainerDied","Data":"f3484d8fa78fdc5944d47fb6394b80187caa2d7f0058ea310860c06240172979"} Dec 03 13:18:16 crc kubenswrapper[4986]: I1203 
13:18:16.178494 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3484d8fa78fdc5944d47fb6394b80187caa2d7f0058ea310860c06240172979" Dec 03 13:18:16 crc kubenswrapper[4986]: I1203 13:18:16.178509 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vtwjw" Dec 03 13:18:16 crc kubenswrapper[4986]: I1203 13:18:16.597168 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-vtwjw"] Dec 03 13:18:16 crc kubenswrapper[4986]: I1203 13:18:16.603954 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-vtwjw"] Dec 03 13:18:16 crc kubenswrapper[4986]: I1203 13:18:16.710302 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-wrbjn"] Dec 03 13:18:16 crc kubenswrapper[4986]: E1203 13:18:16.710733 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88108c63-cf5e-4373-85d1-80d5803bb7c9" containerName="keystone-bootstrap" Dec 03 13:18:16 crc kubenswrapper[4986]: I1203 13:18:16.710759 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="88108c63-cf5e-4373-85d1-80d5803bb7c9" containerName="keystone-bootstrap" Dec 03 13:18:16 crc kubenswrapper[4986]: I1203 13:18:16.710971 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="88108c63-cf5e-4373-85d1-80d5803bb7c9" containerName="keystone-bootstrap" Dec 03 13:18:16 crc kubenswrapper[4986]: I1203 13:18:16.711595 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-wrbjn" Dec 03 13:18:16 crc kubenswrapper[4986]: I1203 13:18:16.714577 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 03 13:18:16 crc kubenswrapper[4986]: I1203 13:18:16.714618 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 03 13:18:16 crc kubenswrapper[4986]: I1203 13:18:16.714640 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-mtjcz" Dec 03 13:18:16 crc kubenswrapper[4986]: I1203 13:18:16.714744 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 03 13:18:16 crc kubenswrapper[4986]: I1203 13:18:16.714872 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 03 13:18:16 crc kubenswrapper[4986]: I1203 13:18:16.717905 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wrbjn"] Dec 03 13:18:16 crc kubenswrapper[4986]: I1203 13:18:16.785227 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e71caade-d3ac-45c2-8369-2c2a1d896370-scripts\") pod \"keystone-bootstrap-wrbjn\" (UID: \"e71caade-d3ac-45c2-8369-2c2a1d896370\") " pod="openstack/keystone-bootstrap-wrbjn" Dec 03 13:18:16 crc kubenswrapper[4986]: I1203 13:18:16.785314 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ttz7\" (UniqueName: \"kubernetes.io/projected/e71caade-d3ac-45c2-8369-2c2a1d896370-kube-api-access-5ttz7\") pod \"keystone-bootstrap-wrbjn\" (UID: \"e71caade-d3ac-45c2-8369-2c2a1d896370\") " pod="openstack/keystone-bootstrap-wrbjn" Dec 03 13:18:16 crc kubenswrapper[4986]: I1203 13:18:16.785374 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/e71caade-d3ac-45c2-8369-2c2a1d896370-config-data\") pod \"keystone-bootstrap-wrbjn\" (UID: \"e71caade-d3ac-45c2-8369-2c2a1d896370\") " pod="openstack/keystone-bootstrap-wrbjn" Dec 03 13:18:16 crc kubenswrapper[4986]: I1203 13:18:16.785397 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e71caade-d3ac-45c2-8369-2c2a1d896370-credential-keys\") pod \"keystone-bootstrap-wrbjn\" (UID: \"e71caade-d3ac-45c2-8369-2c2a1d896370\") " pod="openstack/keystone-bootstrap-wrbjn" Dec 03 13:18:16 crc kubenswrapper[4986]: I1203 13:18:16.785420 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e71caade-d3ac-45c2-8369-2c2a1d896370-fernet-keys\") pod \"keystone-bootstrap-wrbjn\" (UID: \"e71caade-d3ac-45c2-8369-2c2a1d896370\") " pod="openstack/keystone-bootstrap-wrbjn" Dec 03 13:18:16 crc kubenswrapper[4986]: I1203 13:18:16.785442 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e71caade-d3ac-45c2-8369-2c2a1d896370-combined-ca-bundle\") pod \"keystone-bootstrap-wrbjn\" (UID: \"e71caade-d3ac-45c2-8369-2c2a1d896370\") " pod="openstack/keystone-bootstrap-wrbjn" Dec 03 13:18:16 crc kubenswrapper[4986]: I1203 13:18:16.887213 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e71caade-d3ac-45c2-8369-2c2a1d896370-config-data\") pod \"keystone-bootstrap-wrbjn\" (UID: \"e71caade-d3ac-45c2-8369-2c2a1d896370\") " pod="openstack/keystone-bootstrap-wrbjn" Dec 03 13:18:16 crc kubenswrapper[4986]: I1203 13:18:16.887263 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/e71caade-d3ac-45c2-8369-2c2a1d896370-credential-keys\") pod \"keystone-bootstrap-wrbjn\" (UID: \"e71caade-d3ac-45c2-8369-2c2a1d896370\") " pod="openstack/keystone-bootstrap-wrbjn" Dec 03 13:18:16 crc kubenswrapper[4986]: I1203 13:18:16.887305 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e71caade-d3ac-45c2-8369-2c2a1d896370-fernet-keys\") pod \"keystone-bootstrap-wrbjn\" (UID: \"e71caade-d3ac-45c2-8369-2c2a1d896370\") " pod="openstack/keystone-bootstrap-wrbjn" Dec 03 13:18:16 crc kubenswrapper[4986]: I1203 13:18:16.887345 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e71caade-d3ac-45c2-8369-2c2a1d896370-combined-ca-bundle\") pod \"keystone-bootstrap-wrbjn\" (UID: \"e71caade-d3ac-45c2-8369-2c2a1d896370\") " pod="openstack/keystone-bootstrap-wrbjn" Dec 03 13:18:16 crc kubenswrapper[4986]: I1203 13:18:16.887445 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e71caade-d3ac-45c2-8369-2c2a1d896370-scripts\") pod \"keystone-bootstrap-wrbjn\" (UID: \"e71caade-d3ac-45c2-8369-2c2a1d896370\") " pod="openstack/keystone-bootstrap-wrbjn" Dec 03 13:18:16 crc kubenswrapper[4986]: I1203 13:18:16.887497 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ttz7\" (UniqueName: \"kubernetes.io/projected/e71caade-d3ac-45c2-8369-2c2a1d896370-kube-api-access-5ttz7\") pod \"keystone-bootstrap-wrbjn\" (UID: \"e71caade-d3ac-45c2-8369-2c2a1d896370\") " pod="openstack/keystone-bootstrap-wrbjn" Dec 03 13:18:16 crc kubenswrapper[4986]: I1203 13:18:16.891658 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e71caade-d3ac-45c2-8369-2c2a1d896370-credential-keys\") pod 
\"keystone-bootstrap-wrbjn\" (UID: \"e71caade-d3ac-45c2-8369-2c2a1d896370\") " pod="openstack/keystone-bootstrap-wrbjn" Dec 03 13:18:16 crc kubenswrapper[4986]: I1203 13:18:16.892262 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e71caade-d3ac-45c2-8369-2c2a1d896370-fernet-keys\") pod \"keystone-bootstrap-wrbjn\" (UID: \"e71caade-d3ac-45c2-8369-2c2a1d896370\") " pod="openstack/keystone-bootstrap-wrbjn" Dec 03 13:18:16 crc kubenswrapper[4986]: I1203 13:18:16.893358 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e71caade-d3ac-45c2-8369-2c2a1d896370-scripts\") pod \"keystone-bootstrap-wrbjn\" (UID: \"e71caade-d3ac-45c2-8369-2c2a1d896370\") " pod="openstack/keystone-bootstrap-wrbjn" Dec 03 13:18:16 crc kubenswrapper[4986]: I1203 13:18:16.903971 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ttz7\" (UniqueName: \"kubernetes.io/projected/e71caade-d3ac-45c2-8369-2c2a1d896370-kube-api-access-5ttz7\") pod \"keystone-bootstrap-wrbjn\" (UID: \"e71caade-d3ac-45c2-8369-2c2a1d896370\") " pod="openstack/keystone-bootstrap-wrbjn" Dec 03 13:18:16 crc kubenswrapper[4986]: I1203 13:18:16.904237 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e71caade-d3ac-45c2-8369-2c2a1d896370-combined-ca-bundle\") pod \"keystone-bootstrap-wrbjn\" (UID: \"e71caade-d3ac-45c2-8369-2c2a1d896370\") " pod="openstack/keystone-bootstrap-wrbjn" Dec 03 13:18:16 crc kubenswrapper[4986]: I1203 13:18:16.911747 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e71caade-d3ac-45c2-8369-2c2a1d896370-config-data\") pod \"keystone-bootstrap-wrbjn\" (UID: \"e71caade-d3ac-45c2-8369-2c2a1d896370\") " pod="openstack/keystone-bootstrap-wrbjn" Dec 03 13:18:16 crc 
kubenswrapper[4986]: I1203 13:18:16.957608 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88108c63-cf5e-4373-85d1-80d5803bb7c9" path="/var/lib/kubelet/pods/88108c63-cf5e-4373-85d1-80d5803bb7c9/volumes" Dec 03 13:18:17 crc kubenswrapper[4986]: I1203 13:18:17.051420 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wrbjn" Dec 03 13:18:17 crc kubenswrapper[4986]: I1203 13:18:17.288114 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-765d896c9f-95hgf" Dec 03 13:18:17 crc kubenswrapper[4986]: I1203 13:18:17.396530 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1fddf481-1472-4685-b105-65c87a3b045a-scripts\") pod \"1fddf481-1472-4685-b105-65c87a3b045a\" (UID: \"1fddf481-1472-4685-b105-65c87a3b045a\") " Dec 03 13:18:17 crc kubenswrapper[4986]: I1203 13:18:17.396689 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1fddf481-1472-4685-b105-65c87a3b045a-horizon-secret-key\") pod \"1fddf481-1472-4685-b105-65c87a3b045a\" (UID: \"1fddf481-1472-4685-b105-65c87a3b045a\") " Dec 03 13:18:17 crc kubenswrapper[4986]: I1203 13:18:17.396760 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1fddf481-1472-4685-b105-65c87a3b045a-config-data\") pod \"1fddf481-1472-4685-b105-65c87a3b045a\" (UID: \"1fddf481-1472-4685-b105-65c87a3b045a\") " Dec 03 13:18:17 crc kubenswrapper[4986]: I1203 13:18:17.396818 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fddf481-1472-4685-b105-65c87a3b045a-logs\") pod \"1fddf481-1472-4685-b105-65c87a3b045a\" (UID: \"1fddf481-1472-4685-b105-65c87a3b045a\") " Dec 03 13:18:17 crc 
kubenswrapper[4986]: I1203 13:18:17.396882 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v52th\" (UniqueName: \"kubernetes.io/projected/1fddf481-1472-4685-b105-65c87a3b045a-kube-api-access-v52th\") pod \"1fddf481-1472-4685-b105-65c87a3b045a\" (UID: \"1fddf481-1472-4685-b105-65c87a3b045a\") " Dec 03 13:18:17 crc kubenswrapper[4986]: I1203 13:18:17.397136 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fddf481-1472-4685-b105-65c87a3b045a-scripts" (OuterVolumeSpecName: "scripts") pod "1fddf481-1472-4685-b105-65c87a3b045a" (UID: "1fddf481-1472-4685-b105-65c87a3b045a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:18:17 crc kubenswrapper[4986]: I1203 13:18:17.397368 4986 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1fddf481-1472-4685-b105-65c87a3b045a-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:17 crc kubenswrapper[4986]: I1203 13:18:17.397567 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fddf481-1472-4685-b105-65c87a3b045a-logs" (OuterVolumeSpecName: "logs") pod "1fddf481-1472-4685-b105-65c87a3b045a" (UID: "1fddf481-1472-4685-b105-65c87a3b045a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:18:17 crc kubenswrapper[4986]: I1203 13:18:17.397745 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fddf481-1472-4685-b105-65c87a3b045a-config-data" (OuterVolumeSpecName: "config-data") pod "1fddf481-1472-4685-b105-65c87a3b045a" (UID: "1fddf481-1472-4685-b105-65c87a3b045a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:18:17 crc kubenswrapper[4986]: I1203 13:18:17.402878 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fddf481-1472-4685-b105-65c87a3b045a-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "1fddf481-1472-4685-b105-65c87a3b045a" (UID: "1fddf481-1472-4685-b105-65c87a3b045a"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:18:17 crc kubenswrapper[4986]: I1203 13:18:17.403265 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fddf481-1472-4685-b105-65c87a3b045a-kube-api-access-v52th" (OuterVolumeSpecName: "kube-api-access-v52th") pod "1fddf481-1472-4685-b105-65c87a3b045a" (UID: "1fddf481-1472-4685-b105-65c87a3b045a"). InnerVolumeSpecName "kube-api-access-v52th". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:18:17 crc kubenswrapper[4986]: I1203 13:18:17.499342 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v52th\" (UniqueName: \"kubernetes.io/projected/1fddf481-1472-4685-b105-65c87a3b045a-kube-api-access-v52th\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:17 crc kubenswrapper[4986]: I1203 13:18:17.499389 4986 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1fddf481-1472-4685-b105-65c87a3b045a-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:17 crc kubenswrapper[4986]: I1203 13:18:17.499402 4986 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1fddf481-1472-4685-b105-65c87a3b045a-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:17 crc kubenswrapper[4986]: I1203 13:18:17.499413 4986 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fddf481-1472-4685-b105-65c87a3b045a-logs\") 
on node \"crc\" DevicePath \"\"" Dec 03 13:18:18 crc kubenswrapper[4986]: I1203 13:18:18.192595 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-765d896c9f-95hgf" event={"ID":"1fddf481-1472-4685-b105-65c87a3b045a","Type":"ContainerDied","Data":"c26a3b1aa1c6f556fac33f83d625ed3c82227337b1ca7583d6fc1b3a1cb2e038"} Dec 03 13:18:18 crc kubenswrapper[4986]: I1203 13:18:18.192643 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-765d896c9f-95hgf" Dec 03 13:18:18 crc kubenswrapper[4986]: I1203 13:18:18.252834 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-765d896c9f-95hgf"] Dec 03 13:18:18 crc kubenswrapper[4986]: I1203 13:18:18.270441 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-765d896c9f-95hgf"] Dec 03 13:18:18 crc kubenswrapper[4986]: I1203 13:18:18.830969 4986 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-gfndf" podUID="d9339da5-40ba-490f-8309-389ec66fd0d2" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: connect: connection refused" Dec 03 13:18:18 crc kubenswrapper[4986]: I1203 13:18:18.960065 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fddf481-1472-4685-b105-65c87a3b045a" path="/var/lib/kubelet/pods/1fddf481-1472-4685-b105-65c87a3b045a/volumes" Dec 03 13:18:20 crc kubenswrapper[4986]: E1203 13:18:20.286216 4986 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Dec 03 13:18:20 crc kubenswrapper[4986]: E1203 13:18:20.286668 4986 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8r7v6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-2k7gk_openstack(d560e389-543d-4341-a450-e6cb0f2a3057): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 13:18:20 crc kubenswrapper[4986]: E1203 13:18:20.287868 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-2k7gk" 
podUID="d560e389-543d-4341-a450-e6cb0f2a3057" Dec 03 13:18:20 crc kubenswrapper[4986]: I1203 13:18:20.482076 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6568745857-wp84p" Dec 03 13:18:20 crc kubenswrapper[4986]: I1203 13:18:20.554474 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfnrd\" (UniqueName: \"kubernetes.io/projected/86290538-808d-4761-a2c6-d9b9509ec364-kube-api-access-wfnrd\") pod \"86290538-808d-4761-a2c6-d9b9509ec364\" (UID: \"86290538-808d-4761-a2c6-d9b9509ec364\") " Dec 03 13:18:20 crc kubenswrapper[4986]: I1203 13:18:20.554669 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/86290538-808d-4761-a2c6-d9b9509ec364-scripts\") pod \"86290538-808d-4761-a2c6-d9b9509ec364\" (UID: \"86290538-808d-4761-a2c6-d9b9509ec364\") " Dec 03 13:18:20 crc kubenswrapper[4986]: I1203 13:18:20.554822 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86290538-808d-4761-a2c6-d9b9509ec364-logs\") pod \"86290538-808d-4761-a2c6-d9b9509ec364\" (UID: \"86290538-808d-4761-a2c6-d9b9509ec364\") " Dec 03 13:18:20 crc kubenswrapper[4986]: I1203 13:18:20.554894 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/86290538-808d-4761-a2c6-d9b9509ec364-horizon-secret-key\") pod \"86290538-808d-4761-a2c6-d9b9509ec364\" (UID: \"86290538-808d-4761-a2c6-d9b9509ec364\") " Dec 03 13:18:20 crc kubenswrapper[4986]: I1203 13:18:20.555049 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/86290538-808d-4761-a2c6-d9b9509ec364-config-data\") pod \"86290538-808d-4761-a2c6-d9b9509ec364\" (UID: \"86290538-808d-4761-a2c6-d9b9509ec364\") " Dec 03 13:18:20 
crc kubenswrapper[4986]: I1203 13:18:20.555180 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86290538-808d-4761-a2c6-d9b9509ec364-logs" (OuterVolumeSpecName: "logs") pod "86290538-808d-4761-a2c6-d9b9509ec364" (UID: "86290538-808d-4761-a2c6-d9b9509ec364"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:18:20 crc kubenswrapper[4986]: I1203 13:18:20.555607 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86290538-808d-4761-a2c6-d9b9509ec364-scripts" (OuterVolumeSpecName: "scripts") pod "86290538-808d-4761-a2c6-d9b9509ec364" (UID: "86290538-808d-4761-a2c6-d9b9509ec364"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:18:20 crc kubenswrapper[4986]: I1203 13:18:20.555722 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86290538-808d-4761-a2c6-d9b9509ec364-config-data" (OuterVolumeSpecName: "config-data") pod "86290538-808d-4761-a2c6-d9b9509ec364" (UID: "86290538-808d-4761-a2c6-d9b9509ec364"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:18:20 crc kubenswrapper[4986]: I1203 13:18:20.556428 4986 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/86290538-808d-4761-a2c6-d9b9509ec364-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:20 crc kubenswrapper[4986]: I1203 13:18:20.556484 4986 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86290538-808d-4761-a2c6-d9b9509ec364-logs\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:20 crc kubenswrapper[4986]: I1203 13:18:20.556498 4986 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/86290538-808d-4761-a2c6-d9b9509ec364-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:20 crc kubenswrapper[4986]: I1203 13:18:20.560978 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86290538-808d-4761-a2c6-d9b9509ec364-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "86290538-808d-4761-a2c6-d9b9509ec364" (UID: "86290538-808d-4761-a2c6-d9b9509ec364"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:18:20 crc kubenswrapper[4986]: I1203 13:18:20.570510 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86290538-808d-4761-a2c6-d9b9509ec364-kube-api-access-wfnrd" (OuterVolumeSpecName: "kube-api-access-wfnrd") pod "86290538-808d-4761-a2c6-d9b9509ec364" (UID: "86290538-808d-4761-a2c6-d9b9509ec364"). InnerVolumeSpecName "kube-api-access-wfnrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:18:20 crc kubenswrapper[4986]: I1203 13:18:20.658272 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfnrd\" (UniqueName: \"kubernetes.io/projected/86290538-808d-4761-a2c6-d9b9509ec364-kube-api-access-wfnrd\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:20 crc kubenswrapper[4986]: I1203 13:18:20.658332 4986 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/86290538-808d-4761-a2c6-d9b9509ec364-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:21 crc kubenswrapper[4986]: I1203 13:18:21.217670 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6568745857-wp84p" event={"ID":"86290538-808d-4761-a2c6-d9b9509ec364","Type":"ContainerDied","Data":"062cc3ccc9913526e0e4393adbf8eba9e080ae7e5ad8c81a6966328d3ff3e189"} Dec 03 13:18:21 crc kubenswrapper[4986]: I1203 13:18:21.217821 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6568745857-wp84p" Dec 03 13:18:21 crc kubenswrapper[4986]: E1203 13:18:21.219239 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-2k7gk" podUID="d560e389-543d-4341-a450-e6cb0f2a3057" Dec 03 13:18:21 crc kubenswrapper[4986]: I1203 13:18:21.280717 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6568745857-wp84p"] Dec 03 13:18:21 crc kubenswrapper[4986]: I1203 13:18:21.285368 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6568745857-wp84p"] Dec 03 13:18:22 crc kubenswrapper[4986]: I1203 13:18:22.951714 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86290538-808d-4761-a2c6-d9b9509ec364" path="/var/lib/kubelet/pods/86290538-808d-4761-a2c6-d9b9509ec364/volumes" Dec 03 13:18:27 crc kubenswrapper[4986]: I1203 13:18:27.601258 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-gfndf"
Dec 03 13:18:27 crc kubenswrapper[4986]: I1203 13:18:27.785306 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9339da5-40ba-490f-8309-389ec66fd0d2-dns-svc\") pod \"d9339da5-40ba-490f-8309-389ec66fd0d2\" (UID: \"d9339da5-40ba-490f-8309-389ec66fd0d2\") "
Dec 03 13:18:27 crc kubenswrapper[4986]: I1203 13:18:27.785793 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9339da5-40ba-490f-8309-389ec66fd0d2-config\") pod \"d9339da5-40ba-490f-8309-389ec66fd0d2\" (UID: \"d9339da5-40ba-490f-8309-389ec66fd0d2\") "
Dec 03 13:18:27 crc kubenswrapper[4986]: I1203 13:18:27.785857 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9339da5-40ba-490f-8309-389ec66fd0d2-ovsdbserver-nb\") pod \"d9339da5-40ba-490f-8309-389ec66fd0d2\" (UID: \"d9339da5-40ba-490f-8309-389ec66fd0d2\") "
Dec 03 13:18:27 crc kubenswrapper[4986]: I1203 13:18:27.785888 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d9339da5-40ba-490f-8309-389ec66fd0d2-ovsdbserver-sb\") pod \"d9339da5-40ba-490f-8309-389ec66fd0d2\" (UID: \"d9339da5-40ba-490f-8309-389ec66fd0d2\") "
Dec 03 13:18:27 crc kubenswrapper[4986]: I1203 13:18:27.785911 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rc57t\" (UniqueName: \"kubernetes.io/projected/d9339da5-40ba-490f-8309-389ec66fd0d2-kube-api-access-rc57t\") pod \"d9339da5-40ba-490f-8309-389ec66fd0d2\" (UID: \"d9339da5-40ba-490f-8309-389ec66fd0d2\") "
Dec 03 13:18:27 crc kubenswrapper[4986]: I1203 13:18:27.790725 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9339da5-40ba-490f-8309-389ec66fd0d2-kube-api-access-rc57t" (OuterVolumeSpecName: "kube-api-access-rc57t") pod "d9339da5-40ba-490f-8309-389ec66fd0d2" (UID: "d9339da5-40ba-490f-8309-389ec66fd0d2"). InnerVolumeSpecName "kube-api-access-rc57t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 13:18:27 crc kubenswrapper[4986]: I1203 13:18:27.829488 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9339da5-40ba-490f-8309-389ec66fd0d2-config" (OuterVolumeSpecName: "config") pod "d9339da5-40ba-490f-8309-389ec66fd0d2" (UID: "d9339da5-40ba-490f-8309-389ec66fd0d2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 13:18:27 crc kubenswrapper[4986]: I1203 13:18:27.834377 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9339da5-40ba-490f-8309-389ec66fd0d2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d9339da5-40ba-490f-8309-389ec66fd0d2" (UID: "d9339da5-40ba-490f-8309-389ec66fd0d2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 13:18:27 crc kubenswrapper[4986]: I1203 13:18:27.836217 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9339da5-40ba-490f-8309-389ec66fd0d2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d9339da5-40ba-490f-8309-389ec66fd0d2" (UID: "d9339da5-40ba-490f-8309-389ec66fd0d2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 13:18:27 crc kubenswrapper[4986]: I1203 13:18:27.837706 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9339da5-40ba-490f-8309-389ec66fd0d2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d9339da5-40ba-490f-8309-389ec66fd0d2" (UID: "d9339da5-40ba-490f-8309-389ec66fd0d2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 13:18:27 crc kubenswrapper[4986]: I1203 13:18:27.887715 4986 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9339da5-40ba-490f-8309-389ec66fd0d2-config\") on node \"crc\" DevicePath \"\""
Dec 03 13:18:27 crc kubenswrapper[4986]: I1203 13:18:27.887751 4986 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9339da5-40ba-490f-8309-389ec66fd0d2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Dec 03 13:18:27 crc kubenswrapper[4986]: I1203 13:18:27.887790 4986 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d9339da5-40ba-490f-8309-389ec66fd0d2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Dec 03 13:18:27 crc kubenswrapper[4986]: I1203 13:18:27.887804 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rc57t\" (UniqueName: \"kubernetes.io/projected/d9339da5-40ba-490f-8309-389ec66fd0d2-kube-api-access-rc57t\") on node \"crc\" DevicePath \"\""
Dec 03 13:18:27 crc kubenswrapper[4986]: I1203 13:18:27.887816 4986 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9339da5-40ba-490f-8309-389ec66fd0d2-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 03 13:18:28 crc kubenswrapper[4986]: I1203 13:18:28.273264 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-gfndf" event={"ID":"d9339da5-40ba-490f-8309-389ec66fd0d2","Type":"ContainerDied","Data":"ea8799acf594edee25c399c2f65f836227181468d75147bde47a3b094705362a"}
Dec 03 13:18:28 crc kubenswrapper[4986]: I1203 13:18:28.273345 4986 scope.go:117] "RemoveContainer" containerID="e792b52c823c49082b555ed5bd65bd798bd2827b627ae09894b673d79d462ec0"
Dec 03 13:18:28 crc kubenswrapper[4986]: I1203 13:18:28.273372 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-gfndf"
Dec 03 13:18:28 crc kubenswrapper[4986]: I1203 13:18:28.337179 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-gfndf"]
Dec 03 13:18:28 crc kubenswrapper[4986]: I1203 13:18:28.347459 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-gfndf"]
Dec 03 13:18:28 crc kubenswrapper[4986]: I1203 13:18:28.831118 4986 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-gfndf" podUID="d9339da5-40ba-490f-8309-389ec66fd0d2" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: i/o timeout"
Dec 03 13:18:28 crc kubenswrapper[4986]: I1203 13:18:28.912509 4986 scope.go:117] "RemoveContainer" containerID="72781ace6ad04f6f7b3ed5bc8a4b671dca3f4355392598ed5f1698bc62dbdeda"
Dec 03 13:18:28 crc kubenswrapper[4986]: E1203 13:18:28.925579 4986 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified"
Dec 03 13:18:28 crc kubenswrapper[4986]: E1203 13:18:28.925901 4986 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7nxn8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-qpv6c_openstack(7b8ed42a-89ce-4098-9489-5291e678bf18): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 03 13:18:28 crc kubenswrapper[4986]: E1203 13:18:28.930469 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-qpv6c" podUID="7b8ed42a-89ce-4098-9489-5291e678bf18"
Dec 03 13:18:28 crc kubenswrapper[4986]: I1203 13:18:28.964937 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9339da5-40ba-490f-8309-389ec66fd0d2" path="/var/lib/kubelet/pods/d9339da5-40ba-490f-8309-389ec66fd0d2/volumes"
Dec 03 13:18:29 crc kubenswrapper[4986]: I1203 13:18:29.301996 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8v24s" event={"ID":"9ca26c94-4546-4d5d-8600-30257c724198","Type":"ContainerStarted","Data":"dd577315d509f9c14bcfc2c833af575a89f8978df77e1e1cbb64fa674057668c"}
Dec 03 13:18:29 crc kubenswrapper[4986]: I1203 13:18:29.308804 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7797f969d4-6c2wn"]
Dec 03 13:18:29 crc kubenswrapper[4986]: I1203 13:18:29.311769 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"143eb6df-5711-4616-baaa-42417113bfed","Type":"ContainerStarted","Data":"31623ea22a4e33dc46e24291324cbd81da55ddc262f3a20d8b8d61d022e9f667"}
Dec 03 13:18:29 crc kubenswrapper[4986]: I1203 13:18:29.315369 4986 generic.go:334] "Generic (PLEG): container finished" podID="14924968-6b9f-4a33-b504-dbfd64956b30" containerID="b86fc780bc92a3f4d38f408567bda5883068934dad3ebc7618aec704e587797b" exitCode=0
Dec 03 13:18:29 crc kubenswrapper[4986]: I1203 13:18:29.315381 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4s25c" event={"ID":"14924968-6b9f-4a33-b504-dbfd64956b30","Type":"ContainerDied","Data":"b86fc780bc92a3f4d38f408567bda5883068934dad3ebc7618aec704e587797b"}
Dec 03 13:18:29 crc kubenswrapper[4986]: I1203 13:18:29.320826 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76c69b8dbf-kzlpf" event={"ID":"dc9fa065-2e88-4181-ab5a-be64336bed7d","Type":"ContainerStarted","Data":"1159886bb01a7d0dde71c42912ecd0c27e1490067409676edf00d3431f8fe430"}
Dec 03 13:18:29 crc kubenswrapper[4986]: E1203 13:18:29.322328 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-qpv6c" podUID="7b8ed42a-89ce-4098-9489-5291e678bf18"
Dec 03 13:18:29 crc kubenswrapper[4986]: I1203 13:18:29.336040 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-8v24s" podStartSLOduration=2.302963949 podStartE2EDuration="30.33601843s" podCreationTimestamp="2025-12-03 13:17:59 +0000 UTC" firstStartedPulling="2025-12-03 13:18:00.828690181 +0000 UTC m=+1340.295121372" lastFinishedPulling="2025-12-03 13:18:28.861744662 +0000 UTC m=+1368.328175853" observedRunningTime="2025-12-03 13:18:29.319753012 +0000 UTC m=+1368.786184203" watchObservedRunningTime="2025-12-03 13:18:29.33601843 +0000 UTC m=+1368.802449621"
Dec 03 13:18:29 crc kubenswrapper[4986]: W1203 13:18:29.340752 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d67e23b_bda4_42d5_81b6_be58c643861d.slice/crio-fe1470f4d4e3b6ce51ad2fdda0a21ca48c7b609e54ac88a84658831479a3662d WatchSource:0}: Error finding container fe1470f4d4e3b6ce51ad2fdda0a21ca48c7b609e54ac88a84658831479a3662d: Status 404 returned error can't find the container with id fe1470f4d4e3b6ce51ad2fdda0a21ca48c7b609e54ac88a84658831479a3662d
Dec 03 13:18:29 crc kubenswrapper[4986]: I1203 13:18:29.381857 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wrbjn"]
Dec 03 13:18:29 crc kubenswrapper[4986]: I1203 13:18:29.429824 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-cc774c568-phcpp"]
Dec 03 13:18:30 crc kubenswrapper[4986]: I1203 13:18:30.362062 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7797f969d4-6c2wn" event={"ID":"4d67e23b-bda4-42d5-81b6-be58c643861d","Type":"ContainerStarted","Data":"b47dcc4be740d32c8698661ded5c7a7e2f72bf57847528f4cf1df88f6efdacb0"}
Dec 03 13:18:30 crc kubenswrapper[4986]: I1203 13:18:30.362734 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7797f969d4-6c2wn" event={"ID":"4d67e23b-bda4-42d5-81b6-be58c643861d","Type":"ContainerStarted","Data":"98e83fcfe46bdd7d1d725086532625bcc5259a59beb3c858cb3db1970c6da08a"}
Dec 03 13:18:30 crc kubenswrapper[4986]: I1203 13:18:30.362756 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7797f969d4-6c2wn" event={"ID":"4d67e23b-bda4-42d5-81b6-be58c643861d","Type":"ContainerStarted","Data":"fe1470f4d4e3b6ce51ad2fdda0a21ca48c7b609e54ac88a84658831479a3662d"}
Dec 03 13:18:30 crc kubenswrapper[4986]: I1203 13:18:30.369407 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cc774c568-phcpp" event={"ID":"cee5d9b6-d11e-4bad-b013-196a4f401404","Type":"ContainerStarted","Data":"b3a510936fd6eb78b8694de1aa0785b123dd535c543cc43a92b46a0c5eb19968"}
Dec 03 13:18:30 crc kubenswrapper[4986]: I1203 13:18:30.369461 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cc774c568-phcpp" event={"ID":"cee5d9b6-d11e-4bad-b013-196a4f401404","Type":"ContainerStarted","Data":"d7d30e426c52ae0b45fc8f0203633e1d5f3166da333a8d50729b7513d223357e"}
Dec 03 13:18:30 crc kubenswrapper[4986]: I1203 13:18:30.369476 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cc774c568-phcpp" event={"ID":"cee5d9b6-d11e-4bad-b013-196a4f401404","Type":"ContainerStarted","Data":"2e58cb3d381ed3dce5a24d1d0fcdeacc052821d334ca39b2b325f2950db053f4"}
Dec 03 13:18:30 crc kubenswrapper[4986]: I1203 13:18:30.373021 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wrbjn" event={"ID":"e71caade-d3ac-45c2-8369-2c2a1d896370","Type":"ContainerStarted","Data":"dbd6cfcaed1a911968e997f6a395fdd03f3717d39b70c037e0fa75aadf9e7f0d"}
Dec 03 13:18:30 crc kubenswrapper[4986]: I1203 13:18:30.373064 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wrbjn" event={"ID":"e71caade-d3ac-45c2-8369-2c2a1d896370","Type":"ContainerStarted","Data":"9a8ddf760577ff7ab3ece19b74c8e166c8fd47c6539f81ad40d0ef94db4870ae"}
Dec 03 13:18:30 crc kubenswrapper[4986]: I1203 13:18:30.377127 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76c69b8dbf-kzlpf" event={"ID":"dc9fa065-2e88-4181-ab5a-be64336bed7d","Type":"ContainerStarted","Data":"d9d2a9780ee6c796b5953539bd3cdbf854011c09938f122a7a2b048d2fe551c3"}
Dec 03 13:18:30 crc kubenswrapper[4986]: I1203 13:18:30.377182 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-76c69b8dbf-kzlpf" podUID="dc9fa065-2e88-4181-ab5a-be64336bed7d" containerName="horizon-log" containerID="cri-o://1159886bb01a7d0dde71c42912ecd0c27e1490067409676edf00d3431f8fe430" gracePeriod=30
Dec 03 13:18:30 crc kubenswrapper[4986]: I1203 13:18:30.377216 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-76c69b8dbf-kzlpf" podUID="dc9fa065-2e88-4181-ab5a-be64336bed7d" containerName="horizon" containerID="cri-o://d9d2a9780ee6c796b5953539bd3cdbf854011c09938f122a7a2b048d2fe551c3" gracePeriod=30
Dec 03 13:18:30 crc kubenswrapper[4986]: I1203 13:18:30.383497 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-8f5bt" event={"ID":"0874137d-06da-450a-9e93-ad53257c5115","Type":"ContainerStarted","Data":"eb1cbefc6c37e055dbcce2b749fbdf02cc41bb51b8fcdfd46e94ad0193235add"}
Dec 03 13:18:30 crc kubenswrapper[4986]: I1203 13:18:30.388315 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7797f969d4-6c2wn" podStartSLOduration=22.388277034 podStartE2EDuration="22.388277034s" podCreationTimestamp="2025-12-03 13:18:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:18:30.383012242 +0000 UTC m=+1369.849443433" watchObservedRunningTime="2025-12-03 13:18:30.388277034 +0000 UTC m=+1369.854708225"
Dec 03 13:18:30 crc kubenswrapper[4986]: I1203 13:18:30.414163 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-cc774c568-phcpp" podStartSLOduration=22.414143412 podStartE2EDuration="22.414143412s" podCreationTimestamp="2025-12-03 13:18:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:18:30.410575916 +0000 UTC m=+1369.877007117" watchObservedRunningTime="2025-12-03 13:18:30.414143412 +0000 UTC m=+1369.880574613"
Dec 03 13:18:30 crc kubenswrapper[4986]: I1203 13:18:30.438530 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-wrbjn" podStartSLOduration=14.4385076 podStartE2EDuration="14.4385076s" podCreationTimestamp="2025-12-03 13:18:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:18:30.431747067 +0000 UTC m=+1369.898178278" watchObservedRunningTime="2025-12-03 13:18:30.4385076 +0000 UTC m=+1369.904938791"
Dec 03 13:18:30 crc kubenswrapper[4986]: I1203 13:18:30.459325 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-76c69b8dbf-kzlpf" podStartSLOduration=3.371694268 podStartE2EDuration="29.45930283s" podCreationTimestamp="2025-12-03 13:18:01 +0000 UTC" firstStartedPulling="2025-12-03 13:18:02.782969019 +0000 UTC m=+1342.249400210" lastFinishedPulling="2025-12-03 13:18:28.870577581 +0000 UTC m=+1368.337008772" observedRunningTime="2025-12-03 13:18:30.450965956 +0000 UTC m=+1369.917397167" watchObservedRunningTime="2025-12-03 13:18:30.45930283 +0000 UTC m=+1369.925734021"
Dec 03 13:18:30 crc kubenswrapper[4986]: I1203 13:18:30.477177 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-8f5bt" podStartSLOduration=2.750472265 podStartE2EDuration="1m3.477154792s" podCreationTimestamp="2025-12-03 13:17:27 +0000 UTC" firstStartedPulling="2025-12-03 13:17:28.145072536 +0000 UTC m=+1307.611503727" lastFinishedPulling="2025-12-03 13:18:28.871755063 +0000 UTC m=+1368.338186254" observedRunningTime="2025-12-03 13:18:30.468744075 +0000 UTC m=+1369.935175276" watchObservedRunningTime="2025-12-03 13:18:30.477154792 +0000 UTC m=+1369.943585983"
Dec 03 13:18:31 crc kubenswrapper[4986]: I1203 13:18:31.944607 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-4s25c"
Dec 03 13:18:32 crc kubenswrapper[4986]: I1203 13:18:32.098817 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/14924968-6b9f-4a33-b504-dbfd64956b30-config\") pod \"14924968-6b9f-4a33-b504-dbfd64956b30\" (UID: \"14924968-6b9f-4a33-b504-dbfd64956b30\") "
Dec 03 13:18:32 crc kubenswrapper[4986]: I1203 13:18:32.099296 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plvns\" (UniqueName: \"kubernetes.io/projected/14924968-6b9f-4a33-b504-dbfd64956b30-kube-api-access-plvns\") pod \"14924968-6b9f-4a33-b504-dbfd64956b30\" (UID: \"14924968-6b9f-4a33-b504-dbfd64956b30\") "
Dec 03 13:18:32 crc kubenswrapper[4986]: I1203 13:18:32.099505 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14924968-6b9f-4a33-b504-dbfd64956b30-combined-ca-bundle\") pod \"14924968-6b9f-4a33-b504-dbfd64956b30\" (UID: \"14924968-6b9f-4a33-b504-dbfd64956b30\") "
Dec 03 13:18:32 crc kubenswrapper[4986]: I1203 13:18:32.106535 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14924968-6b9f-4a33-b504-dbfd64956b30-kube-api-access-plvns" (OuterVolumeSpecName: "kube-api-access-plvns") pod "14924968-6b9f-4a33-b504-dbfd64956b30" (UID: "14924968-6b9f-4a33-b504-dbfd64956b30"). InnerVolumeSpecName "kube-api-access-plvns". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 13:18:32 crc kubenswrapper[4986]: I1203 13:18:32.127233 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14924968-6b9f-4a33-b504-dbfd64956b30-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "14924968-6b9f-4a33-b504-dbfd64956b30" (UID: "14924968-6b9f-4a33-b504-dbfd64956b30"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 13:18:32 crc kubenswrapper[4986]: I1203 13:18:32.130618 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-76c69b8dbf-kzlpf"
Dec 03 13:18:32 crc kubenswrapper[4986]: I1203 13:18:32.133774 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14924968-6b9f-4a33-b504-dbfd64956b30-config" (OuterVolumeSpecName: "config") pod "14924968-6b9f-4a33-b504-dbfd64956b30" (UID: "14924968-6b9f-4a33-b504-dbfd64956b30"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 13:18:32 crc kubenswrapper[4986]: I1203 13:18:32.201882 4986 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/14924968-6b9f-4a33-b504-dbfd64956b30-config\") on node \"crc\" DevicePath \"\""
Dec 03 13:18:32 crc kubenswrapper[4986]: I1203 13:18:32.201919 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plvns\" (UniqueName: \"kubernetes.io/projected/14924968-6b9f-4a33-b504-dbfd64956b30-kube-api-access-plvns\") on node \"crc\" DevicePath \"\""
Dec 03 13:18:32 crc kubenswrapper[4986]: I1203 13:18:32.201929 4986 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14924968-6b9f-4a33-b504-dbfd64956b30-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 03 13:18:32 crc kubenswrapper[4986]: I1203 13:18:32.398981 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"143eb6df-5711-4616-baaa-42417113bfed","Type":"ContainerStarted","Data":"8fd2e1e494f36abbefe9886a1586a47aa16686c32a62d8f2e95942c2e0f8f466"}
Dec 03 13:18:32 crc kubenswrapper[4986]: I1203 13:18:32.400864 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4s25c" event={"ID":"14924968-6b9f-4a33-b504-dbfd64956b30","Type":"ContainerDied","Data":"5bc3e9d9c76461fabb0eb2dbcdf2d4f0df70792e9726568391782a93af3d1428"}
Dec 03 13:18:32 crc kubenswrapper[4986]: I1203 13:18:32.400903 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bc3e9d9c76461fabb0eb2dbcdf2d4f0df70792e9726568391782a93af3d1428"
Dec 03 13:18:32 crc kubenswrapper[4986]: I1203 13:18:32.400915 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-4s25c"
Dec 03 13:18:33 crc kubenswrapper[4986]: I1203 13:18:33.132117 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-65965d6475-4jj4z"]
Dec 03 13:18:33 crc kubenswrapper[4986]: E1203 13:18:33.132889 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14924968-6b9f-4a33-b504-dbfd64956b30" containerName="neutron-db-sync"
Dec 03 13:18:33 crc kubenswrapper[4986]: I1203 13:18:33.132907 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="14924968-6b9f-4a33-b504-dbfd64956b30" containerName="neutron-db-sync"
Dec 03 13:18:33 crc kubenswrapper[4986]: E1203 13:18:33.132921 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9339da5-40ba-490f-8309-389ec66fd0d2" containerName="init"
Dec 03 13:18:33 crc kubenswrapper[4986]: I1203 13:18:33.132927 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9339da5-40ba-490f-8309-389ec66fd0d2" containerName="init"
Dec 03 13:18:33 crc kubenswrapper[4986]: E1203 13:18:33.132958 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9339da5-40ba-490f-8309-389ec66fd0d2" containerName="dnsmasq-dns"
Dec 03 13:18:33 crc kubenswrapper[4986]: I1203 13:18:33.132967 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9339da5-40ba-490f-8309-389ec66fd0d2" containerName="dnsmasq-dns"
Dec 03 13:18:33 crc kubenswrapper[4986]: I1203 13:18:33.133170 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="14924968-6b9f-4a33-b504-dbfd64956b30" containerName="neutron-db-sync"
Dec 03 13:18:33 crc kubenswrapper[4986]: I1203 13:18:33.133198 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9339da5-40ba-490f-8309-389ec66fd0d2" containerName="dnsmasq-dns"
Dec 03 13:18:33 crc kubenswrapper[4986]: I1203 13:18:33.134268 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65965d6475-4jj4z"
Dec 03 13:18:33 crc kubenswrapper[4986]: I1203 13:18:33.166507 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65965d6475-4jj4z"]
Dec 03 13:18:33 crc kubenswrapper[4986]: I1203 13:18:33.219490 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0379ff65-3714-49fe-8f61-22fa65b88922-ovsdbserver-nb\") pod \"dnsmasq-dns-65965d6475-4jj4z\" (UID: \"0379ff65-3714-49fe-8f61-22fa65b88922\") " pod="openstack/dnsmasq-dns-65965d6475-4jj4z"
Dec 03 13:18:33 crc kubenswrapper[4986]: I1203 13:18:33.219591 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x4g8\" (UniqueName: \"kubernetes.io/projected/0379ff65-3714-49fe-8f61-22fa65b88922-kube-api-access-6x4g8\") pod \"dnsmasq-dns-65965d6475-4jj4z\" (UID: \"0379ff65-3714-49fe-8f61-22fa65b88922\") " pod="openstack/dnsmasq-dns-65965d6475-4jj4z"
Dec 03 13:18:33 crc kubenswrapper[4986]: I1203 13:18:33.219616 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0379ff65-3714-49fe-8f61-22fa65b88922-dns-swift-storage-0\") pod \"dnsmasq-dns-65965d6475-4jj4z\" (UID: \"0379ff65-3714-49fe-8f61-22fa65b88922\") " pod="openstack/dnsmasq-dns-65965d6475-4jj4z"
Dec 03 13:18:33 crc kubenswrapper[4986]: I1203 13:18:33.219889 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0379ff65-3714-49fe-8f61-22fa65b88922-config\") pod \"dnsmasq-dns-65965d6475-4jj4z\" (UID: \"0379ff65-3714-49fe-8f61-22fa65b88922\") " pod="openstack/dnsmasq-dns-65965d6475-4jj4z"
Dec 03 13:18:33 crc kubenswrapper[4986]: I1203 13:18:33.219955 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0379ff65-3714-49fe-8f61-22fa65b88922-ovsdbserver-sb\") pod \"dnsmasq-dns-65965d6475-4jj4z\" (UID: \"0379ff65-3714-49fe-8f61-22fa65b88922\") " pod="openstack/dnsmasq-dns-65965d6475-4jj4z"
Dec 03 13:18:33 crc kubenswrapper[4986]: I1203 13:18:33.219994 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0379ff65-3714-49fe-8f61-22fa65b88922-dns-svc\") pod \"dnsmasq-dns-65965d6475-4jj4z\" (UID: \"0379ff65-3714-49fe-8f61-22fa65b88922\") " pod="openstack/dnsmasq-dns-65965d6475-4jj4z"
Dec 03 13:18:33 crc kubenswrapper[4986]: I1203 13:18:33.241944 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5dc7cf89f4-56jhq"]
Dec 03 13:18:33 crc kubenswrapper[4986]: I1203 13:18:33.243880 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5dc7cf89f4-56jhq"
Dec 03 13:18:33 crc kubenswrapper[4986]: I1203 13:18:33.250552 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5dc7cf89f4-56jhq"]
Dec 03 13:18:33 crc kubenswrapper[4986]: I1203 13:18:33.251010 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Dec 03 13:18:33 crc kubenswrapper[4986]: I1203 13:18:33.251036 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Dec 03 13:18:33 crc kubenswrapper[4986]: I1203 13:18:33.251216 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-gclqm"
Dec 03 13:18:33 crc kubenswrapper[4986]: I1203 13:18:33.252468 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs"
Dec 03 13:18:33 crc kubenswrapper[4986]: I1203 13:18:33.323757 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0379ff65-3714-49fe-8f61-22fa65b88922-ovsdbserver-nb\") pod \"dnsmasq-dns-65965d6475-4jj4z\" (UID: \"0379ff65-3714-49fe-8f61-22fa65b88922\") " pod="openstack/dnsmasq-dns-65965d6475-4jj4z"
Dec 03 13:18:33 crc kubenswrapper[4986]: I1203 13:18:33.323859 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6x4g8\" (UniqueName: \"kubernetes.io/projected/0379ff65-3714-49fe-8f61-22fa65b88922-kube-api-access-6x4g8\") pod \"dnsmasq-dns-65965d6475-4jj4z\" (UID: \"0379ff65-3714-49fe-8f61-22fa65b88922\") " pod="openstack/dnsmasq-dns-65965d6475-4jj4z"
Dec 03 13:18:33 crc kubenswrapper[4986]: I1203 13:18:33.323885 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0379ff65-3714-49fe-8f61-22fa65b88922-dns-swift-storage-0\") pod \"dnsmasq-dns-65965d6475-4jj4z\" (UID: \"0379ff65-3714-49fe-8f61-22fa65b88922\") " pod="openstack/dnsmasq-dns-65965d6475-4jj4z"
Dec 03 13:18:33 crc kubenswrapper[4986]: I1203 13:18:33.323939 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0379ff65-3714-49fe-8f61-22fa65b88922-config\") pod \"dnsmasq-dns-65965d6475-4jj4z\" (UID: \"0379ff65-3714-49fe-8f61-22fa65b88922\") " pod="openstack/dnsmasq-dns-65965d6475-4jj4z"
Dec 03 13:18:33 crc kubenswrapper[4986]: I1203 13:18:33.324125 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0379ff65-3714-49fe-8f61-22fa65b88922-ovsdbserver-sb\") pod \"dnsmasq-dns-65965d6475-4jj4z\" (UID: \"0379ff65-3714-49fe-8f61-22fa65b88922\") " pod="openstack/dnsmasq-dns-65965d6475-4jj4z"
Dec 03 13:18:33 crc kubenswrapper[4986]: I1203 13:18:33.324149 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0379ff65-3714-49fe-8f61-22fa65b88922-dns-svc\") pod \"dnsmasq-dns-65965d6475-4jj4z\" (UID: \"0379ff65-3714-49fe-8f61-22fa65b88922\") " pod="openstack/dnsmasq-dns-65965d6475-4jj4z"
Dec 03 13:18:33 crc kubenswrapper[4986]: I1203 13:18:33.326909 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0379ff65-3714-49fe-8f61-22fa65b88922-dns-swift-storage-0\") pod \"dnsmasq-dns-65965d6475-4jj4z\" (UID: \"0379ff65-3714-49fe-8f61-22fa65b88922\") " pod="openstack/dnsmasq-dns-65965d6475-4jj4z"
Dec 03 13:18:33 crc kubenswrapper[4986]: I1203 13:18:33.327840 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0379ff65-3714-49fe-8f61-22fa65b88922-ovsdbserver-nb\") pod \"dnsmasq-dns-65965d6475-4jj4z\" (UID: \"0379ff65-3714-49fe-8f61-22fa65b88922\") " pod="openstack/dnsmasq-dns-65965d6475-4jj4z"
Dec 03 13:18:33 crc kubenswrapper[4986]: I1203 13:18:33.327890 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0379ff65-3714-49fe-8f61-22fa65b88922-config\") pod \"dnsmasq-dns-65965d6475-4jj4z\" (UID: \"0379ff65-3714-49fe-8f61-22fa65b88922\") " pod="openstack/dnsmasq-dns-65965d6475-4jj4z"
Dec 03 13:18:33 crc kubenswrapper[4986]: I1203 13:18:33.330876 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0379ff65-3714-49fe-8f61-22fa65b88922-ovsdbserver-sb\") pod \"dnsmasq-dns-65965d6475-4jj4z\" (UID: \"0379ff65-3714-49fe-8f61-22fa65b88922\") " pod="openstack/dnsmasq-dns-65965d6475-4jj4z"
Dec 03 13:18:33 crc kubenswrapper[4986]: I1203 13:18:33.331198 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0379ff65-3714-49fe-8f61-22fa65b88922-dns-svc\") pod \"dnsmasq-dns-65965d6475-4jj4z\" (UID: \"0379ff65-3714-49fe-8f61-22fa65b88922\") " pod="openstack/dnsmasq-dns-65965d6475-4jj4z"
Dec 03 13:18:33 crc kubenswrapper[4986]: I1203 13:18:33.345100 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x4g8\" (UniqueName: \"kubernetes.io/projected/0379ff65-3714-49fe-8f61-22fa65b88922-kube-api-access-6x4g8\") pod \"dnsmasq-dns-65965d6475-4jj4z\" (UID: \"0379ff65-3714-49fe-8f61-22fa65b88922\") " pod="openstack/dnsmasq-dns-65965d6475-4jj4z"
Dec 03 13:18:33 crc kubenswrapper[4986]: I1203 13:18:33.413030 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2k7gk" event={"ID":"d560e389-543d-4341-a450-e6cb0f2a3057","Type":"ContainerStarted","Data":"83c8b6807b6a9e6d3bfae00be0405ade99cc8d834fdaef0b74ab5b47706fa3a4"}
Dec 03 13:18:33 crc kubenswrapper[4986]: I1203 13:18:33.426165 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/db075f00-81d9-4306-b054-968590fecd46-httpd-config\") pod \"neutron-5dc7cf89f4-56jhq\" (UID: \"db075f00-81d9-4306-b054-968590fecd46\") " pod="openstack/neutron-5dc7cf89f4-56jhq"
Dec 03 13:18:33 crc kubenswrapper[4986]: I1203 13:18:33.426266 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/db075f00-81d9-4306-b054-968590fecd46-ovndb-tls-certs\") pod \"neutron-5dc7cf89f4-56jhq\" (UID: \"db075f00-81d9-4306-b054-968590fecd46\") " pod="openstack/neutron-5dc7cf89f4-56jhq"
Dec 03 13:18:33 crc kubenswrapper[4986]: I1203 13:18:33.426362 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6sv6\" (UniqueName: \"kubernetes.io/projected/db075f00-81d9-4306-b054-968590fecd46-kube-api-access-b6sv6\") pod \"neutron-5dc7cf89f4-56jhq\" (UID: \"db075f00-81d9-4306-b054-968590fecd46\") " pod="openstack/neutron-5dc7cf89f4-56jhq"
Dec 03 13:18:33 crc kubenswrapper[4986]: I1203 13:18:33.426487 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db075f00-81d9-4306-b054-968590fecd46-combined-ca-bundle\") pod \"neutron-5dc7cf89f4-56jhq\" (UID: \"db075f00-81d9-4306-b054-968590fecd46\") " pod="openstack/neutron-5dc7cf89f4-56jhq"
Dec 03 13:18:33 crc kubenswrapper[4986]: I1203 13:18:33.426647 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/db075f00-81d9-4306-b054-968590fecd46-config\") pod \"neutron-5dc7cf89f4-56jhq\" (UID: \"db075f00-81d9-4306-b054-968590fecd46\") " pod="openstack/neutron-5dc7cf89f4-56jhq"
Dec 03 13:18:33 crc kubenswrapper[4986]: I1203 13:18:33.438731 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-2k7gk" podStartSLOduration=2.811864121 podStartE2EDuration="34.438716446s" podCreationTimestamp="2025-12-03 13:17:59 +0000 UTC" firstStartedPulling="2025-12-03 13:18:00.831938329 +0000 UTC m=+1340.298369520" lastFinishedPulling="2025-12-03 13:18:32.458790654 +0000 UTC m=+1371.925221845" observedRunningTime="2025-12-03 13:18:33.428793488 +0000 UTC m=+1372.895224689" watchObservedRunningTime="2025-12-03 13:18:33.438716446 +0000 UTC m=+1372.905147637"
Dec 03 13:18:33 crc kubenswrapper[4986]: I1203 13:18:33.449799 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65965d6475-4jj4z"
Dec 03 13:18:33 crc kubenswrapper[4986]: I1203 13:18:33.490949 4986 patch_prober.go:28] interesting pod/machine-config-daemon-xggpj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 03 13:18:33 crc kubenswrapper[4986]: I1203 13:18:33.491022 4986 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 03 13:18:33 crc kubenswrapper[4986]: I1203 13:18:33.491074 4986 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xggpj"
Dec 03 13:18:33 crc kubenswrapper[4986]: I1203 13:18:33.491815 4986 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c5f89c3f3b886e65cc9822e8ac97f262b26f801c62b7d2e9ded96fcd903af71d"} pod="openshift-machine-config-operator/machine-config-daemon-xggpj" containerMessage="Container machine-config-daemon failed liveness probe, 
will be restarted" Dec 03 13:18:33 crc kubenswrapper[4986]: I1203 13:18:33.491885 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" containerID="cri-o://c5f89c3f3b886e65cc9822e8ac97f262b26f801c62b7d2e9ded96fcd903af71d" gracePeriod=600 Dec 03 13:18:33 crc kubenswrapper[4986]: I1203 13:18:33.528696 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db075f00-81d9-4306-b054-968590fecd46-combined-ca-bundle\") pod \"neutron-5dc7cf89f4-56jhq\" (UID: \"db075f00-81d9-4306-b054-968590fecd46\") " pod="openstack/neutron-5dc7cf89f4-56jhq" Dec 03 13:18:33 crc kubenswrapper[4986]: I1203 13:18:33.528806 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/db075f00-81d9-4306-b054-968590fecd46-config\") pod \"neutron-5dc7cf89f4-56jhq\" (UID: \"db075f00-81d9-4306-b054-968590fecd46\") " pod="openstack/neutron-5dc7cf89f4-56jhq" Dec 03 13:18:33 crc kubenswrapper[4986]: I1203 13:18:33.528892 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/db075f00-81d9-4306-b054-968590fecd46-httpd-config\") pod \"neutron-5dc7cf89f4-56jhq\" (UID: \"db075f00-81d9-4306-b054-968590fecd46\") " pod="openstack/neutron-5dc7cf89f4-56jhq" Dec 03 13:18:33 crc kubenswrapper[4986]: I1203 13:18:33.528934 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/db075f00-81d9-4306-b054-968590fecd46-ovndb-tls-certs\") pod \"neutron-5dc7cf89f4-56jhq\" (UID: \"db075f00-81d9-4306-b054-968590fecd46\") " pod="openstack/neutron-5dc7cf89f4-56jhq" Dec 03 13:18:33 crc kubenswrapper[4986]: I1203 13:18:33.528994 4986 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6sv6\" (UniqueName: \"kubernetes.io/projected/db075f00-81d9-4306-b054-968590fecd46-kube-api-access-b6sv6\") pod \"neutron-5dc7cf89f4-56jhq\" (UID: \"db075f00-81d9-4306-b054-968590fecd46\") " pod="openstack/neutron-5dc7cf89f4-56jhq" Dec 03 13:18:33 crc kubenswrapper[4986]: I1203 13:18:33.537498 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/db075f00-81d9-4306-b054-968590fecd46-httpd-config\") pod \"neutron-5dc7cf89f4-56jhq\" (UID: \"db075f00-81d9-4306-b054-968590fecd46\") " pod="openstack/neutron-5dc7cf89f4-56jhq" Dec 03 13:18:33 crc kubenswrapper[4986]: I1203 13:18:33.539328 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/db075f00-81d9-4306-b054-968590fecd46-config\") pod \"neutron-5dc7cf89f4-56jhq\" (UID: \"db075f00-81d9-4306-b054-968590fecd46\") " pod="openstack/neutron-5dc7cf89f4-56jhq" Dec 03 13:18:33 crc kubenswrapper[4986]: I1203 13:18:33.540019 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db075f00-81d9-4306-b054-968590fecd46-combined-ca-bundle\") pod \"neutron-5dc7cf89f4-56jhq\" (UID: \"db075f00-81d9-4306-b054-968590fecd46\") " pod="openstack/neutron-5dc7cf89f4-56jhq" Dec 03 13:18:33 crc kubenswrapper[4986]: I1203 13:18:33.541236 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/db075f00-81d9-4306-b054-968590fecd46-ovndb-tls-certs\") pod \"neutron-5dc7cf89f4-56jhq\" (UID: \"db075f00-81d9-4306-b054-968590fecd46\") " pod="openstack/neutron-5dc7cf89f4-56jhq" Dec 03 13:18:33 crc kubenswrapper[4986]: I1203 13:18:33.566115 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6sv6\" (UniqueName: 
\"kubernetes.io/projected/db075f00-81d9-4306-b054-968590fecd46-kube-api-access-b6sv6\") pod \"neutron-5dc7cf89f4-56jhq\" (UID: \"db075f00-81d9-4306-b054-968590fecd46\") " pod="openstack/neutron-5dc7cf89f4-56jhq" Dec 03 13:18:33 crc kubenswrapper[4986]: I1203 13:18:33.866788 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5dc7cf89f4-56jhq" Dec 03 13:18:34 crc kubenswrapper[4986]: I1203 13:18:34.121301 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65965d6475-4jj4z"] Dec 03 13:18:34 crc kubenswrapper[4986]: I1203 13:18:34.447731 4986 generic.go:334] "Generic (PLEG): container finished" podID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerID="c5f89c3f3b886e65cc9822e8ac97f262b26f801c62b7d2e9ded96fcd903af71d" exitCode=0 Dec 03 13:18:34 crc kubenswrapper[4986]: I1203 13:18:34.448046 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" event={"ID":"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c","Type":"ContainerDied","Data":"c5f89c3f3b886e65cc9822e8ac97f262b26f801c62b7d2e9ded96fcd903af71d"} Dec 03 13:18:34 crc kubenswrapper[4986]: I1203 13:18:34.448072 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" event={"ID":"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c","Type":"ContainerStarted","Data":"3ba053774b99510def9eb1096d5851d891d9961deee4a7966a8cfbf9b5fbf159"} Dec 03 13:18:34 crc kubenswrapper[4986]: I1203 13:18:34.448087 4986 scope.go:117] "RemoveContainer" containerID="ab3578e12fb223968075bdef0a7259f6a8a78f54ad544fd0e2d4be111538db67" Dec 03 13:18:34 crc kubenswrapper[4986]: I1203 13:18:34.495430 4986 generic.go:334] "Generic (PLEG): container finished" podID="e71caade-d3ac-45c2-8369-2c2a1d896370" containerID="dbd6cfcaed1a911968e997f6a395fdd03f3717d39b70c037e0fa75aadf9e7f0d" exitCode=0 Dec 03 13:18:34 crc kubenswrapper[4986]: I1203 13:18:34.495518 4986 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wrbjn" event={"ID":"e71caade-d3ac-45c2-8369-2c2a1d896370","Type":"ContainerDied","Data":"dbd6cfcaed1a911968e997f6a395fdd03f3717d39b70c037e0fa75aadf9e7f0d"} Dec 03 13:18:34 crc kubenswrapper[4986]: I1203 13:18:34.511125 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65965d6475-4jj4z" event={"ID":"0379ff65-3714-49fe-8f61-22fa65b88922","Type":"ContainerStarted","Data":"05be4e92340dc3294644f4f5c83cbad6e4d10644880ea592e4c33c2601ebf24b"} Dec 03 13:18:34 crc kubenswrapper[4986]: I1203 13:18:34.514660 4986 generic.go:334] "Generic (PLEG): container finished" podID="9ca26c94-4546-4d5d-8600-30257c724198" containerID="dd577315d509f9c14bcfc2c833af575a89f8978df77e1e1cbb64fa674057668c" exitCode=0 Dec 03 13:18:34 crc kubenswrapper[4986]: I1203 13:18:34.514685 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8v24s" event={"ID":"9ca26c94-4546-4d5d-8600-30257c724198","Type":"ContainerDied","Data":"dd577315d509f9c14bcfc2c833af575a89f8978df77e1e1cbb64fa674057668c"} Dec 03 13:18:34 crc kubenswrapper[4986]: I1203 13:18:34.622465 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5dc7cf89f4-56jhq"] Dec 03 13:18:35 crc kubenswrapper[4986]: I1203 13:18:35.528027 4986 generic.go:334] "Generic (PLEG): container finished" podID="0379ff65-3714-49fe-8f61-22fa65b88922" containerID="db3dc22832047931f86bf6218091734d04161dc8d7dde4f969f6a0483e6f489e" exitCode=0 Dec 03 13:18:35 crc kubenswrapper[4986]: I1203 13:18:35.528129 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65965d6475-4jj4z" event={"ID":"0379ff65-3714-49fe-8f61-22fa65b88922","Type":"ContainerDied","Data":"db3dc22832047931f86bf6218091734d04161dc8d7dde4f969f6a0483e6f489e"} Dec 03 13:18:35 crc kubenswrapper[4986]: I1203 13:18:35.530123 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-5dc7cf89f4-56jhq" event={"ID":"db075f00-81d9-4306-b054-968590fecd46","Type":"ContainerStarted","Data":"e30eb661b49ebecd2db2d2fbfe82cf1580347da1a929b604b8f10d6570737631"} Dec 03 13:18:35 crc kubenswrapper[4986]: I1203 13:18:35.530152 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5dc7cf89f4-56jhq" event={"ID":"db075f00-81d9-4306-b054-968590fecd46","Type":"ContainerStarted","Data":"1c60ba0b171ea799ca25d0b61f1cb785fac05bd48caf6803e642c78db42cc520"} Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.015655 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-59f49d79c7-qt4rk"] Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.021534 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-59f49d79c7-qt4rk" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.028189 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.028439 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.049868 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-59f49d79c7-qt4rk"] Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.102573 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crz4x\" (UniqueName: \"kubernetes.io/projected/5a350421-5f01-4d60-92b8-edc85e4ef3c5-kube-api-access-crz4x\") pod \"neutron-59f49d79c7-qt4rk\" (UID: \"5a350421-5f01-4d60-92b8-edc85e4ef3c5\") " pod="openstack/neutron-59f49d79c7-qt4rk" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.102641 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/5a350421-5f01-4d60-92b8-edc85e4ef3c5-ovndb-tls-certs\") pod \"neutron-59f49d79c7-qt4rk\" (UID: \"5a350421-5f01-4d60-92b8-edc85e4ef3c5\") " pod="openstack/neutron-59f49d79c7-qt4rk" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.102670 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5a350421-5f01-4d60-92b8-edc85e4ef3c5-config\") pod \"neutron-59f49d79c7-qt4rk\" (UID: \"5a350421-5f01-4d60-92b8-edc85e4ef3c5\") " pod="openstack/neutron-59f49d79c7-qt4rk" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.102700 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a350421-5f01-4d60-92b8-edc85e4ef3c5-internal-tls-certs\") pod \"neutron-59f49d79c7-qt4rk\" (UID: \"5a350421-5f01-4d60-92b8-edc85e4ef3c5\") " pod="openstack/neutron-59f49d79c7-qt4rk" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.102760 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5a350421-5f01-4d60-92b8-edc85e4ef3c5-httpd-config\") pod \"neutron-59f49d79c7-qt4rk\" (UID: \"5a350421-5f01-4d60-92b8-edc85e4ef3c5\") " pod="openstack/neutron-59f49d79c7-qt4rk" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.102846 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a350421-5f01-4d60-92b8-edc85e4ef3c5-public-tls-certs\") pod \"neutron-59f49d79c7-qt4rk\" (UID: \"5a350421-5f01-4d60-92b8-edc85e4ef3c5\") " pod="openstack/neutron-59f49d79c7-qt4rk" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.102901 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5a350421-5f01-4d60-92b8-edc85e4ef3c5-combined-ca-bundle\") pod \"neutron-59f49d79c7-qt4rk\" (UID: \"5a350421-5f01-4d60-92b8-edc85e4ef3c5\") " pod="openstack/neutron-59f49d79c7-qt4rk" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.209449 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a350421-5f01-4d60-92b8-edc85e4ef3c5-public-tls-certs\") pod \"neutron-59f49d79c7-qt4rk\" (UID: \"5a350421-5f01-4d60-92b8-edc85e4ef3c5\") " pod="openstack/neutron-59f49d79c7-qt4rk" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.209500 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a350421-5f01-4d60-92b8-edc85e4ef3c5-combined-ca-bundle\") pod \"neutron-59f49d79c7-qt4rk\" (UID: \"5a350421-5f01-4d60-92b8-edc85e4ef3c5\") " pod="openstack/neutron-59f49d79c7-qt4rk" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.209565 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crz4x\" (UniqueName: \"kubernetes.io/projected/5a350421-5f01-4d60-92b8-edc85e4ef3c5-kube-api-access-crz4x\") pod \"neutron-59f49d79c7-qt4rk\" (UID: \"5a350421-5f01-4d60-92b8-edc85e4ef3c5\") " pod="openstack/neutron-59f49d79c7-qt4rk" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.209588 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a350421-5f01-4d60-92b8-edc85e4ef3c5-ovndb-tls-certs\") pod \"neutron-59f49d79c7-qt4rk\" (UID: \"5a350421-5f01-4d60-92b8-edc85e4ef3c5\") " pod="openstack/neutron-59f49d79c7-qt4rk" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.209606 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5a350421-5f01-4d60-92b8-edc85e4ef3c5-config\") pod 
\"neutron-59f49d79c7-qt4rk\" (UID: \"5a350421-5f01-4d60-92b8-edc85e4ef3c5\") " pod="openstack/neutron-59f49d79c7-qt4rk" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.209624 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a350421-5f01-4d60-92b8-edc85e4ef3c5-internal-tls-certs\") pod \"neutron-59f49d79c7-qt4rk\" (UID: \"5a350421-5f01-4d60-92b8-edc85e4ef3c5\") " pod="openstack/neutron-59f49d79c7-qt4rk" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.209660 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5a350421-5f01-4d60-92b8-edc85e4ef3c5-httpd-config\") pod \"neutron-59f49d79c7-qt4rk\" (UID: \"5a350421-5f01-4d60-92b8-edc85e4ef3c5\") " pod="openstack/neutron-59f49d79c7-qt4rk" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.221928 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a350421-5f01-4d60-92b8-edc85e4ef3c5-combined-ca-bundle\") pod \"neutron-59f49d79c7-qt4rk\" (UID: \"5a350421-5f01-4d60-92b8-edc85e4ef3c5\") " pod="openstack/neutron-59f49d79c7-qt4rk" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.221953 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a350421-5f01-4d60-92b8-edc85e4ef3c5-public-tls-certs\") pod \"neutron-59f49d79c7-qt4rk\" (UID: \"5a350421-5f01-4d60-92b8-edc85e4ef3c5\") " pod="openstack/neutron-59f49d79c7-qt4rk" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.222404 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a350421-5f01-4d60-92b8-edc85e4ef3c5-ovndb-tls-certs\") pod \"neutron-59f49d79c7-qt4rk\" (UID: \"5a350421-5f01-4d60-92b8-edc85e4ef3c5\") " 
pod="openstack/neutron-59f49d79c7-qt4rk" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.222750 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5a350421-5f01-4d60-92b8-edc85e4ef3c5-httpd-config\") pod \"neutron-59f49d79c7-qt4rk\" (UID: \"5a350421-5f01-4d60-92b8-edc85e4ef3c5\") " pod="openstack/neutron-59f49d79c7-qt4rk" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.225459 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a350421-5f01-4d60-92b8-edc85e4ef3c5-internal-tls-certs\") pod \"neutron-59f49d79c7-qt4rk\" (UID: \"5a350421-5f01-4d60-92b8-edc85e4ef3c5\") " pod="openstack/neutron-59f49d79c7-qt4rk" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.226190 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/5a350421-5f01-4d60-92b8-edc85e4ef3c5-config\") pod \"neutron-59f49d79c7-qt4rk\" (UID: \"5a350421-5f01-4d60-92b8-edc85e4ef3c5\") " pod="openstack/neutron-59f49d79c7-qt4rk" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.233238 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crz4x\" (UniqueName: \"kubernetes.io/projected/5a350421-5f01-4d60-92b8-edc85e4ef3c5-kube-api-access-crz4x\") pod \"neutron-59f49d79c7-qt4rk\" (UID: \"5a350421-5f01-4d60-92b8-edc85e4ef3c5\") " pod="openstack/neutron-59f49d79c7-qt4rk" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.320743 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wrbjn" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.327155 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-8v24s" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.374682 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-59f49d79c7-qt4rk" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.413098 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e71caade-d3ac-45c2-8369-2c2a1d896370-scripts\") pod \"e71caade-d3ac-45c2-8369-2c2a1d896370\" (UID: \"e71caade-d3ac-45c2-8369-2c2a1d896370\") " Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.413177 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e71caade-d3ac-45c2-8369-2c2a1d896370-credential-keys\") pod \"e71caade-d3ac-45c2-8369-2c2a1d896370\" (UID: \"e71caade-d3ac-45c2-8369-2c2a1d896370\") " Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.413261 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47cfw\" (UniqueName: \"kubernetes.io/projected/9ca26c94-4546-4d5d-8600-30257c724198-kube-api-access-47cfw\") pod \"9ca26c94-4546-4d5d-8600-30257c724198\" (UID: \"9ca26c94-4546-4d5d-8600-30257c724198\") " Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.413314 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ttz7\" (UniqueName: \"kubernetes.io/projected/e71caade-d3ac-45c2-8369-2c2a1d896370-kube-api-access-5ttz7\") pod \"e71caade-d3ac-45c2-8369-2c2a1d896370\" (UID: \"e71caade-d3ac-45c2-8369-2c2a1d896370\") " Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.413355 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ca26c94-4546-4d5d-8600-30257c724198-combined-ca-bundle\") pod \"9ca26c94-4546-4d5d-8600-30257c724198\" (UID: 
\"9ca26c94-4546-4d5d-8600-30257c724198\") " Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.413396 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e71caade-d3ac-45c2-8369-2c2a1d896370-combined-ca-bundle\") pod \"e71caade-d3ac-45c2-8369-2c2a1d896370\" (UID: \"e71caade-d3ac-45c2-8369-2c2a1d896370\") " Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.413429 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ca26c94-4546-4d5d-8600-30257c724198-logs\") pod \"9ca26c94-4546-4d5d-8600-30257c724198\" (UID: \"9ca26c94-4546-4d5d-8600-30257c724198\") " Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.413470 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e71caade-d3ac-45c2-8369-2c2a1d896370-config-data\") pod \"e71caade-d3ac-45c2-8369-2c2a1d896370\" (UID: \"e71caade-d3ac-45c2-8369-2c2a1d896370\") " Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.413528 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ca26c94-4546-4d5d-8600-30257c724198-scripts\") pod \"9ca26c94-4546-4d5d-8600-30257c724198\" (UID: \"9ca26c94-4546-4d5d-8600-30257c724198\") " Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.413564 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ca26c94-4546-4d5d-8600-30257c724198-config-data\") pod \"9ca26c94-4546-4d5d-8600-30257c724198\" (UID: \"9ca26c94-4546-4d5d-8600-30257c724198\") " Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.413632 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/e71caade-d3ac-45c2-8369-2c2a1d896370-fernet-keys\") pod \"e71caade-d3ac-45c2-8369-2c2a1d896370\" (UID: \"e71caade-d3ac-45c2-8369-2c2a1d896370\") " Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.421640 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ca26c94-4546-4d5d-8600-30257c724198-logs" (OuterVolumeSpecName: "logs") pod "9ca26c94-4546-4d5d-8600-30257c724198" (UID: "9ca26c94-4546-4d5d-8600-30257c724198"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.427030 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ca26c94-4546-4d5d-8600-30257c724198-scripts" (OuterVolumeSpecName: "scripts") pod "9ca26c94-4546-4d5d-8600-30257c724198" (UID: "9ca26c94-4546-4d5d-8600-30257c724198"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.429478 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e71caade-d3ac-45c2-8369-2c2a1d896370-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "e71caade-d3ac-45c2-8369-2c2a1d896370" (UID: "e71caade-d3ac-45c2-8369-2c2a1d896370"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.429514 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e71caade-d3ac-45c2-8369-2c2a1d896370-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e71caade-d3ac-45c2-8369-2c2a1d896370" (UID: "e71caade-d3ac-45c2-8369-2c2a1d896370"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.429523 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e71caade-d3ac-45c2-8369-2c2a1d896370-kube-api-access-5ttz7" (OuterVolumeSpecName: "kube-api-access-5ttz7") pod "e71caade-d3ac-45c2-8369-2c2a1d896370" (UID: "e71caade-d3ac-45c2-8369-2c2a1d896370"). InnerVolumeSpecName "kube-api-access-5ttz7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.429605 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ca26c94-4546-4d5d-8600-30257c724198-kube-api-access-47cfw" (OuterVolumeSpecName: "kube-api-access-47cfw") pod "9ca26c94-4546-4d5d-8600-30257c724198" (UID: "9ca26c94-4546-4d5d-8600-30257c724198"). InnerVolumeSpecName "kube-api-access-47cfw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.429785 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e71caade-d3ac-45c2-8369-2c2a1d896370-scripts" (OuterVolumeSpecName: "scripts") pod "e71caade-d3ac-45c2-8369-2c2a1d896370" (UID: "e71caade-d3ac-45c2-8369-2c2a1d896370"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.450851 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ca26c94-4546-4d5d-8600-30257c724198-config-data" (OuterVolumeSpecName: "config-data") pod "9ca26c94-4546-4d5d-8600-30257c724198" (UID: "9ca26c94-4546-4d5d-8600-30257c724198"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.456437 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ca26c94-4546-4d5d-8600-30257c724198-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9ca26c94-4546-4d5d-8600-30257c724198" (UID: "9ca26c94-4546-4d5d-8600-30257c724198"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.470432 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e71caade-d3ac-45c2-8369-2c2a1d896370-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e71caade-d3ac-45c2-8369-2c2a1d896370" (UID: "e71caade-d3ac-45c2-8369-2c2a1d896370"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.477831 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e71caade-d3ac-45c2-8369-2c2a1d896370-config-data" (OuterVolumeSpecName: "config-data") pod "e71caade-d3ac-45c2-8369-2c2a1d896370" (UID: "e71caade-d3ac-45c2-8369-2c2a1d896370"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.515916 4986 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e71caade-d3ac-45c2-8369-2c2a1d896370-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.515954 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47cfw\" (UniqueName: \"kubernetes.io/projected/9ca26c94-4546-4d5d-8600-30257c724198-kube-api-access-47cfw\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.515967 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ttz7\" (UniqueName: \"kubernetes.io/projected/e71caade-d3ac-45c2-8369-2c2a1d896370-kube-api-access-5ttz7\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.515975 4986 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ca26c94-4546-4d5d-8600-30257c724198-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.515984 4986 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e71caade-d3ac-45c2-8369-2c2a1d896370-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.515992 4986 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ca26c94-4546-4d5d-8600-30257c724198-logs\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.516000 4986 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e71caade-d3ac-45c2-8369-2c2a1d896370-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.516008 
4986 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ca26c94-4546-4d5d-8600-30257c724198-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.516017 4986 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ca26c94-4546-4d5d-8600-30257c724198-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.516025 4986 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e71caade-d3ac-45c2-8369-2c2a1d896370-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.516033 4986 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e71caade-d3ac-45c2-8369-2c2a1d896370-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.551694 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5dc7cf89f4-56jhq" event={"ID":"db075f00-81d9-4306-b054-968590fecd46","Type":"ContainerStarted","Data":"ba96fbeff228dff458eed02a6d65b1dd391f097349bb77b73501b560c516fc52"} Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.552922 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5dc7cf89f4-56jhq" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.570616 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-8v24s" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.572156 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8v24s" event={"ID":"9ca26c94-4546-4d5d-8600-30257c724198","Type":"ContainerDied","Data":"34f83e5f4fba54c270f509beb4fd02081bec03ebc17a39ba64c58ff8a2d48e88"} Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.572206 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34f83e5f4fba54c270f509beb4fd02081bec03ebc17a39ba64c58ff8a2d48e88" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.589676 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wrbjn" event={"ID":"e71caade-d3ac-45c2-8369-2c2a1d896370","Type":"ContainerDied","Data":"9a8ddf760577ff7ab3ece19b74c8e166c8fd47c6539f81ad40d0ef94db4870ae"} Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.589720 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a8ddf760577ff7ab3ece19b74c8e166c8fd47c6539f81ad40d0ef94db4870ae" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.589784 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-wrbjn" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.602608 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5dc7cf89f4-56jhq" podStartSLOduration=3.602585049 podStartE2EDuration="3.602585049s" podCreationTimestamp="2025-12-03 13:18:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:18:36.578815557 +0000 UTC m=+1376.045246748" watchObservedRunningTime="2025-12-03 13:18:36.602585049 +0000 UTC m=+1376.069016240" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.671357 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-868bb78845-npjxs"] Dec 03 13:18:36 crc kubenswrapper[4986]: E1203 13:18:36.672346 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ca26c94-4546-4d5d-8600-30257c724198" containerName="placement-db-sync" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.672370 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ca26c94-4546-4d5d-8600-30257c724198" containerName="placement-db-sync" Dec 03 13:18:36 crc kubenswrapper[4986]: E1203 13:18:36.672428 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e71caade-d3ac-45c2-8369-2c2a1d896370" containerName="keystone-bootstrap" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.672437 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="e71caade-d3ac-45c2-8369-2c2a1d896370" containerName="keystone-bootstrap" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.673327 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ca26c94-4546-4d5d-8600-30257c724198" containerName="placement-db-sync" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.673361 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="e71caade-d3ac-45c2-8369-2c2a1d896370" 
containerName="keystone-bootstrap" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.674424 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-868bb78845-npjxs" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.685900 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.686529 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.689201 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-mtjcz" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.689419 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.689793 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.690235 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.704647 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-868bb78845-npjxs"] Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.765958 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db8cd8b46-ffl2g"] Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.774103 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db8cd8b46-ffl2g" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.774372 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db8cd8b46-ffl2g"] Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.789794 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.791558 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.792346 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.792481 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.792624 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-gbnrj" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.835687 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgmrg\" (UniqueName: \"kubernetes.io/projected/652fbb43-2a64-471b-9123-cd6734de8993-kube-api-access-kgmrg\") pod \"keystone-868bb78845-npjxs\" (UID: \"652fbb43-2a64-471b-9123-cd6734de8993\") " pod="openstack/keystone-868bb78845-npjxs" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.835736 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/652fbb43-2a64-471b-9123-cd6734de8993-config-data\") pod \"keystone-868bb78845-npjxs\" (UID: \"652fbb43-2a64-471b-9123-cd6734de8993\") " pod="openstack/keystone-868bb78845-npjxs" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.835753 4986 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/652fbb43-2a64-471b-9123-cd6734de8993-fernet-keys\") pod \"keystone-868bb78845-npjxs\" (UID: \"652fbb43-2a64-471b-9123-cd6734de8993\") " pod="openstack/keystone-868bb78845-npjxs" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.835897 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/652fbb43-2a64-471b-9123-cd6734de8993-combined-ca-bundle\") pod \"keystone-868bb78845-npjxs\" (UID: \"652fbb43-2a64-471b-9123-cd6734de8993\") " pod="openstack/keystone-868bb78845-npjxs" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.835953 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/652fbb43-2a64-471b-9123-cd6734de8993-credential-keys\") pod \"keystone-868bb78845-npjxs\" (UID: \"652fbb43-2a64-471b-9123-cd6734de8993\") " pod="openstack/keystone-868bb78845-npjxs" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.835991 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/652fbb43-2a64-471b-9123-cd6734de8993-internal-tls-certs\") pod \"keystone-868bb78845-npjxs\" (UID: \"652fbb43-2a64-471b-9123-cd6734de8993\") " pod="openstack/keystone-868bb78845-npjxs" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.836056 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/652fbb43-2a64-471b-9123-cd6734de8993-scripts\") pod \"keystone-868bb78845-npjxs\" (UID: \"652fbb43-2a64-471b-9123-cd6734de8993\") " pod="openstack/keystone-868bb78845-npjxs" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.836079 4986 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/652fbb43-2a64-471b-9123-cd6734de8993-public-tls-certs\") pod \"keystone-868bb78845-npjxs\" (UID: \"652fbb43-2a64-471b-9123-cd6734de8993\") " pod="openstack/keystone-868bb78845-npjxs" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.938079 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/652fbb43-2a64-471b-9123-cd6734de8993-combined-ca-bundle\") pod \"keystone-868bb78845-npjxs\" (UID: \"652fbb43-2a64-471b-9123-cd6734de8993\") " pod="openstack/keystone-868bb78845-npjxs" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.938135 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c622f06b-5b3c-45e4-890e-9f7ba2283ab3-combined-ca-bundle\") pod \"placement-db8cd8b46-ffl2g\" (UID: \"c622f06b-5b3c-45e4-890e-9f7ba2283ab3\") " pod="openstack/placement-db8cd8b46-ffl2g" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.938172 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/652fbb43-2a64-471b-9123-cd6734de8993-credential-keys\") pod \"keystone-868bb78845-npjxs\" (UID: \"652fbb43-2a64-471b-9123-cd6734de8993\") " pod="openstack/keystone-868bb78845-npjxs" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.938214 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/652fbb43-2a64-471b-9123-cd6734de8993-internal-tls-certs\") pod \"keystone-868bb78845-npjxs\" (UID: \"652fbb43-2a64-471b-9123-cd6734de8993\") " pod="openstack/keystone-868bb78845-npjxs" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.938240 4986 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c622f06b-5b3c-45e4-890e-9f7ba2283ab3-scripts\") pod \"placement-db8cd8b46-ffl2g\" (UID: \"c622f06b-5b3c-45e4-890e-9f7ba2283ab3\") " pod="openstack/placement-db8cd8b46-ffl2g" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.938262 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c622f06b-5b3c-45e4-890e-9f7ba2283ab3-config-data\") pod \"placement-db8cd8b46-ffl2g\" (UID: \"c622f06b-5b3c-45e4-890e-9f7ba2283ab3\") " pod="openstack/placement-db8cd8b46-ffl2g" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.938306 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/652fbb43-2a64-471b-9123-cd6734de8993-scripts\") pod \"keystone-868bb78845-npjxs\" (UID: \"652fbb43-2a64-471b-9123-cd6734de8993\") " pod="openstack/keystone-868bb78845-npjxs" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.938327 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/652fbb43-2a64-471b-9123-cd6734de8993-public-tls-certs\") pod \"keystone-868bb78845-npjxs\" (UID: \"652fbb43-2a64-471b-9123-cd6734de8993\") " pod="openstack/keystone-868bb78845-npjxs" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.938358 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c622f06b-5b3c-45e4-890e-9f7ba2283ab3-public-tls-certs\") pod \"placement-db8cd8b46-ffl2g\" (UID: \"c622f06b-5b3c-45e4-890e-9f7ba2283ab3\") " pod="openstack/placement-db8cd8b46-ffl2g" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.938375 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-kgmrg\" (UniqueName: \"kubernetes.io/projected/652fbb43-2a64-471b-9123-cd6734de8993-kube-api-access-kgmrg\") pod \"keystone-868bb78845-npjxs\" (UID: \"652fbb43-2a64-471b-9123-cd6734de8993\") " pod="openstack/keystone-868bb78845-npjxs" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.938396 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/652fbb43-2a64-471b-9123-cd6734de8993-config-data\") pod \"keystone-868bb78845-npjxs\" (UID: \"652fbb43-2a64-471b-9123-cd6734de8993\") " pod="openstack/keystone-868bb78845-npjxs" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.938445 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/652fbb43-2a64-471b-9123-cd6734de8993-fernet-keys\") pod \"keystone-868bb78845-npjxs\" (UID: \"652fbb43-2a64-471b-9123-cd6734de8993\") " pod="openstack/keystone-868bb78845-npjxs" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.938465 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c622f06b-5b3c-45e4-890e-9f7ba2283ab3-logs\") pod \"placement-db8cd8b46-ffl2g\" (UID: \"c622f06b-5b3c-45e4-890e-9f7ba2283ab3\") " pod="openstack/placement-db8cd8b46-ffl2g" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.938506 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c622f06b-5b3c-45e4-890e-9f7ba2283ab3-internal-tls-certs\") pod \"placement-db8cd8b46-ffl2g\" (UID: \"c622f06b-5b3c-45e4-890e-9f7ba2283ab3\") " pod="openstack/placement-db8cd8b46-ffl2g" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.938526 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g7wr\" (UniqueName: 
\"kubernetes.io/projected/c622f06b-5b3c-45e4-890e-9f7ba2283ab3-kube-api-access-4g7wr\") pod \"placement-db8cd8b46-ffl2g\" (UID: \"c622f06b-5b3c-45e4-890e-9f7ba2283ab3\") " pod="openstack/placement-db8cd8b46-ffl2g" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.943843 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/652fbb43-2a64-471b-9123-cd6734de8993-credential-keys\") pod \"keystone-868bb78845-npjxs\" (UID: \"652fbb43-2a64-471b-9123-cd6734de8993\") " pod="openstack/keystone-868bb78845-npjxs" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.945019 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/652fbb43-2a64-471b-9123-cd6734de8993-scripts\") pod \"keystone-868bb78845-npjxs\" (UID: \"652fbb43-2a64-471b-9123-cd6734de8993\") " pod="openstack/keystone-868bb78845-npjxs" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.945935 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/652fbb43-2a64-471b-9123-cd6734de8993-combined-ca-bundle\") pod \"keystone-868bb78845-npjxs\" (UID: \"652fbb43-2a64-471b-9123-cd6734de8993\") " pod="openstack/keystone-868bb78845-npjxs" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.945967 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/652fbb43-2a64-471b-9123-cd6734de8993-config-data\") pod \"keystone-868bb78845-npjxs\" (UID: \"652fbb43-2a64-471b-9123-cd6734de8993\") " pod="openstack/keystone-868bb78845-npjxs" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.946395 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/652fbb43-2a64-471b-9123-cd6734de8993-internal-tls-certs\") pod \"keystone-868bb78845-npjxs\" (UID: 
\"652fbb43-2a64-471b-9123-cd6734de8993\") " pod="openstack/keystone-868bb78845-npjxs" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.947963 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/652fbb43-2a64-471b-9123-cd6734de8993-fernet-keys\") pod \"keystone-868bb78845-npjxs\" (UID: \"652fbb43-2a64-471b-9123-cd6734de8993\") " pod="openstack/keystone-868bb78845-npjxs" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.948668 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/652fbb43-2a64-471b-9123-cd6734de8993-public-tls-certs\") pod \"keystone-868bb78845-npjxs\" (UID: \"652fbb43-2a64-471b-9123-cd6734de8993\") " pod="openstack/keystone-868bb78845-npjxs" Dec 03 13:18:36 crc kubenswrapper[4986]: I1203 13:18:36.967353 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgmrg\" (UniqueName: \"kubernetes.io/projected/652fbb43-2a64-471b-9123-cd6734de8993-kube-api-access-kgmrg\") pod \"keystone-868bb78845-npjxs\" (UID: \"652fbb43-2a64-471b-9123-cd6734de8993\") " pod="openstack/keystone-868bb78845-npjxs" Dec 03 13:18:37 crc kubenswrapper[4986]: I1203 13:18:37.028270 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-868bb78845-npjxs" Dec 03 13:18:37 crc kubenswrapper[4986]: I1203 13:18:37.044245 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c622f06b-5b3c-45e4-890e-9f7ba2283ab3-internal-tls-certs\") pod \"placement-db8cd8b46-ffl2g\" (UID: \"c622f06b-5b3c-45e4-890e-9f7ba2283ab3\") " pod="openstack/placement-db8cd8b46-ffl2g" Dec 03 13:18:37 crc kubenswrapper[4986]: I1203 13:18:37.044307 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4g7wr\" (UniqueName: \"kubernetes.io/projected/c622f06b-5b3c-45e4-890e-9f7ba2283ab3-kube-api-access-4g7wr\") pod \"placement-db8cd8b46-ffl2g\" (UID: \"c622f06b-5b3c-45e4-890e-9f7ba2283ab3\") " pod="openstack/placement-db8cd8b46-ffl2g" Dec 03 13:18:37 crc kubenswrapper[4986]: I1203 13:18:37.044396 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c622f06b-5b3c-45e4-890e-9f7ba2283ab3-combined-ca-bundle\") pod \"placement-db8cd8b46-ffl2g\" (UID: \"c622f06b-5b3c-45e4-890e-9f7ba2283ab3\") " pod="openstack/placement-db8cd8b46-ffl2g" Dec 03 13:18:37 crc kubenswrapper[4986]: I1203 13:18:37.044450 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c622f06b-5b3c-45e4-890e-9f7ba2283ab3-scripts\") pod \"placement-db8cd8b46-ffl2g\" (UID: \"c622f06b-5b3c-45e4-890e-9f7ba2283ab3\") " pod="openstack/placement-db8cd8b46-ffl2g" Dec 03 13:18:37 crc kubenswrapper[4986]: I1203 13:18:37.044466 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c622f06b-5b3c-45e4-890e-9f7ba2283ab3-config-data\") pod \"placement-db8cd8b46-ffl2g\" (UID: \"c622f06b-5b3c-45e4-890e-9f7ba2283ab3\") " pod="openstack/placement-db8cd8b46-ffl2g" Dec 03 13:18:37 crc 
kubenswrapper[4986]: I1203 13:18:37.044517 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c622f06b-5b3c-45e4-890e-9f7ba2283ab3-public-tls-certs\") pod \"placement-db8cd8b46-ffl2g\" (UID: \"c622f06b-5b3c-45e4-890e-9f7ba2283ab3\") " pod="openstack/placement-db8cd8b46-ffl2g" Dec 03 13:18:37 crc kubenswrapper[4986]: I1203 13:18:37.044548 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c622f06b-5b3c-45e4-890e-9f7ba2283ab3-logs\") pod \"placement-db8cd8b46-ffl2g\" (UID: \"c622f06b-5b3c-45e4-890e-9f7ba2283ab3\") " pod="openstack/placement-db8cd8b46-ffl2g" Dec 03 13:18:37 crc kubenswrapper[4986]: I1203 13:18:37.045121 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c622f06b-5b3c-45e4-890e-9f7ba2283ab3-logs\") pod \"placement-db8cd8b46-ffl2g\" (UID: \"c622f06b-5b3c-45e4-890e-9f7ba2283ab3\") " pod="openstack/placement-db8cd8b46-ffl2g" Dec 03 13:18:37 crc kubenswrapper[4986]: I1203 13:18:37.048203 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c622f06b-5b3c-45e4-890e-9f7ba2283ab3-internal-tls-certs\") pod \"placement-db8cd8b46-ffl2g\" (UID: \"c622f06b-5b3c-45e4-890e-9f7ba2283ab3\") " pod="openstack/placement-db8cd8b46-ffl2g" Dec 03 13:18:37 crc kubenswrapper[4986]: I1203 13:18:37.065000 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c622f06b-5b3c-45e4-890e-9f7ba2283ab3-combined-ca-bundle\") pod \"placement-db8cd8b46-ffl2g\" (UID: \"c622f06b-5b3c-45e4-890e-9f7ba2283ab3\") " pod="openstack/placement-db8cd8b46-ffl2g" Dec 03 13:18:37 crc kubenswrapper[4986]: I1203 13:18:37.070540 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/c622f06b-5b3c-45e4-890e-9f7ba2283ab3-config-data\") pod \"placement-db8cd8b46-ffl2g\" (UID: \"c622f06b-5b3c-45e4-890e-9f7ba2283ab3\") " pod="openstack/placement-db8cd8b46-ffl2g" Dec 03 13:18:37 crc kubenswrapper[4986]: I1203 13:18:37.070667 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c622f06b-5b3c-45e4-890e-9f7ba2283ab3-scripts\") pod \"placement-db8cd8b46-ffl2g\" (UID: \"c622f06b-5b3c-45e4-890e-9f7ba2283ab3\") " pod="openstack/placement-db8cd8b46-ffl2g" Dec 03 13:18:37 crc kubenswrapper[4986]: I1203 13:18:37.071104 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c622f06b-5b3c-45e4-890e-9f7ba2283ab3-public-tls-certs\") pod \"placement-db8cd8b46-ffl2g\" (UID: \"c622f06b-5b3c-45e4-890e-9f7ba2283ab3\") " pod="openstack/placement-db8cd8b46-ffl2g" Dec 03 13:18:37 crc kubenswrapper[4986]: I1203 13:18:37.072910 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4g7wr\" (UniqueName: \"kubernetes.io/projected/c622f06b-5b3c-45e4-890e-9f7ba2283ab3-kube-api-access-4g7wr\") pod \"placement-db8cd8b46-ffl2g\" (UID: \"c622f06b-5b3c-45e4-890e-9f7ba2283ab3\") " pod="openstack/placement-db8cd8b46-ffl2g" Dec 03 13:18:37 crc kubenswrapper[4986]: I1203 13:18:37.116236 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db8cd8b46-ffl2g" Dec 03 13:18:38 crc kubenswrapper[4986]: I1203 13:18:38.624734 4986 generic.go:334] "Generic (PLEG): container finished" podID="d560e389-543d-4341-a450-e6cb0f2a3057" containerID="83c8b6807b6a9e6d3bfae00be0405ade99cc8d834fdaef0b74ab5b47706fa3a4" exitCode=0 Dec 03 13:18:38 crc kubenswrapper[4986]: I1203 13:18:38.624830 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2k7gk" event={"ID":"d560e389-543d-4341-a450-e6cb0f2a3057","Type":"ContainerDied","Data":"83c8b6807b6a9e6d3bfae00be0405ade99cc8d834fdaef0b74ab5b47706fa3a4"} Dec 03 13:18:39 crc kubenswrapper[4986]: I1203 13:18:39.121673 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-cc774c568-phcpp" Dec 03 13:18:39 crc kubenswrapper[4986]: I1203 13:18:39.121725 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-cc774c568-phcpp" Dec 03 13:18:39 crc kubenswrapper[4986]: I1203 13:18:39.124263 4986 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-cc774c568-phcpp" podUID="cee5d9b6-d11e-4bad-b013-196a4f401404" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.141:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.141:8443: connect: connection refused" Dec 03 13:18:39 crc kubenswrapper[4986]: I1203 13:18:39.207451 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7797f969d4-6c2wn" Dec 03 13:18:39 crc kubenswrapper[4986]: I1203 13:18:39.207508 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7797f969d4-6c2wn" Dec 03 13:18:39 crc kubenswrapper[4986]: I1203 13:18:39.210729 4986 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7797f969d4-6c2wn" podUID="4d67e23b-bda4-42d5-81b6-be58c643861d" containerName="horizon" probeResult="failure" output="Get 
\"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.142:8443: connect: connection refused" Dec 03 13:18:40 crc kubenswrapper[4986]: I1203 13:18:40.620794 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-2k7gk" Dec 03 13:18:40 crc kubenswrapper[4986]: I1203 13:18:40.676854 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2k7gk" event={"ID":"d560e389-543d-4341-a450-e6cb0f2a3057","Type":"ContainerDied","Data":"23bbf32e12786750f1d781ccd316c1cdfffc74e4889c112a7e3227663ac4706b"} Dec 03 13:18:40 crc kubenswrapper[4986]: I1203 13:18:40.676894 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23bbf32e12786750f1d781ccd316c1cdfffc74e4889c112a7e3227663ac4706b" Dec 03 13:18:40 crc kubenswrapper[4986]: I1203 13:18:40.676942 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-2k7gk" Dec 03 13:18:40 crc kubenswrapper[4986]: I1203 13:18:40.756446 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8r7v6\" (UniqueName: \"kubernetes.io/projected/d560e389-543d-4341-a450-e6cb0f2a3057-kube-api-access-8r7v6\") pod \"d560e389-543d-4341-a450-e6cb0f2a3057\" (UID: \"d560e389-543d-4341-a450-e6cb0f2a3057\") " Dec 03 13:18:40 crc kubenswrapper[4986]: I1203 13:18:40.756510 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d560e389-543d-4341-a450-e6cb0f2a3057-combined-ca-bundle\") pod \"d560e389-543d-4341-a450-e6cb0f2a3057\" (UID: \"d560e389-543d-4341-a450-e6cb0f2a3057\") " Dec 03 13:18:40 crc kubenswrapper[4986]: I1203 13:18:40.756543 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/d560e389-543d-4341-a450-e6cb0f2a3057-db-sync-config-data\") pod \"d560e389-543d-4341-a450-e6cb0f2a3057\" (UID: \"d560e389-543d-4341-a450-e6cb0f2a3057\") "
Dec 03 13:18:40 crc kubenswrapper[4986]: I1203 13:18:40.763534 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d560e389-543d-4341-a450-e6cb0f2a3057-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d560e389-543d-4341-a450-e6cb0f2a3057" (UID: "d560e389-543d-4341-a450-e6cb0f2a3057"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 13:18:40 crc kubenswrapper[4986]: I1203 13:18:40.767552 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d560e389-543d-4341-a450-e6cb0f2a3057-kube-api-access-8r7v6" (OuterVolumeSpecName: "kube-api-access-8r7v6") pod "d560e389-543d-4341-a450-e6cb0f2a3057" (UID: "d560e389-543d-4341-a450-e6cb0f2a3057"). InnerVolumeSpecName "kube-api-access-8r7v6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 13:18:40 crc kubenswrapper[4986]: I1203 13:18:40.826537 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d560e389-543d-4341-a450-e6cb0f2a3057-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d560e389-543d-4341-a450-e6cb0f2a3057" (UID: "d560e389-543d-4341-a450-e6cb0f2a3057"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 13:18:40 crc kubenswrapper[4986]: I1203 13:18:40.858180 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8r7v6\" (UniqueName: \"kubernetes.io/projected/d560e389-543d-4341-a450-e6cb0f2a3057-kube-api-access-8r7v6\") on node \"crc\" DevicePath \"\""
Dec 03 13:18:40 crc kubenswrapper[4986]: I1203 13:18:40.858216 4986 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d560e389-543d-4341-a450-e6cb0f2a3057-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 03 13:18:40 crc kubenswrapper[4986]: I1203 13:18:40.858227 4986 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d560e389-543d-4341-a450-e6cb0f2a3057-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Dec 03 13:18:40 crc kubenswrapper[4986]: I1203 13:18:40.938744 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-747896b766-b8kzr"]
Dec 03 13:18:40 crc kubenswrapper[4986]: E1203 13:18:40.939103 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d560e389-543d-4341-a450-e6cb0f2a3057" containerName="barbican-db-sync"
Dec 03 13:18:40 crc kubenswrapper[4986]: I1203 13:18:40.939120 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="d560e389-543d-4341-a450-e6cb0f2a3057" containerName="barbican-db-sync"
Dec 03 13:18:40 crc kubenswrapper[4986]: I1203 13:18:40.939326 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="d560e389-543d-4341-a450-e6cb0f2a3057" containerName="barbican-db-sync"
Dec 03 13:18:40 crc kubenswrapper[4986]: I1203 13:18:40.941711 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-747896b766-b8kzr"
Dec 03 13:18:40 crc kubenswrapper[4986]: I1203 13:18:40.944569 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.001132 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-787dc78df5-jtcv6"]
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.002878 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-787dc78df5-jtcv6"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.012842 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.014809 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-747896b766-b8kzr"]
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.034349 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-787dc78df5-jtcv6"]
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.059677 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65965d6475-4jj4z"]
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.063515 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebfa5ced-0a56-44db-ba24-d5f663d65920-logs\") pod \"barbican-keystone-listener-747896b766-b8kzr\" (UID: \"ebfa5ced-0a56-44db-ba24-d5f663d65920\") " pod="openstack/barbican-keystone-listener-747896b766-b8kzr"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.063690 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdhtj\" (UniqueName: \"kubernetes.io/projected/ebfa5ced-0a56-44db-ba24-d5f663d65920-kube-api-access-bdhtj\") pod \"barbican-keystone-listener-747896b766-b8kzr\" (UID: \"ebfa5ced-0a56-44db-ba24-d5f663d65920\") " pod="openstack/barbican-keystone-listener-747896b766-b8kzr"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.063721 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ebfa5ced-0a56-44db-ba24-d5f663d65920-config-data-custom\") pod \"barbican-keystone-listener-747896b766-b8kzr\" (UID: \"ebfa5ced-0a56-44db-ba24-d5f663d65920\") " pod="openstack/barbican-keystone-listener-747896b766-b8kzr"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.063870 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebfa5ced-0a56-44db-ba24-d5f663d65920-config-data\") pod \"barbican-keystone-listener-747896b766-b8kzr\" (UID: \"ebfa5ced-0a56-44db-ba24-d5f663d65920\") " pod="openstack/barbican-keystone-listener-747896b766-b8kzr"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.063974 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebfa5ced-0a56-44db-ba24-d5f663d65920-combined-ca-bundle\") pod \"barbican-keystone-listener-747896b766-b8kzr\" (UID: \"ebfa5ced-0a56-44db-ba24-d5f663d65920\") " pod="openstack/barbican-keystone-listener-747896b766-b8kzr"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.082412 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-789c5c5cb7-4722d"]
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.086438 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-789c5c5cb7-4722d"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.104394 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-789c5c5cb7-4722d"]
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.122448 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-868bb78845-npjxs"]
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.136366 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db8cd8b46-ffl2g"]
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.149066 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-59f49d79c7-qt4rk"]
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.165963 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17f8ed93-1afd-41c1-a52b-addefeb38ab0-logs\") pod \"barbican-worker-787dc78df5-jtcv6\" (UID: \"17f8ed93-1afd-41c1-a52b-addefeb38ab0\") " pod="openstack/barbican-worker-787dc78df5-jtcv6"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.166025 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdhtj\" (UniqueName: \"kubernetes.io/projected/ebfa5ced-0a56-44db-ba24-d5f663d65920-kube-api-access-bdhtj\") pod \"barbican-keystone-listener-747896b766-b8kzr\" (UID: \"ebfa5ced-0a56-44db-ba24-d5f663d65920\") " pod="openstack/barbican-keystone-listener-747896b766-b8kzr"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.166048 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ebfa5ced-0a56-44db-ba24-d5f663d65920-config-data-custom\") pod \"barbican-keystone-listener-747896b766-b8kzr\" (UID: \"ebfa5ced-0a56-44db-ba24-d5f663d65920\") " pod="openstack/barbican-keystone-listener-747896b766-b8kzr"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.166094 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17f8ed93-1afd-41c1-a52b-addefeb38ab0-config-data-custom\") pod \"barbican-worker-787dc78df5-jtcv6\" (UID: \"17f8ed93-1afd-41c1-a52b-addefeb38ab0\") " pod="openstack/barbican-worker-787dc78df5-jtcv6"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.166117 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebfa5ced-0a56-44db-ba24-d5f663d65920-config-data\") pod \"barbican-keystone-listener-747896b766-b8kzr\" (UID: \"ebfa5ced-0a56-44db-ba24-d5f663d65920\") " pod="openstack/barbican-keystone-listener-747896b766-b8kzr"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.166140 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17f8ed93-1afd-41c1-a52b-addefeb38ab0-combined-ca-bundle\") pod \"barbican-worker-787dc78df5-jtcv6\" (UID: \"17f8ed93-1afd-41c1-a52b-addefeb38ab0\") " pod="openstack/barbican-worker-787dc78df5-jtcv6"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.166163 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjnkw\" (UniqueName: \"kubernetes.io/projected/17f8ed93-1afd-41c1-a52b-addefeb38ab0-kube-api-access-tjnkw\") pod \"barbican-worker-787dc78df5-jtcv6\" (UID: \"17f8ed93-1afd-41c1-a52b-addefeb38ab0\") " pod="openstack/barbican-worker-787dc78df5-jtcv6"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.166187 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebfa5ced-0a56-44db-ba24-d5f663d65920-combined-ca-bundle\") pod \"barbican-keystone-listener-747896b766-b8kzr\" (UID: \"ebfa5ced-0a56-44db-ba24-d5f663d65920\") " pod="openstack/barbican-keystone-listener-747896b766-b8kzr"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.166216 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17f8ed93-1afd-41c1-a52b-addefeb38ab0-config-data\") pod \"barbican-worker-787dc78df5-jtcv6\" (UID: \"17f8ed93-1afd-41c1-a52b-addefeb38ab0\") " pod="openstack/barbican-worker-787dc78df5-jtcv6"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.166232 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebfa5ced-0a56-44db-ba24-d5f663d65920-logs\") pod \"barbican-keystone-listener-747896b766-b8kzr\" (UID: \"ebfa5ced-0a56-44db-ba24-d5f663d65920\") " pod="openstack/barbican-keystone-listener-747896b766-b8kzr"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.166627 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebfa5ced-0a56-44db-ba24-d5f663d65920-logs\") pod \"barbican-keystone-listener-747896b766-b8kzr\" (UID: \"ebfa5ced-0a56-44db-ba24-d5f663d65920\") " pod="openstack/barbican-keystone-listener-747896b766-b8kzr"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.185061 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebfa5ced-0a56-44db-ba24-d5f663d65920-combined-ca-bundle\") pod \"barbican-keystone-listener-747896b766-b8kzr\" (UID: \"ebfa5ced-0a56-44db-ba24-d5f663d65920\") " pod="openstack/barbican-keystone-listener-747896b766-b8kzr"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.185124 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7bc8976f54-hn8pf"]
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.187245 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7bc8976f54-hn8pf"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.189149 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebfa5ced-0a56-44db-ba24-d5f663d65920-config-data\") pod \"barbican-keystone-listener-747896b766-b8kzr\" (UID: \"ebfa5ced-0a56-44db-ba24-d5f663d65920\") " pod="openstack/barbican-keystone-listener-747896b766-b8kzr"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.195322 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ebfa5ced-0a56-44db-ba24-d5f663d65920-config-data-custom\") pod \"barbican-keystone-listener-747896b766-b8kzr\" (UID: \"ebfa5ced-0a56-44db-ba24-d5f663d65920\") " pod="openstack/barbican-keystone-listener-747896b766-b8kzr"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.195625 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.196268 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7bc8976f54-hn8pf"]
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.249153 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdhtj\" (UniqueName: \"kubernetes.io/projected/ebfa5ced-0a56-44db-ba24-d5f663d65920-kube-api-access-bdhtj\") pod \"barbican-keystone-listener-747896b766-b8kzr\" (UID: \"ebfa5ced-0a56-44db-ba24-d5f663d65920\") " pod="openstack/barbican-keystone-listener-747896b766-b8kzr"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.272189 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28zkr\" (UniqueName: \"kubernetes.io/projected/b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28-kube-api-access-28zkr\") pod \"dnsmasq-dns-789c5c5cb7-4722d\" (UID: \"b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28\") " pod="openstack/dnsmasq-dns-789c5c5cb7-4722d"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.272259 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17f8ed93-1afd-41c1-a52b-addefeb38ab0-config-data-custom\") pod \"barbican-worker-787dc78df5-jtcv6\" (UID: \"17f8ed93-1afd-41c1-a52b-addefeb38ab0\") " pod="openstack/barbican-worker-787dc78df5-jtcv6"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.272308 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17f8ed93-1afd-41c1-a52b-addefeb38ab0-combined-ca-bundle\") pod \"barbican-worker-787dc78df5-jtcv6\" (UID: \"17f8ed93-1afd-41c1-a52b-addefeb38ab0\") " pod="openstack/barbican-worker-787dc78df5-jtcv6"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.272332 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjnkw\" (UniqueName: \"kubernetes.io/projected/17f8ed93-1afd-41c1-a52b-addefeb38ab0-kube-api-access-tjnkw\") pod \"barbican-worker-787dc78df5-jtcv6\" (UID: \"17f8ed93-1afd-41c1-a52b-addefeb38ab0\") " pod="openstack/barbican-worker-787dc78df5-jtcv6"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.272350 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28-ovsdbserver-nb\") pod \"dnsmasq-dns-789c5c5cb7-4722d\" (UID: \"b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28\") " pod="openstack/dnsmasq-dns-789c5c5cb7-4722d"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.272377 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28-config\") pod \"dnsmasq-dns-789c5c5cb7-4722d\" (UID: \"b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28\") " pod="openstack/dnsmasq-dns-789c5c5cb7-4722d"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.272392 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28-dns-svc\") pod \"dnsmasq-dns-789c5c5cb7-4722d\" (UID: \"b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28\") " pod="openstack/dnsmasq-dns-789c5c5cb7-4722d"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.272417 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28-dns-swift-storage-0\") pod \"dnsmasq-dns-789c5c5cb7-4722d\" (UID: \"b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28\") " pod="openstack/dnsmasq-dns-789c5c5cb7-4722d"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.272439 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17f8ed93-1afd-41c1-a52b-addefeb38ab0-config-data\") pod \"barbican-worker-787dc78df5-jtcv6\" (UID: \"17f8ed93-1afd-41c1-a52b-addefeb38ab0\") " pod="openstack/barbican-worker-787dc78df5-jtcv6"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.272464 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28-ovsdbserver-sb\") pod \"dnsmasq-dns-789c5c5cb7-4722d\" (UID: \"b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28\") " pod="openstack/dnsmasq-dns-789c5c5cb7-4722d"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.272495 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17f8ed93-1afd-41c1-a52b-addefeb38ab0-logs\") pod \"barbican-worker-787dc78df5-jtcv6\" (UID: \"17f8ed93-1afd-41c1-a52b-addefeb38ab0\") " pod="openstack/barbican-worker-787dc78df5-jtcv6"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.272925 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17f8ed93-1afd-41c1-a52b-addefeb38ab0-logs\") pod \"barbican-worker-787dc78df5-jtcv6\" (UID: \"17f8ed93-1afd-41c1-a52b-addefeb38ab0\") " pod="openstack/barbican-worker-787dc78df5-jtcv6"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.282126 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-747896b766-b8kzr"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.295870 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17f8ed93-1afd-41c1-a52b-addefeb38ab0-config-data-custom\") pod \"barbican-worker-787dc78df5-jtcv6\" (UID: \"17f8ed93-1afd-41c1-a52b-addefeb38ab0\") " pod="openstack/barbican-worker-787dc78df5-jtcv6"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.302593 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17f8ed93-1afd-41c1-a52b-addefeb38ab0-config-data\") pod \"barbican-worker-787dc78df5-jtcv6\" (UID: \"17f8ed93-1afd-41c1-a52b-addefeb38ab0\") " pod="openstack/barbican-worker-787dc78df5-jtcv6"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.306674 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjnkw\" (UniqueName: \"kubernetes.io/projected/17f8ed93-1afd-41c1-a52b-addefeb38ab0-kube-api-access-tjnkw\") pod \"barbican-worker-787dc78df5-jtcv6\" (UID: \"17f8ed93-1afd-41c1-a52b-addefeb38ab0\") " pod="openstack/barbican-worker-787dc78df5-jtcv6"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.312661 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17f8ed93-1afd-41c1-a52b-addefeb38ab0-combined-ca-bundle\") pod \"barbican-worker-787dc78df5-jtcv6\" (UID: \"17f8ed93-1afd-41c1-a52b-addefeb38ab0\") " pod="openstack/barbican-worker-787dc78df5-jtcv6"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.357066 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-787dc78df5-jtcv6"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.376674 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28-ovsdbserver-nb\") pod \"dnsmasq-dns-789c5c5cb7-4722d\" (UID: \"b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28\") " pod="openstack/dnsmasq-dns-789c5c5cb7-4722d"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.376748 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28-config\") pod \"dnsmasq-dns-789c5c5cb7-4722d\" (UID: \"b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28\") " pod="openstack/dnsmasq-dns-789c5c5cb7-4722d"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.376776 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28-dns-svc\") pod \"dnsmasq-dns-789c5c5cb7-4722d\" (UID: \"b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28\") " pod="openstack/dnsmasq-dns-789c5c5cb7-4722d"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.376804 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d708ae2-3c0a-4134-97fe-36270231010f-logs\") pod \"barbican-api-7bc8976f54-hn8pf\" (UID: \"4d708ae2-3c0a-4134-97fe-36270231010f\") " pod="openstack/barbican-api-7bc8976f54-hn8pf"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.376835 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4d708ae2-3c0a-4134-97fe-36270231010f-config-data-custom\") pod \"barbican-api-7bc8976f54-hn8pf\" (UID: \"4d708ae2-3c0a-4134-97fe-36270231010f\") " pod="openstack/barbican-api-7bc8976f54-hn8pf"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.376864 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28-dns-swift-storage-0\") pod \"dnsmasq-dns-789c5c5cb7-4722d\" (UID: \"b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28\") " pod="openstack/dnsmasq-dns-789c5c5cb7-4722d"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.376907 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28-ovsdbserver-sb\") pod \"dnsmasq-dns-789c5c5cb7-4722d\" (UID: \"b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28\") " pod="openstack/dnsmasq-dns-789c5c5cb7-4722d"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.376931 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d708ae2-3c0a-4134-97fe-36270231010f-config-data\") pod \"barbican-api-7bc8976f54-hn8pf\" (UID: \"4d708ae2-3c0a-4134-97fe-36270231010f\") " pod="openstack/barbican-api-7bc8976f54-hn8pf"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.377000 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p2jz\" (UniqueName: \"kubernetes.io/projected/4d708ae2-3c0a-4134-97fe-36270231010f-kube-api-access-9p2jz\") pod \"barbican-api-7bc8976f54-hn8pf\" (UID: \"4d708ae2-3c0a-4134-97fe-36270231010f\") " pod="openstack/barbican-api-7bc8976f54-hn8pf"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.377067 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28zkr\" (UniqueName: \"kubernetes.io/projected/b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28-kube-api-access-28zkr\") pod \"dnsmasq-dns-789c5c5cb7-4722d\" (UID: \"b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28\") " pod="openstack/dnsmasq-dns-789c5c5cb7-4722d"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.377140 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d708ae2-3c0a-4134-97fe-36270231010f-combined-ca-bundle\") pod \"barbican-api-7bc8976f54-hn8pf\" (UID: \"4d708ae2-3c0a-4134-97fe-36270231010f\") " pod="openstack/barbican-api-7bc8976f54-hn8pf"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.378256 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28-ovsdbserver-nb\") pod \"dnsmasq-dns-789c5c5cb7-4722d\" (UID: \"b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28\") " pod="openstack/dnsmasq-dns-789c5c5cb7-4722d"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.379074 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28-config\") pod \"dnsmasq-dns-789c5c5cb7-4722d\" (UID: \"b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28\") " pod="openstack/dnsmasq-dns-789c5c5cb7-4722d"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.379691 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28-dns-svc\") pod \"dnsmasq-dns-789c5c5cb7-4722d\" (UID: \"b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28\") " pod="openstack/dnsmasq-dns-789c5c5cb7-4722d"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.381897 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28-ovsdbserver-sb\") pod \"dnsmasq-dns-789c5c5cb7-4722d\" (UID: \"b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28\") " pod="openstack/dnsmasq-dns-789c5c5cb7-4722d"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.381933 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28-dns-swift-storage-0\") pod \"dnsmasq-dns-789c5c5cb7-4722d\" (UID: \"b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28\") " pod="openstack/dnsmasq-dns-789c5c5cb7-4722d"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.412065 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28zkr\" (UniqueName: \"kubernetes.io/projected/b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28-kube-api-access-28zkr\") pod \"dnsmasq-dns-789c5c5cb7-4722d\" (UID: \"b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28\") " pod="openstack/dnsmasq-dns-789c5c5cb7-4722d"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.426678 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-789c5c5cb7-4722d"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.480547 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d708ae2-3c0a-4134-97fe-36270231010f-combined-ca-bundle\") pod \"barbican-api-7bc8976f54-hn8pf\" (UID: \"4d708ae2-3c0a-4134-97fe-36270231010f\") " pod="openstack/barbican-api-7bc8976f54-hn8pf"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.480924 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d708ae2-3c0a-4134-97fe-36270231010f-logs\") pod \"barbican-api-7bc8976f54-hn8pf\" (UID: \"4d708ae2-3c0a-4134-97fe-36270231010f\") " pod="openstack/barbican-api-7bc8976f54-hn8pf"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.480960 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4d708ae2-3c0a-4134-97fe-36270231010f-config-data-custom\") pod \"barbican-api-7bc8976f54-hn8pf\" (UID: \"4d708ae2-3c0a-4134-97fe-36270231010f\") " pod="openstack/barbican-api-7bc8976f54-hn8pf"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.481020 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d708ae2-3c0a-4134-97fe-36270231010f-config-data\") pod \"barbican-api-7bc8976f54-hn8pf\" (UID: \"4d708ae2-3c0a-4134-97fe-36270231010f\") " pod="openstack/barbican-api-7bc8976f54-hn8pf"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.481110 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9p2jz\" (UniqueName: \"kubernetes.io/projected/4d708ae2-3c0a-4134-97fe-36270231010f-kube-api-access-9p2jz\") pod \"barbican-api-7bc8976f54-hn8pf\" (UID: \"4d708ae2-3c0a-4134-97fe-36270231010f\") " pod="openstack/barbican-api-7bc8976f54-hn8pf"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.483687 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d708ae2-3c0a-4134-97fe-36270231010f-logs\") pod \"barbican-api-7bc8976f54-hn8pf\" (UID: \"4d708ae2-3c0a-4134-97fe-36270231010f\") " pod="openstack/barbican-api-7bc8976f54-hn8pf"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.490412 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d708ae2-3c0a-4134-97fe-36270231010f-config-data\") pod \"barbican-api-7bc8976f54-hn8pf\" (UID: \"4d708ae2-3c0a-4134-97fe-36270231010f\") " pod="openstack/barbican-api-7bc8976f54-hn8pf"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.497713 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d708ae2-3c0a-4134-97fe-36270231010f-combined-ca-bundle\") pod \"barbican-api-7bc8976f54-hn8pf\" (UID: \"4d708ae2-3c0a-4134-97fe-36270231010f\") " pod="openstack/barbican-api-7bc8976f54-hn8pf"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.515299 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4d708ae2-3c0a-4134-97fe-36270231010f-config-data-custom\") pod \"barbican-api-7bc8976f54-hn8pf\" (UID: \"4d708ae2-3c0a-4134-97fe-36270231010f\") " pod="openstack/barbican-api-7bc8976f54-hn8pf"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.520959 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9p2jz\" (UniqueName: \"kubernetes.io/projected/4d708ae2-3c0a-4134-97fe-36270231010f-kube-api-access-9p2jz\") pod \"barbican-api-7bc8976f54-hn8pf\" (UID: \"4d708ae2-3c0a-4134-97fe-36270231010f\") " pod="openstack/barbican-api-7bc8976f54-hn8pf"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.548739 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7bc8976f54-hn8pf"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.729290 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65965d6475-4jj4z" event={"ID":"0379ff65-3714-49fe-8f61-22fa65b88922","Type":"ContainerStarted","Data":"86895d8912f9a31ef1c666669ccaa58c5876e12464ff94ad6e8ad0c843ad1dea"}
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.729582 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-65965d6475-4jj4z"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.732028 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db8cd8b46-ffl2g" event={"ID":"c622f06b-5b3c-45e4-890e-9f7ba2283ab3","Type":"ContainerStarted","Data":"a9bca1388e25f62ff25199b234125eeda96d3d1d31d365b2b09d8b7ef2882ac2"}
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.760608 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"143eb6df-5711-4616-baaa-42417113bfed","Type":"ContainerStarted","Data":"a1a8f1bb0109a39a22c7d1184ef9832ec739c4597a7672dc07759640a1d91ad2"}
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.762188 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-65965d6475-4jj4z" podStartSLOduration=8.762175273 podStartE2EDuration="8.762175273s" podCreationTimestamp="2025-12-03 13:18:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:18:41.750712234 +0000 UTC m=+1381.217143425" watchObservedRunningTime="2025-12-03 13:18:41.762175273 +0000 UTC m=+1381.228606464"
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.765890 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-868bb78845-npjxs" event={"ID":"652fbb43-2a64-471b-9123-cd6734de8993","Type":"ContainerStarted","Data":"3a12cebb702b13eea4077062b807ed5430fe94228f69e68f035721862b2d8dc1"}
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.781529 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59f49d79c7-qt4rk" event={"ID":"5a350421-5f01-4d60-92b8-edc85e4ef3c5","Type":"ContainerStarted","Data":"b2aad61c6ea50ba1674d4cd94376a02d3c14983c93ada5dd1ed27108c7eb46d1"}
Dec 03 13:18:41 crc kubenswrapper[4986]: I1203 13:18:41.943691 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-747896b766-b8kzr"]
Dec 03 13:18:42 crc kubenswrapper[4986]: I1203 13:18:42.118158 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-789c5c5cb7-4722d"]
Dec 03 13:18:42 crc kubenswrapper[4986]: I1203 13:18:42.130111 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-787dc78df5-jtcv6"]
Dec 03 13:18:42 crc kubenswrapper[4986]: I1203 13:18:42.297627 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7bc8976f54-hn8pf"]
Dec 03 13:18:42 crc kubenswrapper[4986]: I1203 13:18:42.791023 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-789c5c5cb7-4722d" event={"ID":"b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28","Type":"ContainerStarted","Data":"63d700d4a1f4a731dcbf762e1c5948ffdb8fc96b0ae9c649604f0a5f3d0c9cb7"}
Dec 03 13:18:42 crc kubenswrapper[4986]: I1203 13:18:42.792663 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7bc8976f54-hn8pf" event={"ID":"4d708ae2-3c0a-4134-97fe-36270231010f","Type":"ContainerStarted","Data":"998122e1b817fdf5a5b212f9bd8251fdc89f312f87c34ec8b24f827705a7b221"}
Dec 03 13:18:42 crc kubenswrapper[4986]: I1203 13:18:42.794563 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-747896b766-b8kzr" event={"ID":"ebfa5ced-0a56-44db-ba24-d5f663d65920","Type":"ContainerStarted","Data":"6e2036bb202dd39538d7ff16edda26b85f47c2e5f0661aa32afbe5330fde1fb9"}
Dec 03 13:18:42 crc kubenswrapper[4986]: I1203 13:18:42.795679 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-787dc78df5-jtcv6" event={"ID":"17f8ed93-1afd-41c1-a52b-addefeb38ab0","Type":"ContainerStarted","Data":"95a3e285cc26d9e6cb56869f66df2676cd72be235d07ab6321f3c43d6ea29c8f"}
Dec 03 13:18:42 crc kubenswrapper[4986]: I1203 13:18:42.795870 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-65965d6475-4jj4z" podUID="0379ff65-3714-49fe-8f61-22fa65b88922" containerName="dnsmasq-dns" containerID="cri-o://86895d8912f9a31ef1c666669ccaa58c5876e12464ff94ad6e8ad0c843ad1dea" gracePeriod=10
Dec 03 13:18:43 crc kubenswrapper[4986]: I1203 13:18:43.816307 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-868bb78845-npjxs" event={"ID":"652fbb43-2a64-471b-9123-cd6734de8993","Type":"ContainerStarted","Data":"c04ea1eb7488e67c019f5ec21028f93425962b4aef8e94e3be3048a57ecd38a5"}
Dec 03 13:18:43 crc kubenswrapper[4986]: I1203 13:18:43.817184 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-868bb78845-npjxs"
Dec 03 13:18:43 crc kubenswrapper[4986]: I1203 13:18:43.854703 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-868bb78845-npjxs" podStartSLOduration=7.8546819679999995 podStartE2EDuration="7.854681968s" podCreationTimestamp="2025-12-03 13:18:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:18:43.847059322 +0000 UTC m=+1383.313490513" watchObservedRunningTime="2025-12-03 13:18:43.854681968 +0000 UTC m=+1383.321113159"
Dec 03 13:18:43 crc kubenswrapper[4986]: I1203 13:18:43.869826 4986 kubelet.go:2453] "SyncLoop (PLEG): event
for pod" pod="openstack/neutron-59f49d79c7-qt4rk" event={"ID":"5a350421-5f01-4d60-92b8-edc85e4ef3c5","Type":"ContainerStarted","Data":"ce199db0304c378cf573bc89cd239ea48819e47dc44c854aeb8f2462fabefa0c"} Dec 03 13:18:43 crc kubenswrapper[4986]: I1203 13:18:43.898039 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65965d6475-4jj4z" Dec 03 13:18:43 crc kubenswrapper[4986]: I1203 13:18:43.909475 4986 generic.go:334] "Generic (PLEG): container finished" podID="0379ff65-3714-49fe-8f61-22fa65b88922" containerID="86895d8912f9a31ef1c666669ccaa58c5876e12464ff94ad6e8ad0c843ad1dea" exitCode=0 Dec 03 13:18:43 crc kubenswrapper[4986]: I1203 13:18:43.909545 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65965d6475-4jj4z" event={"ID":"0379ff65-3714-49fe-8f61-22fa65b88922","Type":"ContainerDied","Data":"86895d8912f9a31ef1c666669ccaa58c5876e12464ff94ad6e8ad0c843ad1dea"} Dec 03 13:18:43 crc kubenswrapper[4986]: I1203 13:18:43.909580 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65965d6475-4jj4z" event={"ID":"0379ff65-3714-49fe-8f61-22fa65b88922","Type":"ContainerDied","Data":"05be4e92340dc3294644f4f5c83cbad6e4d10644880ea592e4c33c2601ebf24b"} Dec 03 13:18:43 crc kubenswrapper[4986]: I1203 13:18:43.909596 4986 scope.go:117] "RemoveContainer" containerID="86895d8912f9a31ef1c666669ccaa58c5876e12464ff94ad6e8ad0c843ad1dea" Dec 03 13:18:43 crc kubenswrapper[4986]: I1203 13:18:43.923084 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db8cd8b46-ffl2g" event={"ID":"c622f06b-5b3c-45e4-890e-9f7ba2283ab3","Type":"ContainerStarted","Data":"b4050417520f11f0f680755923bce54e7fda57da64625d540681a010e84da354"} Dec 03 13:18:43 crc kubenswrapper[4986]: I1203 13:18:43.923473 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-db8cd8b46-ffl2g" Dec 03 13:18:43 crc kubenswrapper[4986]: I1203 13:18:43.923741 
4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-db8cd8b46-ffl2g" Dec 03 13:18:43 crc kubenswrapper[4986]: I1203 13:18:43.942237 4986 generic.go:334] "Generic (PLEG): container finished" podID="b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28" containerID="af7585f513bae018e351dcf4d3e5a466b52e06383153fcb7da49251b017d8b92" exitCode=0 Dec 03 13:18:43 crc kubenswrapper[4986]: I1203 13:18:43.942356 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-789c5c5cb7-4722d" event={"ID":"b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28","Type":"ContainerDied","Data":"af7585f513bae018e351dcf4d3e5a466b52e06383153fcb7da49251b017d8b92"} Dec 03 13:18:43 crc kubenswrapper[4986]: I1203 13:18:43.945052 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5744ccfcbb-rcmx5"] Dec 03 13:18:43 crc kubenswrapper[4986]: E1203 13:18:43.945510 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0379ff65-3714-49fe-8f61-22fa65b88922" containerName="init" Dec 03 13:18:43 crc kubenswrapper[4986]: I1203 13:18:43.945530 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="0379ff65-3714-49fe-8f61-22fa65b88922" containerName="init" Dec 03 13:18:43 crc kubenswrapper[4986]: E1203 13:18:43.945550 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0379ff65-3714-49fe-8f61-22fa65b88922" containerName="dnsmasq-dns" Dec 03 13:18:43 crc kubenswrapper[4986]: I1203 13:18:43.945557 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="0379ff65-3714-49fe-8f61-22fa65b88922" containerName="dnsmasq-dns" Dec 03 13:18:43 crc kubenswrapper[4986]: I1203 13:18:43.945767 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="0379ff65-3714-49fe-8f61-22fa65b88922" containerName="dnsmasq-dns" Dec 03 13:18:43 crc kubenswrapper[4986]: I1203 13:18:43.946696 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5744ccfcbb-rcmx5" Dec 03 13:18:43 crc kubenswrapper[4986]: I1203 13:18:43.953651 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Dec 03 13:18:43 crc kubenswrapper[4986]: I1203 13:18:43.953884 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Dec 03 13:18:43 crc kubenswrapper[4986]: I1203 13:18:43.962025 4986 scope.go:117] "RemoveContainer" containerID="db3dc22832047931f86bf6218091734d04161dc8d7dde4f969f6a0483e6f489e" Dec 03 13:18:43 crc kubenswrapper[4986]: I1203 13:18:43.963339 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7bc8976f54-hn8pf" event={"ID":"4d708ae2-3c0a-4134-97fe-36270231010f","Type":"ContainerStarted","Data":"e221479e09edabe1d4433296404302474b808fd5df11d6fedd4a03e7a8e8e042"} Dec 03 13:18:43 crc kubenswrapper[4986]: I1203 13:18:43.964198 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7bc8976f54-hn8pf" Dec 03 13:18:43 crc kubenswrapper[4986]: I1203 13:18:43.964231 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7bc8976f54-hn8pf" Dec 03 13:18:44 crc kubenswrapper[4986]: I1203 13:18:44.040767 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0379ff65-3714-49fe-8f61-22fa65b88922-ovsdbserver-sb\") pod \"0379ff65-3714-49fe-8f61-22fa65b88922\" (UID: \"0379ff65-3714-49fe-8f61-22fa65b88922\") " Dec 03 13:18:44 crc kubenswrapper[4986]: I1203 13:18:44.040997 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0379ff65-3714-49fe-8f61-22fa65b88922-ovsdbserver-nb\") pod \"0379ff65-3714-49fe-8f61-22fa65b88922\" (UID: \"0379ff65-3714-49fe-8f61-22fa65b88922\") " Dec 03 13:18:44 crc 
kubenswrapper[4986]: I1203 13:18:44.041025 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0379ff65-3714-49fe-8f61-22fa65b88922-dns-swift-storage-0\") pod \"0379ff65-3714-49fe-8f61-22fa65b88922\" (UID: \"0379ff65-3714-49fe-8f61-22fa65b88922\") " Dec 03 13:18:44 crc kubenswrapper[4986]: I1203 13:18:44.041047 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6x4g8\" (UniqueName: \"kubernetes.io/projected/0379ff65-3714-49fe-8f61-22fa65b88922-kube-api-access-6x4g8\") pod \"0379ff65-3714-49fe-8f61-22fa65b88922\" (UID: \"0379ff65-3714-49fe-8f61-22fa65b88922\") " Dec 03 13:18:44 crc kubenswrapper[4986]: I1203 13:18:44.041090 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0379ff65-3714-49fe-8f61-22fa65b88922-config\") pod \"0379ff65-3714-49fe-8f61-22fa65b88922\" (UID: \"0379ff65-3714-49fe-8f61-22fa65b88922\") " Dec 03 13:18:44 crc kubenswrapper[4986]: I1203 13:18:44.041123 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0379ff65-3714-49fe-8f61-22fa65b88922-dns-svc\") pod \"0379ff65-3714-49fe-8f61-22fa65b88922\" (UID: \"0379ff65-3714-49fe-8f61-22fa65b88922\") " Dec 03 13:18:44 crc kubenswrapper[4986]: I1203 13:18:44.050476 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0379ff65-3714-49fe-8f61-22fa65b88922-kube-api-access-6x4g8" (OuterVolumeSpecName: "kube-api-access-6x4g8") pod "0379ff65-3714-49fe-8f61-22fa65b88922" (UID: "0379ff65-3714-49fe-8f61-22fa65b88922"). InnerVolumeSpecName "kube-api-access-6x4g8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:18:44 crc kubenswrapper[4986]: I1203 13:18:44.074406 4986 scope.go:117] "RemoveContainer" containerID="86895d8912f9a31ef1c666669ccaa58c5876e12464ff94ad6e8ad0c843ad1dea" Dec 03 13:18:44 crc kubenswrapper[4986]: E1203 13:18:44.075478 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86895d8912f9a31ef1c666669ccaa58c5876e12464ff94ad6e8ad0c843ad1dea\": container with ID starting with 86895d8912f9a31ef1c666669ccaa58c5876e12464ff94ad6e8ad0c843ad1dea not found: ID does not exist" containerID="86895d8912f9a31ef1c666669ccaa58c5876e12464ff94ad6e8ad0c843ad1dea" Dec 03 13:18:44 crc kubenswrapper[4986]: I1203 13:18:44.075520 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86895d8912f9a31ef1c666669ccaa58c5876e12464ff94ad6e8ad0c843ad1dea"} err="failed to get container status \"86895d8912f9a31ef1c666669ccaa58c5876e12464ff94ad6e8ad0c843ad1dea\": rpc error: code = NotFound desc = could not find container \"86895d8912f9a31ef1c666669ccaa58c5876e12464ff94ad6e8ad0c843ad1dea\": container with ID starting with 86895d8912f9a31ef1c666669ccaa58c5876e12464ff94ad6e8ad0c843ad1dea not found: ID does not exist" Dec 03 13:18:44 crc kubenswrapper[4986]: I1203 13:18:44.075545 4986 scope.go:117] "RemoveContainer" containerID="db3dc22832047931f86bf6218091734d04161dc8d7dde4f969f6a0483e6f489e" Dec 03 13:18:44 crc kubenswrapper[4986]: E1203 13:18:44.079545 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db3dc22832047931f86bf6218091734d04161dc8d7dde4f969f6a0483e6f489e\": container with ID starting with db3dc22832047931f86bf6218091734d04161dc8d7dde4f969f6a0483e6f489e not found: ID does not exist" containerID="db3dc22832047931f86bf6218091734d04161dc8d7dde4f969f6a0483e6f489e" Dec 03 13:18:44 crc kubenswrapper[4986]: I1203 13:18:44.079580 
4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db3dc22832047931f86bf6218091734d04161dc8d7dde4f969f6a0483e6f489e"} err="failed to get container status \"db3dc22832047931f86bf6218091734d04161dc8d7dde4f969f6a0483e6f489e\": rpc error: code = NotFound desc = could not find container \"db3dc22832047931f86bf6218091734d04161dc8d7dde4f969f6a0483e6f489e\": container with ID starting with db3dc22832047931f86bf6218091734d04161dc8d7dde4f969f6a0483e6f489e not found: ID does not exist" Dec 03 13:18:44 crc kubenswrapper[4986]: I1203 13:18:44.100262 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5744ccfcbb-rcmx5"] Dec 03 13:18:44 crc kubenswrapper[4986]: I1203 13:18:44.106660 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0379ff65-3714-49fe-8f61-22fa65b88922-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0379ff65-3714-49fe-8f61-22fa65b88922" (UID: "0379ff65-3714-49fe-8f61-22fa65b88922"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:18:44 crc kubenswrapper[4986]: I1203 13:18:44.107868 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db8cd8b46-ffl2g" podStartSLOduration=8.107845739 podStartE2EDuration="8.107845739s" podCreationTimestamp="2025-12-03 13:18:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:18:43.973504964 +0000 UTC m=+1383.439936175" watchObservedRunningTime="2025-12-03 13:18:44.107845739 +0000 UTC m=+1383.574276930" Dec 03 13:18:44 crc kubenswrapper[4986]: I1203 13:18:44.125720 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0379ff65-3714-49fe-8f61-22fa65b88922-config" (OuterVolumeSpecName: "config") pod "0379ff65-3714-49fe-8f61-22fa65b88922" (UID: "0379ff65-3714-49fe-8f61-22fa65b88922"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:18:44 crc kubenswrapper[4986]: I1203 13:18:44.126274 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7bc8976f54-hn8pf" podStartSLOduration=3.126255617 podStartE2EDuration="3.126255617s" podCreationTimestamp="2025-12-03 13:18:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:18:44.060170333 +0000 UTC m=+1383.526601534" watchObservedRunningTime="2025-12-03 13:18:44.126255617 +0000 UTC m=+1383.592686808" Dec 03 13:18:44 crc kubenswrapper[4986]: I1203 13:18:44.143233 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37ccb095-f90f-4383-88e9-05d2d82cab28-logs\") pod \"barbican-api-5744ccfcbb-rcmx5\" (UID: \"37ccb095-f90f-4383-88e9-05d2d82cab28\") " 
pod="openstack/barbican-api-5744ccfcbb-rcmx5" Dec 03 13:18:44 crc kubenswrapper[4986]: I1203 13:18:44.143296 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37ccb095-f90f-4383-88e9-05d2d82cab28-combined-ca-bundle\") pod \"barbican-api-5744ccfcbb-rcmx5\" (UID: \"37ccb095-f90f-4383-88e9-05d2d82cab28\") " pod="openstack/barbican-api-5744ccfcbb-rcmx5" Dec 03 13:18:44 crc kubenswrapper[4986]: I1203 13:18:44.143325 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjtwz\" (UniqueName: \"kubernetes.io/projected/37ccb095-f90f-4383-88e9-05d2d82cab28-kube-api-access-tjtwz\") pod \"barbican-api-5744ccfcbb-rcmx5\" (UID: \"37ccb095-f90f-4383-88e9-05d2d82cab28\") " pod="openstack/barbican-api-5744ccfcbb-rcmx5" Dec 03 13:18:44 crc kubenswrapper[4986]: I1203 13:18:44.143367 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37ccb095-f90f-4383-88e9-05d2d82cab28-config-data\") pod \"barbican-api-5744ccfcbb-rcmx5\" (UID: \"37ccb095-f90f-4383-88e9-05d2d82cab28\") " pod="openstack/barbican-api-5744ccfcbb-rcmx5" Dec 03 13:18:44 crc kubenswrapper[4986]: I1203 13:18:44.143386 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/37ccb095-f90f-4383-88e9-05d2d82cab28-internal-tls-certs\") pod \"barbican-api-5744ccfcbb-rcmx5\" (UID: \"37ccb095-f90f-4383-88e9-05d2d82cab28\") " pod="openstack/barbican-api-5744ccfcbb-rcmx5" Dec 03 13:18:44 crc kubenswrapper[4986]: I1203 13:18:44.143762 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/37ccb095-f90f-4383-88e9-05d2d82cab28-public-tls-certs\") pod 
\"barbican-api-5744ccfcbb-rcmx5\" (UID: \"37ccb095-f90f-4383-88e9-05d2d82cab28\") " pod="openstack/barbican-api-5744ccfcbb-rcmx5" Dec 03 13:18:44 crc kubenswrapper[4986]: I1203 13:18:44.144056 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/37ccb095-f90f-4383-88e9-05d2d82cab28-config-data-custom\") pod \"barbican-api-5744ccfcbb-rcmx5\" (UID: \"37ccb095-f90f-4383-88e9-05d2d82cab28\") " pod="openstack/barbican-api-5744ccfcbb-rcmx5" Dec 03 13:18:44 crc kubenswrapper[4986]: I1203 13:18:44.144139 4986 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0379ff65-3714-49fe-8f61-22fa65b88922-config\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:44 crc kubenswrapper[4986]: I1203 13:18:44.144151 4986 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0379ff65-3714-49fe-8f61-22fa65b88922-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:44 crc kubenswrapper[4986]: I1203 13:18:44.144161 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6x4g8\" (UniqueName: \"kubernetes.io/projected/0379ff65-3714-49fe-8f61-22fa65b88922-kube-api-access-6x4g8\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:44 crc kubenswrapper[4986]: I1203 13:18:44.145599 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0379ff65-3714-49fe-8f61-22fa65b88922-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0379ff65-3714-49fe-8f61-22fa65b88922" (UID: "0379ff65-3714-49fe-8f61-22fa65b88922"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:18:44 crc kubenswrapper[4986]: I1203 13:18:44.215840 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0379ff65-3714-49fe-8f61-22fa65b88922-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0379ff65-3714-49fe-8f61-22fa65b88922" (UID: "0379ff65-3714-49fe-8f61-22fa65b88922"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:18:44 crc kubenswrapper[4986]: I1203 13:18:44.245959 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37ccb095-f90f-4383-88e9-05d2d82cab28-logs\") pod \"barbican-api-5744ccfcbb-rcmx5\" (UID: \"37ccb095-f90f-4383-88e9-05d2d82cab28\") " pod="openstack/barbican-api-5744ccfcbb-rcmx5" Dec 03 13:18:44 crc kubenswrapper[4986]: I1203 13:18:44.246008 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37ccb095-f90f-4383-88e9-05d2d82cab28-combined-ca-bundle\") pod \"barbican-api-5744ccfcbb-rcmx5\" (UID: \"37ccb095-f90f-4383-88e9-05d2d82cab28\") " pod="openstack/barbican-api-5744ccfcbb-rcmx5" Dec 03 13:18:44 crc kubenswrapper[4986]: I1203 13:18:44.246042 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjtwz\" (UniqueName: \"kubernetes.io/projected/37ccb095-f90f-4383-88e9-05d2d82cab28-kube-api-access-tjtwz\") pod \"barbican-api-5744ccfcbb-rcmx5\" (UID: \"37ccb095-f90f-4383-88e9-05d2d82cab28\") " pod="openstack/barbican-api-5744ccfcbb-rcmx5" Dec 03 13:18:44 crc kubenswrapper[4986]: I1203 13:18:44.246078 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37ccb095-f90f-4383-88e9-05d2d82cab28-config-data\") pod \"barbican-api-5744ccfcbb-rcmx5\" (UID: 
\"37ccb095-f90f-4383-88e9-05d2d82cab28\") " pod="openstack/barbican-api-5744ccfcbb-rcmx5" Dec 03 13:18:44 crc kubenswrapper[4986]: I1203 13:18:44.246104 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/37ccb095-f90f-4383-88e9-05d2d82cab28-internal-tls-certs\") pod \"barbican-api-5744ccfcbb-rcmx5\" (UID: \"37ccb095-f90f-4383-88e9-05d2d82cab28\") " pod="openstack/barbican-api-5744ccfcbb-rcmx5" Dec 03 13:18:44 crc kubenswrapper[4986]: I1203 13:18:44.246119 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/37ccb095-f90f-4383-88e9-05d2d82cab28-public-tls-certs\") pod \"barbican-api-5744ccfcbb-rcmx5\" (UID: \"37ccb095-f90f-4383-88e9-05d2d82cab28\") " pod="openstack/barbican-api-5744ccfcbb-rcmx5" Dec 03 13:18:44 crc kubenswrapper[4986]: I1203 13:18:44.246198 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/37ccb095-f90f-4383-88e9-05d2d82cab28-config-data-custom\") pod \"barbican-api-5744ccfcbb-rcmx5\" (UID: \"37ccb095-f90f-4383-88e9-05d2d82cab28\") " pod="openstack/barbican-api-5744ccfcbb-rcmx5" Dec 03 13:18:44 crc kubenswrapper[4986]: I1203 13:18:44.246263 4986 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0379ff65-3714-49fe-8f61-22fa65b88922-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:44 crc kubenswrapper[4986]: I1203 13:18:44.246274 4986 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0379ff65-3714-49fe-8f61-22fa65b88922-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:44 crc kubenswrapper[4986]: I1203 13:18:44.247939 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/37ccb095-f90f-4383-88e9-05d2d82cab28-logs\") pod \"barbican-api-5744ccfcbb-rcmx5\" (UID: \"37ccb095-f90f-4383-88e9-05d2d82cab28\") " pod="openstack/barbican-api-5744ccfcbb-rcmx5" Dec 03 13:18:44 crc kubenswrapper[4986]: I1203 13:18:44.247963 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0379ff65-3714-49fe-8f61-22fa65b88922-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0379ff65-3714-49fe-8f61-22fa65b88922" (UID: "0379ff65-3714-49fe-8f61-22fa65b88922"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:18:44 crc kubenswrapper[4986]: I1203 13:18:44.250942 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/37ccb095-f90f-4383-88e9-05d2d82cab28-public-tls-certs\") pod \"barbican-api-5744ccfcbb-rcmx5\" (UID: \"37ccb095-f90f-4383-88e9-05d2d82cab28\") " pod="openstack/barbican-api-5744ccfcbb-rcmx5" Dec 03 13:18:44 crc kubenswrapper[4986]: I1203 13:18:44.251257 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/37ccb095-f90f-4383-88e9-05d2d82cab28-internal-tls-certs\") pod \"barbican-api-5744ccfcbb-rcmx5\" (UID: \"37ccb095-f90f-4383-88e9-05d2d82cab28\") " pod="openstack/barbican-api-5744ccfcbb-rcmx5" Dec 03 13:18:44 crc kubenswrapper[4986]: I1203 13:18:44.257357 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37ccb095-f90f-4383-88e9-05d2d82cab28-config-data\") pod \"barbican-api-5744ccfcbb-rcmx5\" (UID: \"37ccb095-f90f-4383-88e9-05d2d82cab28\") " pod="openstack/barbican-api-5744ccfcbb-rcmx5" Dec 03 13:18:44 crc kubenswrapper[4986]: I1203 13:18:44.262201 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/37ccb095-f90f-4383-88e9-05d2d82cab28-config-data-custom\") pod \"barbican-api-5744ccfcbb-rcmx5\" (UID: \"37ccb095-f90f-4383-88e9-05d2d82cab28\") " pod="openstack/barbican-api-5744ccfcbb-rcmx5" Dec 03 13:18:44 crc kubenswrapper[4986]: I1203 13:18:44.264753 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37ccb095-f90f-4383-88e9-05d2d82cab28-combined-ca-bundle\") pod \"barbican-api-5744ccfcbb-rcmx5\" (UID: \"37ccb095-f90f-4383-88e9-05d2d82cab28\") " pod="openstack/barbican-api-5744ccfcbb-rcmx5" Dec 03 13:18:44 crc kubenswrapper[4986]: I1203 13:18:44.265681 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjtwz\" (UniqueName: \"kubernetes.io/projected/37ccb095-f90f-4383-88e9-05d2d82cab28-kube-api-access-tjtwz\") pod \"barbican-api-5744ccfcbb-rcmx5\" (UID: \"37ccb095-f90f-4383-88e9-05d2d82cab28\") " pod="openstack/barbican-api-5744ccfcbb-rcmx5" Dec 03 13:18:44 crc kubenswrapper[4986]: I1203 13:18:44.282204 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5744ccfcbb-rcmx5" Dec 03 13:18:44 crc kubenswrapper[4986]: I1203 13:18:44.348794 4986 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0379ff65-3714-49fe-8f61-22fa65b88922-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:44 crc kubenswrapper[4986]: I1203 13:18:44.848074 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5744ccfcbb-rcmx5"] Dec 03 13:18:44 crc kubenswrapper[4986]: I1203 13:18:44.985168 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59f49d79c7-qt4rk" event={"ID":"5a350421-5f01-4d60-92b8-edc85e4ef3c5","Type":"ContainerStarted","Data":"49e870f36179b2a1d3e719f8f24bebef46ee4c9a776a04255a218e7aa89ac190"} Dec 03 13:18:44 crc kubenswrapper[4986]: I1203 13:18:44.988636 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-59f49d79c7-qt4rk" Dec 03 13:18:44 crc kubenswrapper[4986]: I1203 13:18:44.999186 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65965d6475-4jj4z" Dec 03 13:18:45 crc kubenswrapper[4986]: I1203 13:18:45.014428 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5744ccfcbb-rcmx5" event={"ID":"37ccb095-f90f-4383-88e9-05d2d82cab28","Type":"ContainerStarted","Data":"8c33a2fa7cf9d57e03b782ab865e9589b691c8679de6934c8de88a88981b04a7"} Dec 03 13:18:45 crc kubenswrapper[4986]: I1203 13:18:45.030824 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-59f49d79c7-qt4rk" podStartSLOduration=10.030801345 podStartE2EDuration="10.030801345s" podCreationTimestamp="2025-12-03 13:18:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:18:45.021802151 +0000 UTC m=+1384.488233362" watchObservedRunningTime="2025-12-03 13:18:45.030801345 +0000 UTC m=+1384.497232546" Dec 03 13:18:45 crc kubenswrapper[4986]: I1203 13:18:45.037270 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db8cd8b46-ffl2g" event={"ID":"c622f06b-5b3c-45e4-890e-9f7ba2283ab3","Type":"ContainerStarted","Data":"1d3d366abba735b48e9344f2ba71530880ebc198ca2df639a485f62dff0c6023"} Dec 03 13:18:45 crc kubenswrapper[4986]: I1203 13:18:45.048317 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-789c5c5cb7-4722d" event={"ID":"b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28","Type":"ContainerStarted","Data":"941d7e6ffa8dcf723671459f75d3cab253afd3f5e8f93f0904aca73f7bd6be06"} Dec 03 13:18:45 crc kubenswrapper[4986]: I1203 13:18:45.048352 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-789c5c5cb7-4722d" Dec 03 13:18:45 crc kubenswrapper[4986]: I1203 13:18:45.053238 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7bc8976f54-hn8pf" 
event={"ID":"4d708ae2-3c0a-4134-97fe-36270231010f","Type":"ContainerStarted","Data":"639d14405922b129b66585e80277451a8daf2c9acad37a002013359435e61394"} Dec 03 13:18:45 crc kubenswrapper[4986]: I1203 13:18:45.060811 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65965d6475-4jj4z"] Dec 03 13:18:45 crc kubenswrapper[4986]: I1203 13:18:45.076483 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-65965d6475-4jj4z"] Dec 03 13:18:45 crc kubenswrapper[4986]: I1203 13:18:45.082103 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-789c5c5cb7-4722d" podStartSLOduration=5.082087788 podStartE2EDuration="5.082087788s" podCreationTimestamp="2025-12-03 13:18:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:18:45.066173099 +0000 UTC m=+1384.532604290" watchObservedRunningTime="2025-12-03 13:18:45.082087788 +0000 UTC m=+1384.548518979" Dec 03 13:18:46 crc kubenswrapper[4986]: I1203 13:18:46.086304 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5744ccfcbb-rcmx5" event={"ID":"37ccb095-f90f-4383-88e9-05d2d82cab28","Type":"ContainerStarted","Data":"2565f9bb1bc6536e2360e03d438f01e22d79e524adb1cf639843e6875260e86a"} Dec 03 13:18:46 crc kubenswrapper[4986]: I1203 13:18:46.957428 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0379ff65-3714-49fe-8f61-22fa65b88922" path="/var/lib/kubelet/pods/0379ff65-3714-49fe-8f61-22fa65b88922/volumes" Dec 03 13:18:47 crc kubenswrapper[4986]: I1203 13:18:47.107957 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5744ccfcbb-rcmx5" event={"ID":"37ccb095-f90f-4383-88e9-05d2d82cab28","Type":"ContainerStarted","Data":"4ea566c846db2d7deca38a7591fa08f54dc2711e617640f00c341f2ef9087f68"} Dec 03 13:18:47 crc kubenswrapper[4986]: I1203 13:18:47.108325 4986 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5744ccfcbb-rcmx5" Dec 03 13:18:47 crc kubenswrapper[4986]: I1203 13:18:47.113192 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-787dc78df5-jtcv6" event={"ID":"17f8ed93-1afd-41c1-a52b-addefeb38ab0","Type":"ContainerStarted","Data":"44a808a0f253913ae5f4cda846825d6872700526d59c9a8152efc9b38f6a5714"} Dec 03 13:18:47 crc kubenswrapper[4986]: I1203 13:18:47.113243 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-787dc78df5-jtcv6" event={"ID":"17f8ed93-1afd-41c1-a52b-addefeb38ab0","Type":"ContainerStarted","Data":"e7dd925855f6e1b48782838f0b890b7e0a624c090f12974b90e56ffa12186a17"} Dec 03 13:18:47 crc kubenswrapper[4986]: I1203 13:18:47.116511 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qpv6c" event={"ID":"7b8ed42a-89ce-4098-9489-5291e678bf18","Type":"ContainerStarted","Data":"18886cff3842efc33ba6818a9946242f28ff95bfae0ee8564fed6be9b265c47f"} Dec 03 13:18:47 crc kubenswrapper[4986]: I1203 13:18:47.118336 4986 generic.go:334] "Generic (PLEG): container finished" podID="0874137d-06da-450a-9e93-ad53257c5115" containerID="eb1cbefc6c37e055dbcce2b749fbdf02cc41bb51b8fcdfd46e94ad0193235add" exitCode=0 Dec 03 13:18:47 crc kubenswrapper[4986]: I1203 13:18:47.118394 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-8f5bt" event={"ID":"0874137d-06da-450a-9e93-ad53257c5115","Type":"ContainerDied","Data":"eb1cbefc6c37e055dbcce2b749fbdf02cc41bb51b8fcdfd46e94ad0193235add"} Dec 03 13:18:47 crc kubenswrapper[4986]: I1203 13:18:47.120705 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-747896b766-b8kzr" event={"ID":"ebfa5ced-0a56-44db-ba24-d5f663d65920","Type":"ContainerStarted","Data":"ff90c014d19152f1edc09e400d0a74457e9e06ff72c79e1a43507df7fca605bb"} Dec 03 13:18:47 crc kubenswrapper[4986]: I1203 
13:18:47.120743 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-747896b766-b8kzr" event={"ID":"ebfa5ced-0a56-44db-ba24-d5f663d65920","Type":"ContainerStarted","Data":"056caf3ea0ee6a8cf6cb0a86f7c3de7d2912b482e505491f42eb575dc5ea8456"} Dec 03 13:18:47 crc kubenswrapper[4986]: I1203 13:18:47.131877 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5744ccfcbb-rcmx5" podStartSLOduration=4.131856908 podStartE2EDuration="4.131856908s" podCreationTimestamp="2025-12-03 13:18:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:18:47.126656098 +0000 UTC m=+1386.593087289" watchObservedRunningTime="2025-12-03 13:18:47.131856908 +0000 UTC m=+1386.598288089" Dec 03 13:18:47 crc kubenswrapper[4986]: I1203 13:18:47.154670 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-747896b766-b8kzr" podStartSLOduration=3.140629608 podStartE2EDuration="7.154647433s" podCreationTimestamp="2025-12-03 13:18:40 +0000 UTC" firstStartedPulling="2025-12-03 13:18:41.95217935 +0000 UTC m=+1381.418610541" lastFinishedPulling="2025-12-03 13:18:45.966197175 +0000 UTC m=+1385.432628366" observedRunningTime="2025-12-03 13:18:47.154572481 +0000 UTC m=+1386.621003692" watchObservedRunningTime="2025-12-03 13:18:47.154647433 +0000 UTC m=+1386.621078624" Dec 03 13:18:47 crc kubenswrapper[4986]: I1203 13:18:47.226344 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-qpv6c" podStartSLOduration=2.942265484 podStartE2EDuration="48.226328478s" podCreationTimestamp="2025-12-03 13:17:59 +0000 UTC" firstStartedPulling="2025-12-03 13:18:00.690944288 +0000 UTC m=+1340.157375479" lastFinishedPulling="2025-12-03 13:18:45.975007282 +0000 UTC m=+1385.441438473" observedRunningTime="2025-12-03 13:18:47.195227628 
+0000 UTC m=+1386.661658829" watchObservedRunningTime="2025-12-03 13:18:47.226328478 +0000 UTC m=+1386.692759659" Dec 03 13:18:47 crc kubenswrapper[4986]: I1203 13:18:47.236161 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-787dc78df5-jtcv6" podStartSLOduration=3.402861345 podStartE2EDuration="7.236140273s" podCreationTimestamp="2025-12-03 13:18:40 +0000 UTC" firstStartedPulling="2025-12-03 13:18:42.137249244 +0000 UTC m=+1381.603680425" lastFinishedPulling="2025-12-03 13:18:45.970528162 +0000 UTC m=+1385.436959353" observedRunningTime="2025-12-03 13:18:47.222847383 +0000 UTC m=+1386.689278574" watchObservedRunningTime="2025-12-03 13:18:47.236140273 +0000 UTC m=+1386.702571464" Dec 03 13:18:48 crc kubenswrapper[4986]: I1203 13:18:48.129813 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5744ccfcbb-rcmx5" Dec 03 13:18:49 crc kubenswrapper[4986]: I1203 13:18:49.122140 4986 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-cc774c568-phcpp" podUID="cee5d9b6-d11e-4bad-b013-196a4f401404" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.141:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.141:8443: connect: connection refused" Dec 03 13:18:49 crc kubenswrapper[4986]: I1203 13:18:49.208046 4986 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7797f969d4-6c2wn" podUID="4d67e23b-bda4-42d5-81b6-be58c643861d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.142:8443: connect: connection refused" Dec 03 13:18:50 crc kubenswrapper[4986]: I1203 13:18:50.569589 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-8f5bt" Dec 03 13:18:50 crc kubenswrapper[4986]: I1203 13:18:50.671790 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qph6n\" (UniqueName: \"kubernetes.io/projected/0874137d-06da-450a-9e93-ad53257c5115-kube-api-access-qph6n\") pod \"0874137d-06da-450a-9e93-ad53257c5115\" (UID: \"0874137d-06da-450a-9e93-ad53257c5115\") " Dec 03 13:18:50 crc kubenswrapper[4986]: I1203 13:18:50.671912 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0874137d-06da-450a-9e93-ad53257c5115-config-data\") pod \"0874137d-06da-450a-9e93-ad53257c5115\" (UID: \"0874137d-06da-450a-9e93-ad53257c5115\") " Dec 03 13:18:50 crc kubenswrapper[4986]: I1203 13:18:50.671959 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0874137d-06da-450a-9e93-ad53257c5115-db-sync-config-data\") pod \"0874137d-06da-450a-9e93-ad53257c5115\" (UID: \"0874137d-06da-450a-9e93-ad53257c5115\") " Dec 03 13:18:50 crc kubenswrapper[4986]: I1203 13:18:50.672036 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0874137d-06da-450a-9e93-ad53257c5115-combined-ca-bundle\") pod \"0874137d-06da-450a-9e93-ad53257c5115\" (UID: \"0874137d-06da-450a-9e93-ad53257c5115\") " Dec 03 13:18:50 crc kubenswrapper[4986]: I1203 13:18:50.681432 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0874137d-06da-450a-9e93-ad53257c5115-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "0874137d-06da-450a-9e93-ad53257c5115" (UID: "0874137d-06da-450a-9e93-ad53257c5115"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:18:50 crc kubenswrapper[4986]: I1203 13:18:50.683401 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0874137d-06da-450a-9e93-ad53257c5115-kube-api-access-qph6n" (OuterVolumeSpecName: "kube-api-access-qph6n") pod "0874137d-06da-450a-9e93-ad53257c5115" (UID: "0874137d-06da-450a-9e93-ad53257c5115"). InnerVolumeSpecName "kube-api-access-qph6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:18:50 crc kubenswrapper[4986]: I1203 13:18:50.707371 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0874137d-06da-450a-9e93-ad53257c5115-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0874137d-06da-450a-9e93-ad53257c5115" (UID: "0874137d-06da-450a-9e93-ad53257c5115"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:18:50 crc kubenswrapper[4986]: I1203 13:18:50.740227 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0874137d-06da-450a-9e93-ad53257c5115-config-data" (OuterVolumeSpecName: "config-data") pod "0874137d-06da-450a-9e93-ad53257c5115" (UID: "0874137d-06da-450a-9e93-ad53257c5115"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:18:50 crc kubenswrapper[4986]: I1203 13:18:50.774578 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qph6n\" (UniqueName: \"kubernetes.io/projected/0874137d-06da-450a-9e93-ad53257c5115-kube-api-access-qph6n\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:50 crc kubenswrapper[4986]: I1203 13:18:50.774610 4986 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0874137d-06da-450a-9e93-ad53257c5115-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:50 crc kubenswrapper[4986]: I1203 13:18:50.774620 4986 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0874137d-06da-450a-9e93-ad53257c5115-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:50 crc kubenswrapper[4986]: I1203 13:18:50.774628 4986 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0874137d-06da-450a-9e93-ad53257c5115-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:51 crc kubenswrapper[4986]: I1203 13:18:51.179621 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-8f5bt" event={"ID":"0874137d-06da-450a-9e93-ad53257c5115","Type":"ContainerDied","Data":"88889fbfd0cbfe77bf4a959c1e3815b4fe66102110e77687108f934a453f8112"} Dec 03 13:18:51 crc kubenswrapper[4986]: I1203 13:18:51.179675 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88889fbfd0cbfe77bf4a959c1e3815b4fe66102110e77687108f934a453f8112" Dec 03 13:18:51 crc kubenswrapper[4986]: I1203 13:18:51.179718 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-8f5bt" Dec 03 13:18:51 crc kubenswrapper[4986]: I1203 13:18:51.430622 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-789c5c5cb7-4722d" Dec 03 13:18:51 crc kubenswrapper[4986]: I1203 13:18:51.519566 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-sxknq"] Dec 03 13:18:51 crc kubenswrapper[4986]: I1203 13:18:51.520150 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-76fcf4b695-sxknq" podUID="d09ba1cc-e27c-4ea3-9f4a-2f7791afce95" containerName="dnsmasq-dns" containerID="cri-o://dd7880f3d2c0a7ebdd3f37e30bf52b79dfaa6ea479dbf58d3aa1b70b5d2a8121" gracePeriod=10 Dec 03 13:18:52 crc kubenswrapper[4986]: I1203 13:18:52.013332 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-xcsvv"] Dec 03 13:18:52 crc kubenswrapper[4986]: E1203 13:18:52.013757 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0874137d-06da-450a-9e93-ad53257c5115" containerName="glance-db-sync" Dec 03 13:18:52 crc kubenswrapper[4986]: I1203 13:18:52.013772 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="0874137d-06da-450a-9e93-ad53257c5115" containerName="glance-db-sync" Dec 03 13:18:52 crc kubenswrapper[4986]: I1203 13:18:52.013959 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="0874137d-06da-450a-9e93-ad53257c5115" containerName="glance-db-sync" Dec 03 13:18:52 crc kubenswrapper[4986]: I1203 13:18:52.014885 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-xcsvv" Dec 03 13:18:52 crc kubenswrapper[4986]: I1203 13:18:52.040592 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-xcsvv"] Dec 03 13:18:52 crc kubenswrapper[4986]: I1203 13:18:52.097667 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/578cfbdf-da14-435d-a13f-e525d482a66d-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-xcsvv\" (UID: \"578cfbdf-da14-435d-a13f-e525d482a66d\") " pod="openstack/dnsmasq-dns-75c8ddd69c-xcsvv" Dec 03 13:18:52 crc kubenswrapper[4986]: I1203 13:18:52.097748 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/578cfbdf-da14-435d-a13f-e525d482a66d-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-xcsvv\" (UID: \"578cfbdf-da14-435d-a13f-e525d482a66d\") " pod="openstack/dnsmasq-dns-75c8ddd69c-xcsvv" Dec 03 13:18:52 crc kubenswrapper[4986]: I1203 13:18:52.097786 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/578cfbdf-da14-435d-a13f-e525d482a66d-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-xcsvv\" (UID: \"578cfbdf-da14-435d-a13f-e525d482a66d\") " pod="openstack/dnsmasq-dns-75c8ddd69c-xcsvv" Dec 03 13:18:52 crc kubenswrapper[4986]: I1203 13:18:52.097814 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/578cfbdf-da14-435d-a13f-e525d482a66d-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-xcsvv\" (UID: \"578cfbdf-da14-435d-a13f-e525d482a66d\") " pod="openstack/dnsmasq-dns-75c8ddd69c-xcsvv" Dec 03 13:18:52 crc kubenswrapper[4986]: I1203 13:18:52.097881 4986 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkwps\" (UniqueName: \"kubernetes.io/projected/578cfbdf-da14-435d-a13f-e525d482a66d-kube-api-access-gkwps\") pod \"dnsmasq-dns-75c8ddd69c-xcsvv\" (UID: \"578cfbdf-da14-435d-a13f-e525d482a66d\") " pod="openstack/dnsmasq-dns-75c8ddd69c-xcsvv" Dec 03 13:18:52 crc kubenswrapper[4986]: I1203 13:18:52.097936 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/578cfbdf-da14-435d-a13f-e525d482a66d-config\") pod \"dnsmasq-dns-75c8ddd69c-xcsvv\" (UID: \"578cfbdf-da14-435d-a13f-e525d482a66d\") " pod="openstack/dnsmasq-dns-75c8ddd69c-xcsvv" Dec 03 13:18:52 crc kubenswrapper[4986]: I1203 13:18:52.199417 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkwps\" (UniqueName: \"kubernetes.io/projected/578cfbdf-da14-435d-a13f-e525d482a66d-kube-api-access-gkwps\") pod \"dnsmasq-dns-75c8ddd69c-xcsvv\" (UID: \"578cfbdf-da14-435d-a13f-e525d482a66d\") " pod="openstack/dnsmasq-dns-75c8ddd69c-xcsvv" Dec 03 13:18:52 crc kubenswrapper[4986]: I1203 13:18:52.199492 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/578cfbdf-da14-435d-a13f-e525d482a66d-config\") pod \"dnsmasq-dns-75c8ddd69c-xcsvv\" (UID: \"578cfbdf-da14-435d-a13f-e525d482a66d\") " pod="openstack/dnsmasq-dns-75c8ddd69c-xcsvv" Dec 03 13:18:52 crc kubenswrapper[4986]: I1203 13:18:52.199580 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/578cfbdf-da14-435d-a13f-e525d482a66d-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-xcsvv\" (UID: \"578cfbdf-da14-435d-a13f-e525d482a66d\") " pod="openstack/dnsmasq-dns-75c8ddd69c-xcsvv" Dec 03 13:18:52 crc kubenswrapper[4986]: I1203 13:18:52.199625 4986 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/578cfbdf-da14-435d-a13f-e525d482a66d-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-xcsvv\" (UID: \"578cfbdf-da14-435d-a13f-e525d482a66d\") " pod="openstack/dnsmasq-dns-75c8ddd69c-xcsvv" Dec 03 13:18:52 crc kubenswrapper[4986]: I1203 13:18:52.199655 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/578cfbdf-da14-435d-a13f-e525d482a66d-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-xcsvv\" (UID: \"578cfbdf-da14-435d-a13f-e525d482a66d\") " pod="openstack/dnsmasq-dns-75c8ddd69c-xcsvv" Dec 03 13:18:52 crc kubenswrapper[4986]: I1203 13:18:52.199694 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/578cfbdf-da14-435d-a13f-e525d482a66d-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-xcsvv\" (UID: \"578cfbdf-da14-435d-a13f-e525d482a66d\") " pod="openstack/dnsmasq-dns-75c8ddd69c-xcsvv" Dec 03 13:18:52 crc kubenswrapper[4986]: I1203 13:18:52.200885 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/578cfbdf-da14-435d-a13f-e525d482a66d-config\") pod \"dnsmasq-dns-75c8ddd69c-xcsvv\" (UID: \"578cfbdf-da14-435d-a13f-e525d482a66d\") " pod="openstack/dnsmasq-dns-75c8ddd69c-xcsvv" Dec 03 13:18:52 crc kubenswrapper[4986]: I1203 13:18:52.201121 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/578cfbdf-da14-435d-a13f-e525d482a66d-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-xcsvv\" (UID: \"578cfbdf-da14-435d-a13f-e525d482a66d\") " pod="openstack/dnsmasq-dns-75c8ddd69c-xcsvv" Dec 03 13:18:52 crc kubenswrapper[4986]: I1203 13:18:52.201422 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/578cfbdf-da14-435d-a13f-e525d482a66d-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-xcsvv\" (UID: \"578cfbdf-da14-435d-a13f-e525d482a66d\") " pod="openstack/dnsmasq-dns-75c8ddd69c-xcsvv" Dec 03 13:18:52 crc kubenswrapper[4986]: I1203 13:18:52.201583 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/578cfbdf-da14-435d-a13f-e525d482a66d-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-xcsvv\" (UID: \"578cfbdf-da14-435d-a13f-e525d482a66d\") " pod="openstack/dnsmasq-dns-75c8ddd69c-xcsvv" Dec 03 13:18:52 crc kubenswrapper[4986]: I1203 13:18:52.201897 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/578cfbdf-da14-435d-a13f-e525d482a66d-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-xcsvv\" (UID: \"578cfbdf-da14-435d-a13f-e525d482a66d\") " pod="openstack/dnsmasq-dns-75c8ddd69c-xcsvv" Dec 03 13:18:52 crc kubenswrapper[4986]: I1203 13:18:52.208763 4986 generic.go:334] "Generic (PLEG): container finished" podID="d09ba1cc-e27c-4ea3-9f4a-2f7791afce95" containerID="dd7880f3d2c0a7ebdd3f37e30bf52b79dfaa6ea479dbf58d3aa1b70b5d2a8121" exitCode=0 Dec 03 13:18:52 crc kubenswrapper[4986]: I1203 13:18:52.208813 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-sxknq" event={"ID":"d09ba1cc-e27c-4ea3-9f4a-2f7791afce95","Type":"ContainerDied","Data":"dd7880f3d2c0a7ebdd3f37e30bf52b79dfaa6ea479dbf58d3aa1b70b5d2a8121"} Dec 03 13:18:52 crc kubenswrapper[4986]: I1203 13:18:52.227201 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkwps\" (UniqueName: \"kubernetes.io/projected/578cfbdf-da14-435d-a13f-e525d482a66d-kube-api-access-gkwps\") pod \"dnsmasq-dns-75c8ddd69c-xcsvv\" (UID: \"578cfbdf-da14-435d-a13f-e525d482a66d\") " pod="openstack/dnsmasq-dns-75c8ddd69c-xcsvv" Dec 03 13:18:52 crc kubenswrapper[4986]: I1203 13:18:52.360832 4986 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-xcsvv" Dec 03 13:18:52 crc kubenswrapper[4986]: I1203 13:18:52.989060 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 13:18:52 crc kubenswrapper[4986]: I1203 13:18:52.991461 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 03 13:18:53 crc kubenswrapper[4986]: I1203 13:18:53.008783 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 13:18:53 crc kubenswrapper[4986]: I1203 13:18:53.011450 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-kj54w" Dec 03 13:18:53 crc kubenswrapper[4986]: I1203 13:18:53.011632 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Dec 03 13:18:53 crc kubenswrapper[4986]: I1203 13:18:53.012016 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 03 13:18:53 crc kubenswrapper[4986]: I1203 13:18:53.104300 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 03 13:18:53 crc kubenswrapper[4986]: I1203 13:18:53.105878 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 03 13:18:53 crc kubenswrapper[4986]: I1203 13:18:53.108454 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 03 13:18:53 crc kubenswrapper[4986]: I1203 13:18:53.113799 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67af2ee4-2a0e-4095-b731-5123b5f54053-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"67af2ee4-2a0e-4095-b731-5123b5f54053\") " pod="openstack/glance-default-external-api-0" Dec 03 13:18:53 crc kubenswrapper[4986]: I1203 13:18:53.113849 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67af2ee4-2a0e-4095-b731-5123b5f54053-config-data\") pod \"glance-default-external-api-0\" (UID: \"67af2ee4-2a0e-4095-b731-5123b5f54053\") " pod="openstack/glance-default-external-api-0" Dec 03 13:18:53 crc kubenswrapper[4986]: I1203 13:18:53.113879 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67af2ee4-2a0e-4095-b731-5123b5f54053-scripts\") pod \"glance-default-external-api-0\" (UID: \"67af2ee4-2a0e-4095-b731-5123b5f54053\") " pod="openstack/glance-default-external-api-0" Dec 03 13:18:53 crc kubenswrapper[4986]: I1203 13:18:53.113927 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"67af2ee4-2a0e-4095-b731-5123b5f54053\") " pod="openstack/glance-default-external-api-0" Dec 03 13:18:53 crc kubenswrapper[4986]: I1203 13:18:53.113957 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/67af2ee4-2a0e-4095-b731-5123b5f54053-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"67af2ee4-2a0e-4095-b731-5123b5f54053\") " pod="openstack/glance-default-external-api-0" Dec 03 13:18:53 crc kubenswrapper[4986]: I1203 13:18:53.114018 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67af2ee4-2a0e-4095-b731-5123b5f54053-logs\") pod \"glance-default-external-api-0\" (UID: \"67af2ee4-2a0e-4095-b731-5123b5f54053\") " pod="openstack/glance-default-external-api-0" Dec 03 13:18:53 crc kubenswrapper[4986]: I1203 13:18:53.114051 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr2mb\" (UniqueName: \"kubernetes.io/projected/67af2ee4-2a0e-4095-b731-5123b5f54053-kube-api-access-pr2mb\") pod \"glance-default-external-api-0\" (UID: \"67af2ee4-2a0e-4095-b731-5123b5f54053\") " pod="openstack/glance-default-external-api-0" Dec 03 13:18:53 crc kubenswrapper[4986]: I1203 13:18:53.115187 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 03 13:18:53 crc kubenswrapper[4986]: I1203 13:18:53.215367 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67af2ee4-2a0e-4095-b731-5123b5f54053-logs\") pod \"glance-default-external-api-0\" (UID: \"67af2ee4-2a0e-4095-b731-5123b5f54053\") " pod="openstack/glance-default-external-api-0" Dec 03 13:18:53 crc kubenswrapper[4986]: I1203 13:18:53.215415 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"82935ead-8451-43d6-bd7a-c0444aadc886\") " pod="openstack/glance-default-internal-api-0" Dec 03 13:18:53 
crc kubenswrapper[4986]: I1203 13:18:53.215434 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82935ead-8451-43d6-bd7a-c0444aadc886-scripts\") pod \"glance-default-internal-api-0\" (UID: \"82935ead-8451-43d6-bd7a-c0444aadc886\") " pod="openstack/glance-default-internal-api-0" Dec 03 13:18:53 crc kubenswrapper[4986]: I1203 13:18:53.215469 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82935ead-8451-43d6-bd7a-c0444aadc886-logs\") pod \"glance-default-internal-api-0\" (UID: \"82935ead-8451-43d6-bd7a-c0444aadc886\") " pod="openstack/glance-default-internal-api-0" Dec 03 13:18:53 crc kubenswrapper[4986]: I1203 13:18:53.215491 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr2mb\" (UniqueName: \"kubernetes.io/projected/67af2ee4-2a0e-4095-b731-5123b5f54053-kube-api-access-pr2mb\") pod \"glance-default-external-api-0\" (UID: \"67af2ee4-2a0e-4095-b731-5123b5f54053\") " pod="openstack/glance-default-external-api-0" Dec 03 13:18:53 crc kubenswrapper[4986]: I1203 13:18:53.215511 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67af2ee4-2a0e-4095-b731-5123b5f54053-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"67af2ee4-2a0e-4095-b731-5123b5f54053\") " pod="openstack/glance-default-external-api-0" Dec 03 13:18:53 crc kubenswrapper[4986]: I1203 13:18:53.215533 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82935ead-8451-43d6-bd7a-c0444aadc886-config-data\") pod \"glance-default-internal-api-0\" (UID: \"82935ead-8451-43d6-bd7a-c0444aadc886\") " pod="openstack/glance-default-internal-api-0" Dec 03 13:18:53 crc 
kubenswrapper[4986]: I1203 13:18:53.215712 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67af2ee4-2a0e-4095-b731-5123b5f54053-config-data\") pod \"glance-default-external-api-0\" (UID: \"67af2ee4-2a0e-4095-b731-5123b5f54053\") " pod="openstack/glance-default-external-api-0" Dec 03 13:18:53 crc kubenswrapper[4986]: I1203 13:18:53.215758 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/82935ead-8451-43d6-bd7a-c0444aadc886-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"82935ead-8451-43d6-bd7a-c0444aadc886\") " pod="openstack/glance-default-internal-api-0" Dec 03 13:18:53 crc kubenswrapper[4986]: I1203 13:18:53.215787 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82935ead-8451-43d6-bd7a-c0444aadc886-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"82935ead-8451-43d6-bd7a-c0444aadc886\") " pod="openstack/glance-default-internal-api-0" Dec 03 13:18:53 crc kubenswrapper[4986]: I1203 13:18:53.215834 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67af2ee4-2a0e-4095-b731-5123b5f54053-scripts\") pod \"glance-default-external-api-0\" (UID: \"67af2ee4-2a0e-4095-b731-5123b5f54053\") " pod="openstack/glance-default-external-api-0" Dec 03 13:18:53 crc kubenswrapper[4986]: I1203 13:18:53.215880 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67af2ee4-2a0e-4095-b731-5123b5f54053-logs\") pod \"glance-default-external-api-0\" (UID: \"67af2ee4-2a0e-4095-b731-5123b5f54053\") " pod="openstack/glance-default-external-api-0" Dec 03 13:18:53 crc kubenswrapper[4986]: I1203 13:18:53.216071 4986 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"67af2ee4-2a0e-4095-b731-5123b5f54053\") " pod="openstack/glance-default-external-api-0" Dec 03 13:18:53 crc kubenswrapper[4986]: I1203 13:18:53.216195 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/67af2ee4-2a0e-4095-b731-5123b5f54053-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"67af2ee4-2a0e-4095-b731-5123b5f54053\") " pod="openstack/glance-default-external-api-0" Dec 03 13:18:53 crc kubenswrapper[4986]: I1203 13:18:53.216353 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2f4c\" (UniqueName: \"kubernetes.io/projected/82935ead-8451-43d6-bd7a-c0444aadc886-kube-api-access-j2f4c\") pod \"glance-default-internal-api-0\" (UID: \"82935ead-8451-43d6-bd7a-c0444aadc886\") " pod="openstack/glance-default-internal-api-0" Dec 03 13:18:53 crc kubenswrapper[4986]: I1203 13:18:53.216394 4986 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"67af2ee4-2a0e-4095-b731-5123b5f54053\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Dec 03 13:18:53 crc kubenswrapper[4986]: I1203 13:18:53.216619 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/67af2ee4-2a0e-4095-b731-5123b5f54053-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"67af2ee4-2a0e-4095-b731-5123b5f54053\") " pod="openstack/glance-default-external-api-0" Dec 03 13:18:53 crc kubenswrapper[4986]: I1203 13:18:53.227795 4986 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67af2ee4-2a0e-4095-b731-5123b5f54053-scripts\") pod \"glance-default-external-api-0\" (UID: \"67af2ee4-2a0e-4095-b731-5123b5f54053\") " pod="openstack/glance-default-external-api-0" Dec 03 13:18:53 crc kubenswrapper[4986]: I1203 13:18:53.228078 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67af2ee4-2a0e-4095-b731-5123b5f54053-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"67af2ee4-2a0e-4095-b731-5123b5f54053\") " pod="openstack/glance-default-external-api-0" Dec 03 13:18:53 crc kubenswrapper[4986]: I1203 13:18:53.230455 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67af2ee4-2a0e-4095-b731-5123b5f54053-config-data\") pod \"glance-default-external-api-0\" (UID: \"67af2ee4-2a0e-4095-b731-5123b5f54053\") " pod="openstack/glance-default-external-api-0" Dec 03 13:18:53 crc kubenswrapper[4986]: I1203 13:18:53.232709 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr2mb\" (UniqueName: \"kubernetes.io/projected/67af2ee4-2a0e-4095-b731-5123b5f54053-kube-api-access-pr2mb\") pod \"glance-default-external-api-0\" (UID: \"67af2ee4-2a0e-4095-b731-5123b5f54053\") " pod="openstack/glance-default-external-api-0" Dec 03 13:18:53 crc kubenswrapper[4986]: I1203 13:18:53.243573 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"67af2ee4-2a0e-4095-b731-5123b5f54053\") " pod="openstack/glance-default-external-api-0" Dec 03 13:18:53 crc kubenswrapper[4986]: I1203 13:18:53.311835 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 03 13:18:53 crc kubenswrapper[4986]: I1203 13:18:53.317831 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2f4c\" (UniqueName: \"kubernetes.io/projected/82935ead-8451-43d6-bd7a-c0444aadc886-kube-api-access-j2f4c\") pod \"glance-default-internal-api-0\" (UID: \"82935ead-8451-43d6-bd7a-c0444aadc886\") " pod="openstack/glance-default-internal-api-0" Dec 03 13:18:53 crc kubenswrapper[4986]: I1203 13:18:53.317928 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"82935ead-8451-43d6-bd7a-c0444aadc886\") " pod="openstack/glance-default-internal-api-0" Dec 03 13:18:53 crc kubenswrapper[4986]: I1203 13:18:53.317981 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82935ead-8451-43d6-bd7a-c0444aadc886-scripts\") pod \"glance-default-internal-api-0\" (UID: \"82935ead-8451-43d6-bd7a-c0444aadc886\") " pod="openstack/glance-default-internal-api-0" Dec 03 13:18:53 crc kubenswrapper[4986]: I1203 13:18:53.318027 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82935ead-8451-43d6-bd7a-c0444aadc886-logs\") pod \"glance-default-internal-api-0\" (UID: \"82935ead-8451-43d6-bd7a-c0444aadc886\") " pod="openstack/glance-default-internal-api-0" Dec 03 13:18:53 crc kubenswrapper[4986]: I1203 13:18:53.318067 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82935ead-8451-43d6-bd7a-c0444aadc886-config-data\") pod \"glance-default-internal-api-0\" (UID: \"82935ead-8451-43d6-bd7a-c0444aadc886\") " pod="openstack/glance-default-internal-api-0" Dec 03 13:18:53 crc 
kubenswrapper[4986]: I1203 13:18:53.318094 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82935ead-8451-43d6-bd7a-c0444aadc886-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"82935ead-8451-43d6-bd7a-c0444aadc886\") " pod="openstack/glance-default-internal-api-0" Dec 03 13:18:53 crc kubenswrapper[4986]: I1203 13:18:53.318119 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/82935ead-8451-43d6-bd7a-c0444aadc886-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"82935ead-8451-43d6-bd7a-c0444aadc886\") " pod="openstack/glance-default-internal-api-0" Dec 03 13:18:53 crc kubenswrapper[4986]: I1203 13:18:53.318134 4986 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"82935ead-8451-43d6-bd7a-c0444aadc886\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Dec 03 13:18:53 crc kubenswrapper[4986]: I1203 13:18:53.318549 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82935ead-8451-43d6-bd7a-c0444aadc886-logs\") pod \"glance-default-internal-api-0\" (UID: \"82935ead-8451-43d6-bd7a-c0444aadc886\") " pod="openstack/glance-default-internal-api-0" Dec 03 13:18:53 crc kubenswrapper[4986]: I1203 13:18:53.318711 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/82935ead-8451-43d6-bd7a-c0444aadc886-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"82935ead-8451-43d6-bd7a-c0444aadc886\") " pod="openstack/glance-default-internal-api-0" Dec 03 13:18:53 crc kubenswrapper[4986]: I1203 13:18:53.332649 4986 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82935ead-8451-43d6-bd7a-c0444aadc886-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"82935ead-8451-43d6-bd7a-c0444aadc886\") " pod="openstack/glance-default-internal-api-0" Dec 03 13:18:53 crc kubenswrapper[4986]: I1203 13:18:53.338019 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82935ead-8451-43d6-bd7a-c0444aadc886-scripts\") pod \"glance-default-internal-api-0\" (UID: \"82935ead-8451-43d6-bd7a-c0444aadc886\") " pod="openstack/glance-default-internal-api-0" Dec 03 13:18:53 crc kubenswrapper[4986]: I1203 13:18:53.340332 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82935ead-8451-43d6-bd7a-c0444aadc886-config-data\") pod \"glance-default-internal-api-0\" (UID: \"82935ead-8451-43d6-bd7a-c0444aadc886\") " pod="openstack/glance-default-internal-api-0" Dec 03 13:18:53 crc kubenswrapper[4986]: I1203 13:18:53.350957 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2f4c\" (UniqueName: \"kubernetes.io/projected/82935ead-8451-43d6-bd7a-c0444aadc886-kube-api-access-j2f4c\") pod \"glance-default-internal-api-0\" (UID: \"82935ead-8451-43d6-bd7a-c0444aadc886\") " pod="openstack/glance-default-internal-api-0" Dec 03 13:18:53 crc kubenswrapper[4986]: I1203 13:18:53.445560 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"82935ead-8451-43d6-bd7a-c0444aadc886\") " pod="openstack/glance-default-internal-api-0" Dec 03 13:18:53 crc kubenswrapper[4986]: I1203 13:18:53.554088 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7bc8976f54-hn8pf" Dec 03 13:18:53 crc 
kubenswrapper[4986]: I1203 13:18:53.723119 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 03 13:18:54 crc kubenswrapper[4986]: I1203 13:18:54.117133 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76fcf4b695-sxknq" Dec 03 13:18:54 crc kubenswrapper[4986]: I1203 13:18:54.142966 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d09ba1cc-e27c-4ea3-9f4a-2f7791afce95-dns-svc\") pod \"d09ba1cc-e27c-4ea3-9f4a-2f7791afce95\" (UID: \"d09ba1cc-e27c-4ea3-9f4a-2f7791afce95\") " Dec 03 13:18:54 crc kubenswrapper[4986]: I1203 13:18:54.143056 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d09ba1cc-e27c-4ea3-9f4a-2f7791afce95-config\") pod \"d09ba1cc-e27c-4ea3-9f4a-2f7791afce95\" (UID: \"d09ba1cc-e27c-4ea3-9f4a-2f7791afce95\") " Dec 03 13:18:54 crc kubenswrapper[4986]: I1203 13:18:54.143124 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d09ba1cc-e27c-4ea3-9f4a-2f7791afce95-dns-swift-storage-0\") pod \"d09ba1cc-e27c-4ea3-9f4a-2f7791afce95\" (UID: \"d09ba1cc-e27c-4ea3-9f4a-2f7791afce95\") " Dec 03 13:18:54 crc kubenswrapper[4986]: I1203 13:18:54.143174 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d09ba1cc-e27c-4ea3-9f4a-2f7791afce95-ovsdbserver-sb\") pod \"d09ba1cc-e27c-4ea3-9f4a-2f7791afce95\" (UID: \"d09ba1cc-e27c-4ea3-9f4a-2f7791afce95\") " Dec 03 13:18:54 crc kubenswrapper[4986]: I1203 13:18:54.143202 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/d09ba1cc-e27c-4ea3-9f4a-2f7791afce95-ovsdbserver-nb\") pod \"d09ba1cc-e27c-4ea3-9f4a-2f7791afce95\" (UID: \"d09ba1cc-e27c-4ea3-9f4a-2f7791afce95\") " Dec 03 13:18:54 crc kubenswrapper[4986]: I1203 13:18:54.143398 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jb5c4\" (UniqueName: \"kubernetes.io/projected/d09ba1cc-e27c-4ea3-9f4a-2f7791afce95-kube-api-access-jb5c4\") pod \"d09ba1cc-e27c-4ea3-9f4a-2f7791afce95\" (UID: \"d09ba1cc-e27c-4ea3-9f4a-2f7791afce95\") " Dec 03 13:18:54 crc kubenswrapper[4986]: I1203 13:18:54.175462 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d09ba1cc-e27c-4ea3-9f4a-2f7791afce95-kube-api-access-jb5c4" (OuterVolumeSpecName: "kube-api-access-jb5c4") pod "d09ba1cc-e27c-4ea3-9f4a-2f7791afce95" (UID: "d09ba1cc-e27c-4ea3-9f4a-2f7791afce95"). InnerVolumeSpecName "kube-api-access-jb5c4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:18:54 crc kubenswrapper[4986]: I1203 13:18:54.208140 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7bc8976f54-hn8pf" Dec 03 13:18:54 crc kubenswrapper[4986]: I1203 13:18:54.261512 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jb5c4\" (UniqueName: \"kubernetes.io/projected/d09ba1cc-e27c-4ea3-9f4a-2f7791afce95-kube-api-access-jb5c4\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:54 crc kubenswrapper[4986]: I1203 13:18:54.271759 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d09ba1cc-e27c-4ea3-9f4a-2f7791afce95-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d09ba1cc-e27c-4ea3-9f4a-2f7791afce95" (UID: "d09ba1cc-e27c-4ea3-9f4a-2f7791afce95"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:18:54 crc kubenswrapper[4986]: I1203 13:18:54.281048 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d09ba1cc-e27c-4ea3-9f4a-2f7791afce95-config" (OuterVolumeSpecName: "config") pod "d09ba1cc-e27c-4ea3-9f4a-2f7791afce95" (UID: "d09ba1cc-e27c-4ea3-9f4a-2f7791afce95"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:18:54 crc kubenswrapper[4986]: I1203 13:18:54.287489 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d09ba1cc-e27c-4ea3-9f4a-2f7791afce95-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d09ba1cc-e27c-4ea3-9f4a-2f7791afce95" (UID: "d09ba1cc-e27c-4ea3-9f4a-2f7791afce95"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:18:54 crc kubenswrapper[4986]: I1203 13:18:54.296111 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-sxknq" event={"ID":"d09ba1cc-e27c-4ea3-9f4a-2f7791afce95","Type":"ContainerDied","Data":"9fe313453051a4a52ad6520df7954c1f9c273f1440f54886bb07673f3e5478a2"} Dec 03 13:18:54 crc kubenswrapper[4986]: I1203 13:18:54.296241 4986 scope.go:117] "RemoveContainer" containerID="dd7880f3d2c0a7ebdd3f37e30bf52b79dfaa6ea479dbf58d3aa1b70b5d2a8121" Dec 03 13:18:54 crc kubenswrapper[4986]: I1203 13:18:54.298018 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76fcf4b695-sxknq" Dec 03 13:18:54 crc kubenswrapper[4986]: I1203 13:18:54.319433 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d09ba1cc-e27c-4ea3-9f4a-2f7791afce95-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d09ba1cc-e27c-4ea3-9f4a-2f7791afce95" (UID: "d09ba1cc-e27c-4ea3-9f4a-2f7791afce95"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:18:54 crc kubenswrapper[4986]: I1203 13:18:54.327939 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="143eb6df-5711-4616-baaa-42417113bfed" containerName="ceilometer-central-agent" containerID="cri-o://31623ea22a4e33dc46e24291324cbd81da55ddc262f3a20d8b8d61d022e9f667" gracePeriod=30 Dec 03 13:18:54 crc kubenswrapper[4986]: I1203 13:18:54.328189 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"143eb6df-5711-4616-baaa-42417113bfed","Type":"ContainerStarted","Data":"66dd85225308b34c6a81db82d34abc5ae62cc79acc8eb6bf19ae153e4b9dbef9"} Dec 03 13:18:54 crc kubenswrapper[4986]: I1203 13:18:54.328227 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 03 13:18:54 crc kubenswrapper[4986]: I1203 13:18:54.328460 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="143eb6df-5711-4616-baaa-42417113bfed" containerName="proxy-httpd" containerID="cri-o://66dd85225308b34c6a81db82d34abc5ae62cc79acc8eb6bf19ae153e4b9dbef9" gracePeriod=30 Dec 03 13:18:54 crc kubenswrapper[4986]: I1203 13:18:54.328501 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="143eb6df-5711-4616-baaa-42417113bfed" containerName="sg-core" containerID="cri-o://a1a8f1bb0109a39a22c7d1184ef9832ec739c4597a7672dc07759640a1d91ad2" gracePeriod=30 Dec 03 13:18:54 crc kubenswrapper[4986]: I1203 13:18:54.328530 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="143eb6df-5711-4616-baaa-42417113bfed" containerName="ceilometer-notification-agent" containerID="cri-o://8fd2e1e494f36abbefe9886a1586a47aa16686c32a62d8f2e95942c2e0f8f466" gracePeriod=30 Dec 03 13:18:54 crc kubenswrapper[4986]: I1203 13:18:54.362835 4986 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d09ba1cc-e27c-4ea3-9f4a-2f7791afce95-config\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:54 crc kubenswrapper[4986]: I1203 13:18:54.362873 4986 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d09ba1cc-e27c-4ea3-9f4a-2f7791afce95-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:54 crc kubenswrapper[4986]: I1203 13:18:54.362884 4986 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d09ba1cc-e27c-4ea3-9f4a-2f7791afce95-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:54 crc kubenswrapper[4986]: I1203 13:18:54.362892 4986 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d09ba1cc-e27c-4ea3-9f4a-2f7791afce95-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:54 crc kubenswrapper[4986]: I1203 13:18:54.388377 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.487736042 podStartE2EDuration="55.388349005s" podCreationTimestamp="2025-12-03 13:17:59 +0000 UTC" firstStartedPulling="2025-12-03 13:18:00.87985993 +0000 UTC m=+1340.346291121" lastFinishedPulling="2025-12-03 13:18:53.780472903 +0000 UTC m=+1393.246904084" observedRunningTime="2025-12-03 13:18:54.365845038 +0000 UTC m=+1393.832276239" watchObservedRunningTime="2025-12-03 13:18:54.388349005 +0000 UTC m=+1393.854780196" Dec 03 13:18:54 crc kubenswrapper[4986]: I1203 13:18:54.398840 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d09ba1cc-e27c-4ea3-9f4a-2f7791afce95-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d09ba1cc-e27c-4ea3-9f4a-2f7791afce95" (UID: "d09ba1cc-e27c-4ea3-9f4a-2f7791afce95"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:18:54 crc kubenswrapper[4986]: I1203 13:18:54.457462 4986 scope.go:117] "RemoveContainer" containerID="21c9c3e6ec0cad740667c94517148149003615b9acfbe08287bea8a0bb9806fd" Dec 03 13:18:54 crc kubenswrapper[4986]: I1203 13:18:54.465734 4986 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d09ba1cc-e27c-4ea3-9f4a-2f7791afce95-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:54 crc kubenswrapper[4986]: I1203 13:18:54.597026 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 13:18:54 crc kubenswrapper[4986]: I1203 13:18:54.706135 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-sxknq"] Dec 03 13:18:54 crc kubenswrapper[4986]: I1203 13:18:54.736908 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-sxknq"] Dec 03 13:18:54 crc kubenswrapper[4986]: I1203 13:18:54.791221 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-xcsvv"] Dec 03 13:18:54 crc kubenswrapper[4986]: I1203 13:18:54.999816 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d09ba1cc-e27c-4ea3-9f4a-2f7791afce95" path="/var/lib/kubelet/pods/d09ba1cc-e27c-4ea3-9f4a-2f7791afce95/volumes" Dec 03 13:18:55 crc kubenswrapper[4986]: I1203 13:18:55.000593 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 03 13:18:55 crc kubenswrapper[4986]: I1203 13:18:55.402015 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"82935ead-8451-43d6-bd7a-c0444aadc886","Type":"ContainerStarted","Data":"64a1e60965009eddb1462d64c09194b75700246f92b5ed4e127e163d5777d9be"} Dec 03 13:18:55 crc kubenswrapper[4986]: I1203 13:18:55.427049 4986 generic.go:334] "Generic (PLEG): container finished" 
podID="578cfbdf-da14-435d-a13f-e525d482a66d" containerID="c72b696c32d9adda237c39f64e841e2c05b2a3c118ed19e6afbff6ef78ecc88c" exitCode=0 Dec 03 13:18:55 crc kubenswrapper[4986]: I1203 13:18:55.427156 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-xcsvv" event={"ID":"578cfbdf-da14-435d-a13f-e525d482a66d","Type":"ContainerDied","Data":"c72b696c32d9adda237c39f64e841e2c05b2a3c118ed19e6afbff6ef78ecc88c"} Dec 03 13:18:55 crc kubenswrapper[4986]: I1203 13:18:55.427187 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-xcsvv" event={"ID":"578cfbdf-da14-435d-a13f-e525d482a66d","Type":"ContainerStarted","Data":"f9951143d0e42f80cf52b9d384d51012b1dec395057f65c53858d2e7f8aabce0"} Dec 03 13:18:55 crc kubenswrapper[4986]: I1203 13:18:55.453000 4986 generic.go:334] "Generic (PLEG): container finished" podID="7b8ed42a-89ce-4098-9489-5291e678bf18" containerID="18886cff3842efc33ba6818a9946242f28ff95bfae0ee8564fed6be9b265c47f" exitCode=0 Dec 03 13:18:55 crc kubenswrapper[4986]: I1203 13:18:55.453140 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qpv6c" event={"ID":"7b8ed42a-89ce-4098-9489-5291e678bf18","Type":"ContainerDied","Data":"18886cff3842efc33ba6818a9946242f28ff95bfae0ee8564fed6be9b265c47f"} Dec 03 13:18:55 crc kubenswrapper[4986]: I1203 13:18:55.471608 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"67af2ee4-2a0e-4095-b731-5123b5f54053","Type":"ContainerStarted","Data":"9851663107e2242d56bcfa33dd02069dae3700cdb8f99baddaceed3796083c95"} Dec 03 13:18:55 crc kubenswrapper[4986]: I1203 13:18:55.492476 4986 generic.go:334] "Generic (PLEG): container finished" podID="143eb6df-5711-4616-baaa-42417113bfed" containerID="66dd85225308b34c6a81db82d34abc5ae62cc79acc8eb6bf19ae153e4b9dbef9" exitCode=0 Dec 03 13:18:55 crc kubenswrapper[4986]: I1203 13:18:55.492509 4986 generic.go:334] "Generic (PLEG): 
container finished" podID="143eb6df-5711-4616-baaa-42417113bfed" containerID="a1a8f1bb0109a39a22c7d1184ef9832ec739c4597a7672dc07759640a1d91ad2" exitCode=2 Dec 03 13:18:55 crc kubenswrapper[4986]: I1203 13:18:55.492524 4986 generic.go:334] "Generic (PLEG): container finished" podID="143eb6df-5711-4616-baaa-42417113bfed" containerID="31623ea22a4e33dc46e24291324cbd81da55ddc262f3a20d8b8d61d022e9f667" exitCode=0 Dec 03 13:18:55 crc kubenswrapper[4986]: I1203 13:18:55.492545 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"143eb6df-5711-4616-baaa-42417113bfed","Type":"ContainerDied","Data":"66dd85225308b34c6a81db82d34abc5ae62cc79acc8eb6bf19ae153e4b9dbef9"} Dec 03 13:18:55 crc kubenswrapper[4986]: I1203 13:18:55.492571 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"143eb6df-5711-4616-baaa-42417113bfed","Type":"ContainerDied","Data":"a1a8f1bb0109a39a22c7d1184ef9832ec739c4597a7672dc07759640a1d91ad2"} Dec 03 13:18:55 crc kubenswrapper[4986]: I1203 13:18:55.492581 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"143eb6df-5711-4616-baaa-42417113bfed","Type":"ContainerDied","Data":"31623ea22a4e33dc46e24291324cbd81da55ddc262f3a20d8b8d61d022e9f667"} Dec 03 13:18:55 crc kubenswrapper[4986]: I1203 13:18:55.706716 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 13:18:55 crc kubenswrapper[4986]: I1203 13:18:55.782910 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 03 13:18:56 crc kubenswrapper[4986]: I1203 13:18:56.512538 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"82935ead-8451-43d6-bd7a-c0444aadc886","Type":"ContainerStarted","Data":"9474eb41c1f71ecec7edb3764be46ed6eeb981c0ddc4034e9c74ba1875199c8a"} Dec 03 13:18:56 crc kubenswrapper[4986]: I1203 
13:18:56.517084 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-xcsvv" event={"ID":"578cfbdf-da14-435d-a13f-e525d482a66d","Type":"ContainerStarted","Data":"7f54dd37064d434913a38b90d2ce79ca46e7d951907cd62e1d7c9b3fe3067a7f"} Dec 03 13:18:56 crc kubenswrapper[4986]: I1203 13:18:56.518302 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75c8ddd69c-xcsvv" Dec 03 13:18:56 crc kubenswrapper[4986]: I1203 13:18:56.531676 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"67af2ee4-2a0e-4095-b731-5123b5f54053","Type":"ContainerStarted","Data":"46a78bb19739ed1234d6044ae7c2279ef1e7c249ac32d33541d8b0de53690c02"} Dec 03 13:18:56 crc kubenswrapper[4986]: I1203 13:18:56.541900 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75c8ddd69c-xcsvv" podStartSLOduration=5.541886466 podStartE2EDuration="5.541886466s" podCreationTimestamp="2025-12-03 13:18:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:18:56.539671975 +0000 UTC m=+1396.006103166" watchObservedRunningTime="2025-12-03 13:18:56.541886466 +0000 UTC m=+1396.008317657" Dec 03 13:18:56 crc kubenswrapper[4986]: I1203 13:18:56.832931 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5744ccfcbb-rcmx5" Dec 03 13:18:57 crc kubenswrapper[4986]: I1203 13:18:57.037030 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-qpv6c" Dec 03 13:18:57 crc kubenswrapper[4986]: I1203 13:18:57.172359 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b8ed42a-89ce-4098-9489-5291e678bf18-scripts\") pod \"7b8ed42a-89ce-4098-9489-5291e678bf18\" (UID: \"7b8ed42a-89ce-4098-9489-5291e678bf18\") " Dec 03 13:18:57 crc kubenswrapper[4986]: I1203 13:18:57.172509 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nxn8\" (UniqueName: \"kubernetes.io/projected/7b8ed42a-89ce-4098-9489-5291e678bf18-kube-api-access-7nxn8\") pod \"7b8ed42a-89ce-4098-9489-5291e678bf18\" (UID: \"7b8ed42a-89ce-4098-9489-5291e678bf18\") " Dec 03 13:18:57 crc kubenswrapper[4986]: I1203 13:18:57.172590 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7b8ed42a-89ce-4098-9489-5291e678bf18-etc-machine-id\") pod \"7b8ed42a-89ce-4098-9489-5291e678bf18\" (UID: \"7b8ed42a-89ce-4098-9489-5291e678bf18\") " Dec 03 13:18:57 crc kubenswrapper[4986]: I1203 13:18:57.172618 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b8ed42a-89ce-4098-9489-5291e678bf18-combined-ca-bundle\") pod \"7b8ed42a-89ce-4098-9489-5291e678bf18\" (UID: \"7b8ed42a-89ce-4098-9489-5291e678bf18\") " Dec 03 13:18:57 crc kubenswrapper[4986]: I1203 13:18:57.172688 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b8ed42a-89ce-4098-9489-5291e678bf18-config-data\") pod \"7b8ed42a-89ce-4098-9489-5291e678bf18\" (UID: \"7b8ed42a-89ce-4098-9489-5291e678bf18\") " Dec 03 13:18:57 crc kubenswrapper[4986]: I1203 13:18:57.172740 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" 
(UniqueName: \"kubernetes.io/secret/7b8ed42a-89ce-4098-9489-5291e678bf18-db-sync-config-data\") pod \"7b8ed42a-89ce-4098-9489-5291e678bf18\" (UID: \"7b8ed42a-89ce-4098-9489-5291e678bf18\") " Dec 03 13:18:57 crc kubenswrapper[4986]: I1203 13:18:57.173342 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b8ed42a-89ce-4098-9489-5291e678bf18-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "7b8ed42a-89ce-4098-9489-5291e678bf18" (UID: "7b8ed42a-89ce-4098-9489-5291e678bf18"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 13:18:57 crc kubenswrapper[4986]: I1203 13:18:57.181248 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b8ed42a-89ce-4098-9489-5291e678bf18-scripts" (OuterVolumeSpecName: "scripts") pod "7b8ed42a-89ce-4098-9489-5291e678bf18" (UID: "7b8ed42a-89ce-4098-9489-5291e678bf18"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:18:57 crc kubenswrapper[4986]: I1203 13:18:57.183707 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b8ed42a-89ce-4098-9489-5291e678bf18-kube-api-access-7nxn8" (OuterVolumeSpecName: "kube-api-access-7nxn8") pod "7b8ed42a-89ce-4098-9489-5291e678bf18" (UID: "7b8ed42a-89ce-4098-9489-5291e678bf18"). InnerVolumeSpecName "kube-api-access-7nxn8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:18:57 crc kubenswrapper[4986]: I1203 13:18:57.183949 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5744ccfcbb-rcmx5" Dec 03 13:18:57 crc kubenswrapper[4986]: I1203 13:18:57.194595 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b8ed42a-89ce-4098-9489-5291e678bf18-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "7b8ed42a-89ce-4098-9489-5291e678bf18" (UID: "7b8ed42a-89ce-4098-9489-5291e678bf18"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:18:57 crc kubenswrapper[4986]: I1203 13:18:57.228347 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b8ed42a-89ce-4098-9489-5291e678bf18-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7b8ed42a-89ce-4098-9489-5291e678bf18" (UID: "7b8ed42a-89ce-4098-9489-5291e678bf18"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:18:57 crc kubenswrapper[4986]: I1203 13:18:57.252712 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b8ed42a-89ce-4098-9489-5291e678bf18-config-data" (OuterVolumeSpecName: "config-data") pod "7b8ed42a-89ce-4098-9489-5291e678bf18" (UID: "7b8ed42a-89ce-4098-9489-5291e678bf18"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:18:57 crc kubenswrapper[4986]: I1203 13:18:57.255468 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7bc8976f54-hn8pf"] Dec 03 13:18:57 crc kubenswrapper[4986]: I1203 13:18:57.255730 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7bc8976f54-hn8pf" podUID="4d708ae2-3c0a-4134-97fe-36270231010f" containerName="barbican-api-log" containerID="cri-o://e221479e09edabe1d4433296404302474b808fd5df11d6fedd4a03e7a8e8e042" gracePeriod=30 Dec 03 13:18:57 crc kubenswrapper[4986]: I1203 13:18:57.255827 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7bc8976f54-hn8pf" podUID="4d708ae2-3c0a-4134-97fe-36270231010f" containerName="barbican-api" containerID="cri-o://639d14405922b129b66585e80277451a8daf2c9acad37a002013359435e61394" gracePeriod=30 Dec 03 13:18:57 crc kubenswrapper[4986]: I1203 13:18:57.266797 4986 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-7bc8976f54-hn8pf" podUID="4d708ae2-3c0a-4134-97fe-36270231010f" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.152:9311/healthcheck\": EOF" Dec 03 13:18:57 crc kubenswrapper[4986]: I1203 13:18:57.269367 4986 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-7bc8976f54-hn8pf" podUID="4d708ae2-3c0a-4134-97fe-36270231010f" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.152:9311/healthcheck\": EOF" Dec 03 13:18:57 crc kubenswrapper[4986]: I1203 13:18:57.275690 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7nxn8\" (UniqueName: \"kubernetes.io/projected/7b8ed42a-89ce-4098-9489-5291e678bf18-kube-api-access-7nxn8\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:57 crc kubenswrapper[4986]: I1203 13:18:57.275725 4986 reconciler_common.go:293] "Volume detached for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7b8ed42a-89ce-4098-9489-5291e678bf18-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:57 crc kubenswrapper[4986]: I1203 13:18:57.275738 4986 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b8ed42a-89ce-4098-9489-5291e678bf18-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:57 crc kubenswrapper[4986]: I1203 13:18:57.275749 4986 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b8ed42a-89ce-4098-9489-5291e678bf18-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:57 crc kubenswrapper[4986]: I1203 13:18:57.275761 4986 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7b8ed42a-89ce-4098-9489-5291e678bf18-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:57 crc kubenswrapper[4986]: I1203 13:18:57.275770 4986 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b8ed42a-89ce-4098-9489-5291e678bf18-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:57 crc kubenswrapper[4986]: I1203 13:18:57.595940 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qpv6c" event={"ID":"7b8ed42a-89ce-4098-9489-5291e678bf18","Type":"ContainerDied","Data":"9749150b08d5677247a9ad250d112e717b19b6a22d41a13419398fce583c11ff"} Dec 03 13:18:57 crc kubenswrapper[4986]: I1203 13:18:57.596236 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9749150b08d5677247a9ad250d112e717b19b6a22d41a13419398fce583c11ff" Dec 03 13:18:57 crc kubenswrapper[4986]: I1203 13:18:57.596311 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-qpv6c" Dec 03 13:18:57 crc kubenswrapper[4986]: I1203 13:18:57.628488 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"67af2ee4-2a0e-4095-b731-5123b5f54053","Type":"ContainerStarted","Data":"70630f8b5819312cadb40cbf9f86e473df92e83ccb4812a4521b3f309227da07"} Dec 03 13:18:57 crc kubenswrapper[4986]: I1203 13:18:57.628922 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="67af2ee4-2a0e-4095-b731-5123b5f54053" containerName="glance-log" containerID="cri-o://46a78bb19739ed1234d6044ae7c2279ef1e7c249ac32d33541d8b0de53690c02" gracePeriod=30 Dec 03 13:18:57 crc kubenswrapper[4986]: I1203 13:18:57.629011 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="67af2ee4-2a0e-4095-b731-5123b5f54053" containerName="glance-httpd" containerID="cri-o://70630f8b5819312cadb40cbf9f86e473df92e83ccb4812a4521b3f309227da07" gracePeriod=30 Dec 03 13:18:57 crc kubenswrapper[4986]: I1203 13:18:57.649717 4986 generic.go:334] "Generic (PLEG): container finished" podID="4d708ae2-3c0a-4134-97fe-36270231010f" containerID="e221479e09edabe1d4433296404302474b808fd5df11d6fedd4a03e7a8e8e042" exitCode=143 Dec 03 13:18:57 crc kubenswrapper[4986]: I1203 13:18:57.649815 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7bc8976f54-hn8pf" event={"ID":"4d708ae2-3c0a-4134-97fe-36270231010f","Type":"ContainerDied","Data":"e221479e09edabe1d4433296404302474b808fd5df11d6fedd4a03e7a8e8e042"} Dec 03 13:18:57 crc kubenswrapper[4986]: I1203 13:18:57.658941 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"82935ead-8451-43d6-bd7a-c0444aadc886","Type":"ContainerStarted","Data":"eac9e098543c89e6f66d111eb8925b1bbaf1d91b0d2c46856ab77df08fce9a7a"} Dec 03 
13:18:57 crc kubenswrapper[4986]: I1203 13:18:57.659136 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="82935ead-8451-43d6-bd7a-c0444aadc886" containerName="glance-log" containerID="cri-o://9474eb41c1f71ecec7edb3764be46ed6eeb981c0ddc4034e9c74ba1875199c8a" gracePeriod=30 Dec 03 13:18:57 crc kubenswrapper[4986]: I1203 13:18:57.659326 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="82935ead-8451-43d6-bd7a-c0444aadc886" containerName="glance-httpd" containerID="cri-o://eac9e098543c89e6f66d111eb8925b1bbaf1d91b0d2c46856ab77df08fce9a7a" gracePeriod=30 Dec 03 13:18:57 crc kubenswrapper[4986]: I1203 13:18:57.718553 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.718533075 podStartE2EDuration="6.718533075s" podCreationTimestamp="2025-12-03 13:18:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:18:57.673642214 +0000 UTC m=+1397.140073405" watchObservedRunningTime="2025-12-03 13:18:57.718533075 +0000 UTC m=+1397.184964276" Dec 03 13:18:57 crc kubenswrapper[4986]: I1203 13:18:57.721671 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.72165627 podStartE2EDuration="5.72165627s" podCreationTimestamp="2025-12-03 13:18:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:18:57.711682701 +0000 UTC m=+1397.178113892" watchObservedRunningTime="2025-12-03 13:18:57.72165627 +0000 UTC m=+1397.188087461" Dec 03 13:18:57 crc kubenswrapper[4986]: I1203 13:18:57.879689 4986 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/cinder-scheduler-0"] Dec 03 13:18:57 crc kubenswrapper[4986]: E1203 13:18:57.880173 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d09ba1cc-e27c-4ea3-9f4a-2f7791afce95" containerName="init" Dec 03 13:18:57 crc kubenswrapper[4986]: I1203 13:18:57.880199 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="d09ba1cc-e27c-4ea3-9f4a-2f7791afce95" containerName="init" Dec 03 13:18:57 crc kubenswrapper[4986]: E1203 13:18:57.880241 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d09ba1cc-e27c-4ea3-9f4a-2f7791afce95" containerName="dnsmasq-dns" Dec 03 13:18:57 crc kubenswrapper[4986]: I1203 13:18:57.880249 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="d09ba1cc-e27c-4ea3-9f4a-2f7791afce95" containerName="dnsmasq-dns" Dec 03 13:18:57 crc kubenswrapper[4986]: E1203 13:18:57.880267 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b8ed42a-89ce-4098-9489-5291e678bf18" containerName="cinder-db-sync" Dec 03 13:18:57 crc kubenswrapper[4986]: I1203 13:18:57.880274 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b8ed42a-89ce-4098-9489-5291e678bf18" containerName="cinder-db-sync" Dec 03 13:18:57 crc kubenswrapper[4986]: I1203 13:18:57.880508 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="d09ba1cc-e27c-4ea3-9f4a-2f7791afce95" containerName="dnsmasq-dns" Dec 03 13:18:57 crc kubenswrapper[4986]: I1203 13:18:57.880539 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b8ed42a-89ce-4098-9489-5291e678bf18" containerName="cinder-db-sync" Dec 03 13:18:57 crc kubenswrapper[4986]: I1203 13:18:57.881713 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 03 13:18:57 crc kubenswrapper[4986]: I1203 13:18:57.890114 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-trqh5" Dec 03 13:18:57 crc kubenswrapper[4986]: I1203 13:18:57.890165 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 03 13:18:57 crc kubenswrapper[4986]: I1203 13:18:57.890322 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 03 13:18:57 crc kubenswrapper[4986]: I1203 13:18:57.890445 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 03 13:18:57 crc kubenswrapper[4986]: I1203 13:18:57.919251 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 03 13:18:57 crc kubenswrapper[4986]: I1203 13:18:57.930125 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-xcsvv"] Dec 03 13:18:57 crc kubenswrapper[4986]: I1203 13:18:57.956633 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-xrr75"] Dec 03 13:18:57 crc kubenswrapper[4986]: I1203 13:18:57.958116 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-xrr75" Dec 03 13:18:57 crc kubenswrapper[4986]: I1203 13:18:57.995135 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-xrr75"] Dec 03 13:18:57 crc kubenswrapper[4986]: I1203 13:18:57.997667 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2ctx\" (UniqueName: \"kubernetes.io/projected/aec700a6-d4c5-4c24-8726-16dc792d5658-kube-api-access-r2ctx\") pod \"cinder-scheduler-0\" (UID: \"aec700a6-d4c5-4c24-8726-16dc792d5658\") " pod="openstack/cinder-scheduler-0" Dec 03 13:18:57 crc kubenswrapper[4986]: I1203 13:18:57.997712 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aec700a6-d4c5-4c24-8726-16dc792d5658-config-data\") pod \"cinder-scheduler-0\" (UID: \"aec700a6-d4c5-4c24-8726-16dc792d5658\") " pod="openstack/cinder-scheduler-0" Dec 03 13:18:57 crc kubenswrapper[4986]: I1203 13:18:57.997738 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aec700a6-d4c5-4c24-8726-16dc792d5658-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"aec700a6-d4c5-4c24-8726-16dc792d5658\") " pod="openstack/cinder-scheduler-0" Dec 03 13:18:57 crc kubenswrapper[4986]: I1203 13:18:57.997788 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aec700a6-d4c5-4c24-8726-16dc792d5658-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"aec700a6-d4c5-4c24-8726-16dc792d5658\") " pod="openstack/cinder-scheduler-0" Dec 03 13:18:57 crc kubenswrapper[4986]: I1203 13:18:57.997842 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/aec700a6-d4c5-4c24-8726-16dc792d5658-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"aec700a6-d4c5-4c24-8726-16dc792d5658\") " pod="openstack/cinder-scheduler-0" Dec 03 13:18:57 crc kubenswrapper[4986]: I1203 13:18:57.997857 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aec700a6-d4c5-4c24-8726-16dc792d5658-scripts\") pod \"cinder-scheduler-0\" (UID: \"aec700a6-d4c5-4c24-8726-16dc792d5658\") " pod="openstack/cinder-scheduler-0" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.099192 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2ctx\" (UniqueName: \"kubernetes.io/projected/aec700a6-d4c5-4c24-8726-16dc792d5658-kube-api-access-r2ctx\") pod \"cinder-scheduler-0\" (UID: \"aec700a6-d4c5-4c24-8726-16dc792d5658\") " pod="openstack/cinder-scheduler-0" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.099258 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aec700a6-d4c5-4c24-8726-16dc792d5658-config-data\") pod \"cinder-scheduler-0\" (UID: \"aec700a6-d4c5-4c24-8726-16dc792d5658\") " pod="openstack/cinder-scheduler-0" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.099714 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e622eeb7-7b89-4328-a775-bc363cb7ebd5-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-xrr75\" (UID: \"e622eeb7-7b89-4328-a775-bc363cb7ebd5\") " pod="openstack/dnsmasq-dns-5784cf869f-xrr75" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.099764 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrffk\" (UniqueName: 
\"kubernetes.io/projected/e622eeb7-7b89-4328-a775-bc363cb7ebd5-kube-api-access-jrffk\") pod \"dnsmasq-dns-5784cf869f-xrr75\" (UID: \"e622eeb7-7b89-4328-a775-bc363cb7ebd5\") " pod="openstack/dnsmasq-dns-5784cf869f-xrr75" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.099795 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aec700a6-d4c5-4c24-8726-16dc792d5658-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"aec700a6-d4c5-4c24-8726-16dc792d5658\") " pod="openstack/cinder-scheduler-0" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.099886 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e622eeb7-7b89-4328-a775-bc363cb7ebd5-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-xrr75\" (UID: \"e622eeb7-7b89-4328-a775-bc363cb7ebd5\") " pod="openstack/dnsmasq-dns-5784cf869f-xrr75" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.100634 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aec700a6-d4c5-4c24-8726-16dc792d5658-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"aec700a6-d4c5-4c24-8726-16dc792d5658\") " pod="openstack/cinder-scheduler-0" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.100709 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e622eeb7-7b89-4328-a775-bc363cb7ebd5-config\") pod \"dnsmasq-dns-5784cf869f-xrr75\" (UID: \"e622eeb7-7b89-4328-a775-bc363cb7ebd5\") " pod="openstack/dnsmasq-dns-5784cf869f-xrr75" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.100785 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/aec700a6-d4c5-4c24-8726-16dc792d5658-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"aec700a6-d4c5-4c24-8726-16dc792d5658\") " pod="openstack/cinder-scheduler-0" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.100819 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aec700a6-d4c5-4c24-8726-16dc792d5658-scripts\") pod \"cinder-scheduler-0\" (UID: \"aec700a6-d4c5-4c24-8726-16dc792d5658\") " pod="openstack/cinder-scheduler-0" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.100873 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e622eeb7-7b89-4328-a775-bc363cb7ebd5-dns-svc\") pod \"dnsmasq-dns-5784cf869f-xrr75\" (UID: \"e622eeb7-7b89-4328-a775-bc363cb7ebd5\") " pod="openstack/dnsmasq-dns-5784cf869f-xrr75" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.100940 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e622eeb7-7b89-4328-a775-bc363cb7ebd5-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-xrr75\" (UID: \"e622eeb7-7b89-4328-a775-bc363cb7ebd5\") " pod="openstack/dnsmasq-dns-5784cf869f-xrr75" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.102484 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aec700a6-d4c5-4c24-8726-16dc792d5658-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"aec700a6-d4c5-4c24-8726-16dc792d5658\") " pod="openstack/cinder-scheduler-0" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.117976 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aec700a6-d4c5-4c24-8726-16dc792d5658-scripts\") pod \"cinder-scheduler-0\" (UID: 
\"aec700a6-d4c5-4c24-8726-16dc792d5658\") " pod="openstack/cinder-scheduler-0" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.117986 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aec700a6-d4c5-4c24-8726-16dc792d5658-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"aec700a6-d4c5-4c24-8726-16dc792d5658\") " pod="openstack/cinder-scheduler-0" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.127041 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aec700a6-d4c5-4c24-8726-16dc792d5658-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"aec700a6-d4c5-4c24-8726-16dc792d5658\") " pod="openstack/cinder-scheduler-0" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.127400 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aec700a6-d4c5-4c24-8726-16dc792d5658-config-data\") pod \"cinder-scheduler-0\" (UID: \"aec700a6-d4c5-4c24-8726-16dc792d5658\") " pod="openstack/cinder-scheduler-0" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.134126 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2ctx\" (UniqueName: \"kubernetes.io/projected/aec700a6-d4c5-4c24-8726-16dc792d5658-kube-api-access-r2ctx\") pod \"cinder-scheduler-0\" (UID: \"aec700a6-d4c5-4c24-8726-16dc792d5658\") " pod="openstack/cinder-scheduler-0" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.195118 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.196922 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.202857 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e622eeb7-7b89-4328-a775-bc363cb7ebd5-dns-svc\") pod \"dnsmasq-dns-5784cf869f-xrr75\" (UID: \"e622eeb7-7b89-4328-a775-bc363cb7ebd5\") " pod="openstack/dnsmasq-dns-5784cf869f-xrr75" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.202916 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e622eeb7-7b89-4328-a775-bc363cb7ebd5-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-xrr75\" (UID: \"e622eeb7-7b89-4328-a775-bc363cb7ebd5\") " pod="openstack/dnsmasq-dns-5784cf869f-xrr75" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.202946 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e622eeb7-7b89-4328-a775-bc363cb7ebd5-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-xrr75\" (UID: \"e622eeb7-7b89-4328-a775-bc363cb7ebd5\") " pod="openstack/dnsmasq-dns-5784cf869f-xrr75" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.202964 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrffk\" (UniqueName: \"kubernetes.io/projected/e622eeb7-7b89-4328-a775-bc363cb7ebd5-kube-api-access-jrffk\") pod \"dnsmasq-dns-5784cf869f-xrr75\" (UID: \"e622eeb7-7b89-4328-a775-bc363cb7ebd5\") " pod="openstack/dnsmasq-dns-5784cf869f-xrr75" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.202994 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e622eeb7-7b89-4328-a775-bc363cb7ebd5-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-xrr75\" (UID: \"e622eeb7-7b89-4328-a775-bc363cb7ebd5\") " 
pod="openstack/dnsmasq-dns-5784cf869f-xrr75" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.203054 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e622eeb7-7b89-4328-a775-bc363cb7ebd5-config\") pod \"dnsmasq-dns-5784cf869f-xrr75\" (UID: \"e622eeb7-7b89-4328-a775-bc363cb7ebd5\") " pod="openstack/dnsmasq-dns-5784cf869f-xrr75" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.203956 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e622eeb7-7b89-4328-a775-bc363cb7ebd5-config\") pod \"dnsmasq-dns-5784cf869f-xrr75\" (UID: \"e622eeb7-7b89-4328-a775-bc363cb7ebd5\") " pod="openstack/dnsmasq-dns-5784cf869f-xrr75" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.204472 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e622eeb7-7b89-4328-a775-bc363cb7ebd5-dns-svc\") pod \"dnsmasq-dns-5784cf869f-xrr75\" (UID: \"e622eeb7-7b89-4328-a775-bc363cb7ebd5\") " pod="openstack/dnsmasq-dns-5784cf869f-xrr75" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.204927 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e622eeb7-7b89-4328-a775-bc363cb7ebd5-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-xrr75\" (UID: \"e622eeb7-7b89-4328-a775-bc363cb7ebd5\") " pod="openstack/dnsmasq-dns-5784cf869f-xrr75" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.204947 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.205549 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e622eeb7-7b89-4328-a775-bc363cb7ebd5-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-xrr75\" (UID: 
\"e622eeb7-7b89-4328-a775-bc363cb7ebd5\") " pod="openstack/dnsmasq-dns-5784cf869f-xrr75" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.206094 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e622eeb7-7b89-4328-a775-bc363cb7ebd5-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-xrr75\" (UID: \"e622eeb7-7b89-4328-a775-bc363cb7ebd5\") " pod="openstack/dnsmasq-dns-5784cf869f-xrr75" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.211659 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.228719 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.230635 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrffk\" (UniqueName: \"kubernetes.io/projected/e622eeb7-7b89-4328-a775-bc363cb7ebd5-kube-api-access-jrffk\") pod \"dnsmasq-dns-5784cf869f-xrr75\" (UID: \"e622eeb7-7b89-4328-a775-bc363cb7ebd5\") " pod="openstack/dnsmasq-dns-5784cf869f-xrr75" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.277032 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-xrr75" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.305041 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7f4d4dfb-2e4f-42b0-8760-01287415424c-config-data-custom\") pod \"cinder-api-0\" (UID: \"7f4d4dfb-2e4f-42b0-8760-01287415424c\") " pod="openstack/cinder-api-0" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.305182 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpxn9\" (UniqueName: \"kubernetes.io/projected/7f4d4dfb-2e4f-42b0-8760-01287415424c-kube-api-access-gpxn9\") pod \"cinder-api-0\" (UID: \"7f4d4dfb-2e4f-42b0-8760-01287415424c\") " pod="openstack/cinder-api-0" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.305250 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f4d4dfb-2e4f-42b0-8760-01287415424c-scripts\") pod \"cinder-api-0\" (UID: \"7f4d4dfb-2e4f-42b0-8760-01287415424c\") " pod="openstack/cinder-api-0" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.305275 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f4d4dfb-2e4f-42b0-8760-01287415424c-config-data\") pod \"cinder-api-0\" (UID: \"7f4d4dfb-2e4f-42b0-8760-01287415424c\") " pod="openstack/cinder-api-0" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.305344 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f4d4dfb-2e4f-42b0-8760-01287415424c-logs\") pod \"cinder-api-0\" (UID: \"7f4d4dfb-2e4f-42b0-8760-01287415424c\") " pod="openstack/cinder-api-0" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.305400 4986 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7f4d4dfb-2e4f-42b0-8760-01287415424c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7f4d4dfb-2e4f-42b0-8760-01287415424c\") " pod="openstack/cinder-api-0" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.305427 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f4d4dfb-2e4f-42b0-8760-01287415424c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7f4d4dfb-2e4f-42b0-8760-01287415424c\") " pod="openstack/cinder-api-0" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.406619 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f4d4dfb-2e4f-42b0-8760-01287415424c-scripts\") pod \"cinder-api-0\" (UID: \"7f4d4dfb-2e4f-42b0-8760-01287415424c\") " pod="openstack/cinder-api-0" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.406945 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f4d4dfb-2e4f-42b0-8760-01287415424c-config-data\") pod \"cinder-api-0\" (UID: \"7f4d4dfb-2e4f-42b0-8760-01287415424c\") " pod="openstack/cinder-api-0" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.407001 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f4d4dfb-2e4f-42b0-8760-01287415424c-logs\") pod \"cinder-api-0\" (UID: \"7f4d4dfb-2e4f-42b0-8760-01287415424c\") " pod="openstack/cinder-api-0" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.407036 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7f4d4dfb-2e4f-42b0-8760-01287415424c-etc-machine-id\") pod \"cinder-api-0\" (UID: 
\"7f4d4dfb-2e4f-42b0-8760-01287415424c\") " pod="openstack/cinder-api-0" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.407063 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f4d4dfb-2e4f-42b0-8760-01287415424c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7f4d4dfb-2e4f-42b0-8760-01287415424c\") " pod="openstack/cinder-api-0" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.407111 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7f4d4dfb-2e4f-42b0-8760-01287415424c-config-data-custom\") pod \"cinder-api-0\" (UID: \"7f4d4dfb-2e4f-42b0-8760-01287415424c\") " pod="openstack/cinder-api-0" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.407185 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpxn9\" (UniqueName: \"kubernetes.io/projected/7f4d4dfb-2e4f-42b0-8760-01287415424c-kube-api-access-gpxn9\") pod \"cinder-api-0\" (UID: \"7f4d4dfb-2e4f-42b0-8760-01287415424c\") " pod="openstack/cinder-api-0" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.408340 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7f4d4dfb-2e4f-42b0-8760-01287415424c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7f4d4dfb-2e4f-42b0-8760-01287415424c\") " pod="openstack/cinder-api-0" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.409584 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f4d4dfb-2e4f-42b0-8760-01287415424c-logs\") pod \"cinder-api-0\" (UID: \"7f4d4dfb-2e4f-42b0-8760-01287415424c\") " pod="openstack/cinder-api-0" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.415124 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f4d4dfb-2e4f-42b0-8760-01287415424c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7f4d4dfb-2e4f-42b0-8760-01287415424c\") " pod="openstack/cinder-api-0" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.415463 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f4d4dfb-2e4f-42b0-8760-01287415424c-scripts\") pod \"cinder-api-0\" (UID: \"7f4d4dfb-2e4f-42b0-8760-01287415424c\") " pod="openstack/cinder-api-0" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.417480 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7f4d4dfb-2e4f-42b0-8760-01287415424c-config-data-custom\") pod \"cinder-api-0\" (UID: \"7f4d4dfb-2e4f-42b0-8760-01287415424c\") " pod="openstack/cinder-api-0" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.437787 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpxn9\" (UniqueName: \"kubernetes.io/projected/7f4d4dfb-2e4f-42b0-8760-01287415424c-kube-api-access-gpxn9\") pod \"cinder-api-0\" (UID: \"7f4d4dfb-2e4f-42b0-8760-01287415424c\") " pod="openstack/cinder-api-0" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.438667 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f4d4dfb-2e4f-42b0-8760-01287415424c-config-data\") pod \"cinder-api-0\" (UID: \"7f4d4dfb-2e4f-42b0-8760-01287415424c\") " pod="openstack/cinder-api-0" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.546800 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.684273 4986 generic.go:334] "Generic (PLEG): container finished" podID="82935ead-8451-43d6-bd7a-c0444aadc886" containerID="eac9e098543c89e6f66d111eb8925b1bbaf1d91b0d2c46856ab77df08fce9a7a" exitCode=0 Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.684327 4986 generic.go:334] "Generic (PLEG): container finished" podID="82935ead-8451-43d6-bd7a-c0444aadc886" containerID="9474eb41c1f71ecec7edb3764be46ed6eeb981c0ddc4034e9c74ba1875199c8a" exitCode=143 Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.684385 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"82935ead-8451-43d6-bd7a-c0444aadc886","Type":"ContainerDied","Data":"eac9e098543c89e6f66d111eb8925b1bbaf1d91b0d2c46856ab77df08fce9a7a"} Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.684423 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"82935ead-8451-43d6-bd7a-c0444aadc886","Type":"ContainerDied","Data":"9474eb41c1f71ecec7edb3764be46ed6eeb981c0ddc4034e9c74ba1875199c8a"} Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.687711 4986 generic.go:334] "Generic (PLEG): container finished" podID="67af2ee4-2a0e-4095-b731-5123b5f54053" containerID="70630f8b5819312cadb40cbf9f86e473df92e83ccb4812a4521b3f309227da07" exitCode=0 Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.687737 4986 generic.go:334] "Generic (PLEG): container finished" podID="67af2ee4-2a0e-4095-b731-5123b5f54053" containerID="46a78bb19739ed1234d6044ae7c2279ef1e7c249ac32d33541d8b0de53690c02" exitCode=143 Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.687781 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"67af2ee4-2a0e-4095-b731-5123b5f54053","Type":"ContainerDied","Data":"70630f8b5819312cadb40cbf9f86e473df92e83ccb4812a4521b3f309227da07"} Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.687834 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"67af2ee4-2a0e-4095-b731-5123b5f54053","Type":"ContainerDied","Data":"46a78bb19739ed1234d6044ae7c2279ef1e7c249ac32d33541d8b0de53690c02"} Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.784558 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.922845 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"82935ead-8451-43d6-bd7a-c0444aadc886\" (UID: \"82935ead-8451-43d6-bd7a-c0444aadc886\") " Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.923840 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82935ead-8451-43d6-bd7a-c0444aadc886-logs\") pod \"82935ead-8451-43d6-bd7a-c0444aadc886\" (UID: \"82935ead-8451-43d6-bd7a-c0444aadc886\") " Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.924003 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82935ead-8451-43d6-bd7a-c0444aadc886-combined-ca-bundle\") pod \"82935ead-8451-43d6-bd7a-c0444aadc886\" (UID: \"82935ead-8451-43d6-bd7a-c0444aadc886\") " Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.924134 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82935ead-8451-43d6-bd7a-c0444aadc886-config-data\") pod \"82935ead-8451-43d6-bd7a-c0444aadc886\" (UID: 
\"82935ead-8451-43d6-bd7a-c0444aadc886\") " Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.924178 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2f4c\" (UniqueName: \"kubernetes.io/projected/82935ead-8451-43d6-bd7a-c0444aadc886-kube-api-access-j2f4c\") pod \"82935ead-8451-43d6-bd7a-c0444aadc886\" (UID: \"82935ead-8451-43d6-bd7a-c0444aadc886\") " Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.924249 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82935ead-8451-43d6-bd7a-c0444aadc886-scripts\") pod \"82935ead-8451-43d6-bd7a-c0444aadc886\" (UID: \"82935ead-8451-43d6-bd7a-c0444aadc886\") " Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.924311 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/82935ead-8451-43d6-bd7a-c0444aadc886-httpd-run\") pod \"82935ead-8451-43d6-bd7a-c0444aadc886\" (UID: \"82935ead-8451-43d6-bd7a-c0444aadc886\") " Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.924667 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82935ead-8451-43d6-bd7a-c0444aadc886-logs" (OuterVolumeSpecName: "logs") pod "82935ead-8451-43d6-bd7a-c0444aadc886" (UID: "82935ead-8451-43d6-bd7a-c0444aadc886"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.924719 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82935ead-8451-43d6-bd7a-c0444aadc886-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "82935ead-8451-43d6-bd7a-c0444aadc886" (UID: "82935ead-8451-43d6-bd7a-c0444aadc886"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.925094 4986 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82935ead-8451-43d6-bd7a-c0444aadc886-logs\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.925107 4986 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/82935ead-8451-43d6-bd7a-c0444aadc886-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.937472 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "82935ead-8451-43d6-bd7a-c0444aadc886" (UID: "82935ead-8451-43d6-bd7a-c0444aadc886"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.945371 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82935ead-8451-43d6-bd7a-c0444aadc886-kube-api-access-j2f4c" (OuterVolumeSpecName: "kube-api-access-j2f4c") pod "82935ead-8451-43d6-bd7a-c0444aadc886" (UID: "82935ead-8451-43d6-bd7a-c0444aadc886"). InnerVolumeSpecName "kube-api-access-j2f4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:18:58 crc kubenswrapper[4986]: I1203 13:18:58.945568 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82935ead-8451-43d6-bd7a-c0444aadc886-scripts" (OuterVolumeSpecName: "scripts") pod "82935ead-8451-43d6-bd7a-c0444aadc886" (UID: "82935ead-8451-43d6-bd7a-c0444aadc886"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.012650 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.013141 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82935ead-8451-43d6-bd7a-c0444aadc886-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "82935ead-8451-43d6-bd7a-c0444aadc886" (UID: "82935ead-8451-43d6-bd7a-c0444aadc886"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.030598 4986 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82935ead-8451-43d6-bd7a-c0444aadc886-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.030625 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2f4c\" (UniqueName: \"kubernetes.io/projected/82935ead-8451-43d6-bd7a-c0444aadc886-kube-api-access-j2f4c\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.030637 4986 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82935ead-8451-43d6-bd7a-c0444aadc886-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.030659 4986 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.101388 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.115912 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82935ead-8451-43d6-bd7a-c0444aadc886-config-data" (OuterVolumeSpecName: "config-data") pod "82935ead-8451-43d6-bd7a-c0444aadc886" (UID: "82935ead-8451-43d6-bd7a-c0444aadc886"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.134055 4986 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82935ead-8451-43d6-bd7a-c0444aadc886-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.137131 4986 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.179138 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-xrr75"] Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.235679 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67af2ee4-2a0e-4095-b731-5123b5f54053-config-data\") pod \"67af2ee4-2a0e-4095-b731-5123b5f54053\" (UID: \"67af2ee4-2a0e-4095-b731-5123b5f54053\") " Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.236174 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"67af2ee4-2a0e-4095-b731-5123b5f54053\" (UID: \"67af2ee4-2a0e-4095-b731-5123b5f54053\") " Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.236269 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/67af2ee4-2a0e-4095-b731-5123b5f54053-scripts\") pod \"67af2ee4-2a0e-4095-b731-5123b5f54053\" (UID: \"67af2ee4-2a0e-4095-b731-5123b5f54053\") " Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.236464 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pr2mb\" (UniqueName: \"kubernetes.io/projected/67af2ee4-2a0e-4095-b731-5123b5f54053-kube-api-access-pr2mb\") pod \"67af2ee4-2a0e-4095-b731-5123b5f54053\" (UID: \"67af2ee4-2a0e-4095-b731-5123b5f54053\") " Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.236583 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/67af2ee4-2a0e-4095-b731-5123b5f54053-httpd-run\") pod \"67af2ee4-2a0e-4095-b731-5123b5f54053\" (UID: \"67af2ee4-2a0e-4095-b731-5123b5f54053\") " Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.236693 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67af2ee4-2a0e-4095-b731-5123b5f54053-combined-ca-bundle\") pod \"67af2ee4-2a0e-4095-b731-5123b5f54053\" (UID: \"67af2ee4-2a0e-4095-b731-5123b5f54053\") " Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.236764 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67af2ee4-2a0e-4095-b731-5123b5f54053-logs\") pod \"67af2ee4-2a0e-4095-b731-5123b5f54053\" (UID: \"67af2ee4-2a0e-4095-b731-5123b5f54053\") " Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.237016 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67af2ee4-2a0e-4095-b731-5123b5f54053-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "67af2ee4-2a0e-4095-b731-5123b5f54053" (UID: "67af2ee4-2a0e-4095-b731-5123b5f54053"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.237163 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67af2ee4-2a0e-4095-b731-5123b5f54053-logs" (OuterVolumeSpecName: "logs") pod "67af2ee4-2a0e-4095-b731-5123b5f54053" (UID: "67af2ee4-2a0e-4095-b731-5123b5f54053"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.237298 4986 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.237368 4986 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/67af2ee4-2a0e-4095-b731-5123b5f54053-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.239557 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "67af2ee4-2a0e-4095-b731-5123b5f54053" (UID: "67af2ee4-2a0e-4095-b731-5123b5f54053"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.240769 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67af2ee4-2a0e-4095-b731-5123b5f54053-scripts" (OuterVolumeSpecName: "scripts") pod "67af2ee4-2a0e-4095-b731-5123b5f54053" (UID: "67af2ee4-2a0e-4095-b731-5123b5f54053"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.243863 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67af2ee4-2a0e-4095-b731-5123b5f54053-kube-api-access-pr2mb" (OuterVolumeSpecName: "kube-api-access-pr2mb") pod "67af2ee4-2a0e-4095-b731-5123b5f54053" (UID: "67af2ee4-2a0e-4095-b731-5123b5f54053"). InnerVolumeSpecName "kube-api-access-pr2mb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.291479 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67af2ee4-2a0e-4095-b731-5123b5f54053-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "67af2ee4-2a0e-4095-b731-5123b5f54053" (UID: "67af2ee4-2a0e-4095-b731-5123b5f54053"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.328912 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67af2ee4-2a0e-4095-b731-5123b5f54053-config-data" (OuterVolumeSpecName: "config-data") pod "67af2ee4-2a0e-4095-b731-5123b5f54053" (UID: "67af2ee4-2a0e-4095-b731-5123b5f54053"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.339480 4986 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67af2ee4-2a0e-4095-b731-5123b5f54053-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.339515 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pr2mb\" (UniqueName: \"kubernetes.io/projected/67af2ee4-2a0e-4095-b731-5123b5f54053-kube-api-access-pr2mb\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.339530 4986 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67af2ee4-2a0e-4095-b731-5123b5f54053-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.339541 4986 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67af2ee4-2a0e-4095-b731-5123b5f54053-logs\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.339552 4986 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67af2ee4-2a0e-4095-b731-5123b5f54053-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.339584 4986 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.389934 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.394743 4986 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" 
Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.441904 4986 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.689055 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.710209 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7f4d4dfb-2e4f-42b0-8760-01287415424c","Type":"ContainerStarted","Data":"7f5554334360aaea690d8db623d7d69c91b93cfc4490fb6ee985fdd879b81a3d"} Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.715579 4986 generic.go:334] "Generic (PLEG): container finished" podID="e622eeb7-7b89-4328-a775-bc363cb7ebd5" containerID="3b452d881bfbe7e6782935c827b149861070ea7f9b3cc398c0161668aa50c0f8" exitCode=0 Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.715652 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-xrr75" event={"ID":"e622eeb7-7b89-4328-a775-bc363cb7ebd5","Type":"ContainerDied","Data":"3b452d881bfbe7e6782935c827b149861070ea7f9b3cc398c0161668aa50c0f8"} Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.715677 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-xrr75" event={"ID":"e622eeb7-7b89-4328-a775-bc363cb7ebd5","Type":"ContainerStarted","Data":"b17c17ef05f26f6096d4c38db5284c6cef39cdd9a0bfb49e9c98dea2070f33b1"} Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.724585 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"aec700a6-d4c5-4c24-8726-16dc792d5658","Type":"ContainerStarted","Data":"f3fbe8fa122668c911f3927a6c787ae5482c0c9eab5469b3a441a00e13708810"} Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.726623 4986 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"67af2ee4-2a0e-4095-b731-5123b5f54053","Type":"ContainerDied","Data":"9851663107e2242d56bcfa33dd02069dae3700cdb8f99baddaceed3796083c95"} Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.726653 4986 scope.go:117] "RemoveContainer" containerID="70630f8b5819312cadb40cbf9f86e473df92e83ccb4812a4521b3f309227da07" Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.726744 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.774121 4986 generic.go:334] "Generic (PLEG): container finished" podID="143eb6df-5711-4616-baaa-42417113bfed" containerID="8fd2e1e494f36abbefe9886a1586a47aa16686c32a62d8f2e95942c2e0f8f466" exitCode=0 Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.774200 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"143eb6df-5711-4616-baaa-42417113bfed","Type":"ContainerDied","Data":"8fd2e1e494f36abbefe9886a1586a47aa16686c32a62d8f2e95942c2e0f8f466"} Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.774225 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"143eb6df-5711-4616-baaa-42417113bfed","Type":"ContainerDied","Data":"cb6685371160ee4e0c3ef5afc3cfc3b242614925197e07206ed0a0bfdf43ace3"} Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.774363 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.790474 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.790537 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"82935ead-8451-43d6-bd7a-c0444aadc886","Type":"ContainerDied","Data":"64a1e60965009eddb1462d64c09194b75700246f92b5ed4e127e163d5777d9be"} Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.790605 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75c8ddd69c-xcsvv" podUID="578cfbdf-da14-435d-a13f-e525d482a66d" containerName="dnsmasq-dns" containerID="cri-o://7f54dd37064d434913a38b90d2ce79ca46e7d951907cd62e1d7c9b3fe3067a7f" gracePeriod=10 Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.832390 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.852825 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/143eb6df-5711-4616-baaa-42417113bfed-config-data\") pod \"143eb6df-5711-4616-baaa-42417113bfed\" (UID: \"143eb6df-5711-4616-baaa-42417113bfed\") " Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.852884 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpz5g\" (UniqueName: \"kubernetes.io/projected/143eb6df-5711-4616-baaa-42417113bfed-kube-api-access-gpz5g\") pod \"143eb6df-5711-4616-baaa-42417113bfed\" (UID: \"143eb6df-5711-4616-baaa-42417113bfed\") " Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.852948 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/143eb6df-5711-4616-baaa-42417113bfed-run-httpd\") pod \"143eb6df-5711-4616-baaa-42417113bfed\" (UID: \"143eb6df-5711-4616-baaa-42417113bfed\") " Dec 03 13:18:59 crc 
kubenswrapper[4986]: I1203 13:18:59.853029 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/143eb6df-5711-4616-baaa-42417113bfed-combined-ca-bundle\") pod \"143eb6df-5711-4616-baaa-42417113bfed\" (UID: \"143eb6df-5711-4616-baaa-42417113bfed\") " Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.853089 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/143eb6df-5711-4616-baaa-42417113bfed-sg-core-conf-yaml\") pod \"143eb6df-5711-4616-baaa-42417113bfed\" (UID: \"143eb6df-5711-4616-baaa-42417113bfed\") " Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.853155 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/143eb6df-5711-4616-baaa-42417113bfed-log-httpd\") pod \"143eb6df-5711-4616-baaa-42417113bfed\" (UID: \"143eb6df-5711-4616-baaa-42417113bfed\") " Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.853172 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/143eb6df-5711-4616-baaa-42417113bfed-scripts\") pod \"143eb6df-5711-4616-baaa-42417113bfed\" (UID: \"143eb6df-5711-4616-baaa-42417113bfed\") " Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.860852 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/143eb6df-5711-4616-baaa-42417113bfed-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "143eb6df-5711-4616-baaa-42417113bfed" (UID: "143eb6df-5711-4616-baaa-42417113bfed"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.861135 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/143eb6df-5711-4616-baaa-42417113bfed-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "143eb6df-5711-4616-baaa-42417113bfed" (UID: "143eb6df-5711-4616-baaa-42417113bfed"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.870871 4986 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/143eb6df-5711-4616-baaa-42417113bfed-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.870908 4986 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/143eb6df-5711-4616-baaa-42417113bfed-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.877088 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/143eb6df-5711-4616-baaa-42417113bfed-kube-api-access-gpz5g" (OuterVolumeSpecName: "kube-api-access-gpz5g") pod "143eb6df-5711-4616-baaa-42417113bfed" (UID: "143eb6df-5711-4616-baaa-42417113bfed"). InnerVolumeSpecName "kube-api-access-gpz5g". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.880069 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.885155 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/143eb6df-5711-4616-baaa-42417113bfed-scripts" (OuterVolumeSpecName: "scripts") pod "143eb6df-5711-4616-baaa-42417113bfed" (UID: "143eb6df-5711-4616-baaa-42417113bfed"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.911472 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/143eb6df-5711-4616-baaa-42417113bfed-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "143eb6df-5711-4616-baaa-42417113bfed" (UID: "143eb6df-5711-4616-baaa-42417113bfed"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.930367 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 13:18:59 crc kubenswrapper[4986]: E1203 13:18:59.930764 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="143eb6df-5711-4616-baaa-42417113bfed" containerName="proxy-httpd" Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.930775 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="143eb6df-5711-4616-baaa-42417113bfed" containerName="proxy-httpd" Dec 03 13:18:59 crc kubenswrapper[4986]: E1203 13:18:59.930783 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67af2ee4-2a0e-4095-b731-5123b5f54053" containerName="glance-httpd" Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.930789 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="67af2ee4-2a0e-4095-b731-5123b5f54053" containerName="glance-httpd" Dec 03 13:18:59 crc kubenswrapper[4986]: E1203 13:18:59.930799 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="143eb6df-5711-4616-baaa-42417113bfed" containerName="sg-core" Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.930804 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="143eb6df-5711-4616-baaa-42417113bfed" containerName="sg-core" Dec 03 13:18:59 crc kubenswrapper[4986]: E1203 13:18:59.930810 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82935ead-8451-43d6-bd7a-c0444aadc886" 
containerName="glance-log" Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.930816 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="82935ead-8451-43d6-bd7a-c0444aadc886" containerName="glance-log" Dec 03 13:18:59 crc kubenswrapper[4986]: E1203 13:18:59.930839 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67af2ee4-2a0e-4095-b731-5123b5f54053" containerName="glance-log" Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.930844 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="67af2ee4-2a0e-4095-b731-5123b5f54053" containerName="glance-log" Dec 03 13:18:59 crc kubenswrapper[4986]: E1203 13:18:59.930861 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82935ead-8451-43d6-bd7a-c0444aadc886" containerName="glance-httpd" Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.930867 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="82935ead-8451-43d6-bd7a-c0444aadc886" containerName="glance-httpd" Dec 03 13:18:59 crc kubenswrapper[4986]: E1203 13:18:59.930875 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="143eb6df-5711-4616-baaa-42417113bfed" containerName="ceilometer-central-agent" Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.930881 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="143eb6df-5711-4616-baaa-42417113bfed" containerName="ceilometer-central-agent" Dec 03 13:18:59 crc kubenswrapper[4986]: E1203 13:18:59.930888 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="143eb6df-5711-4616-baaa-42417113bfed" containerName="ceilometer-notification-agent" Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.930894 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="143eb6df-5711-4616-baaa-42417113bfed" containerName="ceilometer-notification-agent" Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.931053 4986 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="143eb6df-5711-4616-baaa-42417113bfed" containerName="ceilometer-central-agent" Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.931065 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="82935ead-8451-43d6-bd7a-c0444aadc886" containerName="glance-log" Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.931074 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="82935ead-8451-43d6-bd7a-c0444aadc886" containerName="glance-httpd" Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.931084 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="67af2ee4-2a0e-4095-b731-5123b5f54053" containerName="glance-log" Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.931097 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="143eb6df-5711-4616-baaa-42417113bfed" containerName="ceilometer-notification-agent" Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.931106 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="143eb6df-5711-4616-baaa-42417113bfed" containerName="sg-core" Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.931118 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="143eb6df-5711-4616-baaa-42417113bfed" containerName="proxy-httpd" Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.931128 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="67af2ee4-2a0e-4095-b731-5123b5f54053" containerName="glance-httpd" Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.932041 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.947558 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.947894 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.948005 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.954968 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-kj54w" Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.977360 4986 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/143eb6df-5711-4616-baaa-42417113bfed-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.977393 4986 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/143eb6df-5711-4616-baaa-42417113bfed-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 13:18:59 crc kubenswrapper[4986]: I1203 13:18:59.977406 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpz5g\" (UniqueName: \"kubernetes.io/projected/143eb6df-5711-4616-baaa-42417113bfed-kube-api-access-gpz5g\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.045434 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.080217 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d4dae8bb-1f5e-446c-9820-769e77414c04-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d4dae8bb-1f5e-446c-9820-769e77414c04\") " pod="openstack/glance-default-external-api-0" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.080269 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4dae8bb-1f5e-446c-9820-769e77414c04-logs\") pod \"glance-default-external-api-0\" (UID: \"d4dae8bb-1f5e-446c-9820-769e77414c04\") " pod="openstack/glance-default-external-api-0" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.080315 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4dae8bb-1f5e-446c-9820-769e77414c04-scripts\") pod \"glance-default-external-api-0\" (UID: \"d4dae8bb-1f5e-446c-9820-769e77414c04\") " pod="openstack/glance-default-external-api-0" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.080334 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4dae8bb-1f5e-446c-9820-769e77414c04-config-data\") pod \"glance-default-external-api-0\" (UID: \"d4dae8bb-1f5e-446c-9820-769e77414c04\") " pod="openstack/glance-default-external-api-0" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.080362 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4dae8bb-1f5e-446c-9820-769e77414c04-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d4dae8bb-1f5e-446c-9820-769e77414c04\") " pod="openstack/glance-default-external-api-0" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.080383 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mdff\" 
(UniqueName: \"kubernetes.io/projected/d4dae8bb-1f5e-446c-9820-769e77414c04-kube-api-access-2mdff\") pod \"glance-default-external-api-0\" (UID: \"d4dae8bb-1f5e-446c-9820-769e77414c04\") " pod="openstack/glance-default-external-api-0" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.080452 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d4dae8bb-1f5e-446c-9820-769e77414c04-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d4dae8bb-1f5e-446c-9820-769e77414c04\") " pod="openstack/glance-default-external-api-0" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.080532 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"d4dae8bb-1f5e-446c-9820-769e77414c04\") " pod="openstack/glance-default-external-api-0" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.118972 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.150679 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.165044 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.166986 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.175407 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.175704 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.179115 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.181907 4986 scope.go:117] "RemoveContainer" containerID="46a78bb19739ed1234d6044ae7c2279ef1e7c249ac32d33541d8b0de53690c02" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.182698 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d4dae8bb-1f5e-446c-9820-769e77414c04-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d4dae8bb-1f5e-446c-9820-769e77414c04\") " pod="openstack/glance-default-external-api-0" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.182778 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"d4dae8bb-1f5e-446c-9820-769e77414c04\") " pod="openstack/glance-default-external-api-0" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.182821 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4dae8bb-1f5e-446c-9820-769e77414c04-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d4dae8bb-1f5e-446c-9820-769e77414c04\") " pod="openstack/glance-default-external-api-0" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.182842 4986 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4dae8bb-1f5e-446c-9820-769e77414c04-logs\") pod \"glance-default-external-api-0\" (UID: \"d4dae8bb-1f5e-446c-9820-769e77414c04\") " pod="openstack/glance-default-external-api-0" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.182867 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4dae8bb-1f5e-446c-9820-769e77414c04-scripts\") pod \"glance-default-external-api-0\" (UID: \"d4dae8bb-1f5e-446c-9820-769e77414c04\") " pod="openstack/glance-default-external-api-0" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.182885 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4dae8bb-1f5e-446c-9820-769e77414c04-config-data\") pod \"glance-default-external-api-0\" (UID: \"d4dae8bb-1f5e-446c-9820-769e77414c04\") " pod="openstack/glance-default-external-api-0" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.182914 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4dae8bb-1f5e-446c-9820-769e77414c04-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d4dae8bb-1f5e-446c-9820-769e77414c04\") " pod="openstack/glance-default-external-api-0" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.182935 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mdff\" (UniqueName: \"kubernetes.io/projected/d4dae8bb-1f5e-446c-9820-769e77414c04-kube-api-access-2mdff\") pod \"glance-default-external-api-0\" (UID: \"d4dae8bb-1f5e-446c-9820-769e77414c04\") " pod="openstack/glance-default-external-api-0" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.183592 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d4dae8bb-1f5e-446c-9820-769e77414c04-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d4dae8bb-1f5e-446c-9820-769e77414c04\") " pod="openstack/glance-default-external-api-0" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.183937 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4dae8bb-1f5e-446c-9820-769e77414c04-logs\") pod \"glance-default-external-api-0\" (UID: \"d4dae8bb-1f5e-446c-9820-769e77414c04\") " pod="openstack/glance-default-external-api-0" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.184371 4986 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"d4dae8bb-1f5e-446c-9820-769e77414c04\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.194084 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4dae8bb-1f5e-446c-9820-769e77414c04-scripts\") pod \"glance-default-external-api-0\" (UID: \"d4dae8bb-1f5e-446c-9820-769e77414c04\") " pod="openstack/glance-default-external-api-0" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.196362 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4dae8bb-1f5e-446c-9820-769e77414c04-config-data\") pod \"glance-default-external-api-0\" (UID: \"d4dae8bb-1f5e-446c-9820-769e77414c04\") " pod="openstack/glance-default-external-api-0" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.197097 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4dae8bb-1f5e-446c-9820-769e77414c04-combined-ca-bundle\") 
pod \"glance-default-external-api-0\" (UID: \"d4dae8bb-1f5e-446c-9820-769e77414c04\") " pod="openstack/glance-default-external-api-0" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.207652 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4dae8bb-1f5e-446c-9820-769e77414c04-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d4dae8bb-1f5e-446c-9820-769e77414c04\") " pod="openstack/glance-default-external-api-0" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.215313 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mdff\" (UniqueName: \"kubernetes.io/projected/d4dae8bb-1f5e-446c-9820-769e77414c04-kube-api-access-2mdff\") pod \"glance-default-external-api-0\" (UID: \"d4dae8bb-1f5e-446c-9820-769e77414c04\") " pod="openstack/glance-default-external-api-0" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.246234 4986 scope.go:117] "RemoveContainer" containerID="66dd85225308b34c6a81db82d34abc5ae62cc79acc8eb6bf19ae153e4b9dbef9" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.248527 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"d4dae8bb-1f5e-446c-9820-769e77414c04\") " pod="openstack/glance-default-external-api-0" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.271452 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/143eb6df-5711-4616-baaa-42417113bfed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "143eb6df-5711-4616-baaa-42417113bfed" (UID: "143eb6df-5711-4616-baaa-42417113bfed"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.279821 4986 scope.go:117] "RemoveContainer" containerID="a1a8f1bb0109a39a22c7d1184ef9832ec739c4597a7672dc07759640a1d91ad2" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.284898 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5c981f3-a559-4641-aaae-2e90c5d0d543-logs\") pod \"glance-default-internal-api-0\" (UID: \"d5c981f3-a559-4641-aaae-2e90c5d0d543\") " pod="openstack/glance-default-internal-api-0" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.284967 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5c981f3-a559-4641-aaae-2e90c5d0d543-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d5c981f3-a559-4641-aaae-2e90c5d0d543\") " pod="openstack/glance-default-internal-api-0" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.285043 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d5c981f3-a559-4641-aaae-2e90c5d0d543-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d5c981f3-a559-4641-aaae-2e90c5d0d543\") " pod="openstack/glance-default-internal-api-0" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.285107 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5c981f3-a559-4641-aaae-2e90c5d0d543-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d5c981f3-a559-4641-aaae-2e90c5d0d543\") " pod="openstack/glance-default-internal-api-0" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.285147 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"d5c981f3-a559-4641-aaae-2e90c5d0d543\") " pod="openstack/glance-default-internal-api-0" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.285306 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5c981f3-a559-4641-aaae-2e90c5d0d543-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d5c981f3-a559-4641-aaae-2e90c5d0d543\") " pod="openstack/glance-default-internal-api-0" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.285396 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqtsg\" (UniqueName: \"kubernetes.io/projected/d5c981f3-a559-4641-aaae-2e90c5d0d543-kube-api-access-fqtsg\") pod \"glance-default-internal-api-0\" (UID: \"d5c981f3-a559-4641-aaae-2e90c5d0d543\") " pod="openstack/glance-default-internal-api-0" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.285443 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5c981f3-a559-4641-aaae-2e90c5d0d543-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d5c981f3-a559-4641-aaae-2e90c5d0d543\") " pod="openstack/glance-default-internal-api-0" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.285512 4986 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/143eb6df-5711-4616-baaa-42417113bfed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.313526 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/143eb6df-5711-4616-baaa-42417113bfed-config-data" (OuterVolumeSpecName: "config-data") pod 
"143eb6df-5711-4616-baaa-42417113bfed" (UID: "143eb6df-5711-4616-baaa-42417113bfed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.316795 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.340433 4986 scope.go:117] "RemoveContainer" containerID="8fd2e1e494f36abbefe9886a1586a47aa16686c32a62d8f2e95942c2e0f8f466" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.388224 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5c981f3-a559-4641-aaae-2e90c5d0d543-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d5c981f3-a559-4641-aaae-2e90c5d0d543\") " pod="openstack/glance-default-internal-api-0" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.388333 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqtsg\" (UniqueName: \"kubernetes.io/projected/d5c981f3-a559-4641-aaae-2e90c5d0d543-kube-api-access-fqtsg\") pod \"glance-default-internal-api-0\" (UID: \"d5c981f3-a559-4641-aaae-2e90c5d0d543\") " pod="openstack/glance-default-internal-api-0" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.388365 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5c981f3-a559-4641-aaae-2e90c5d0d543-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d5c981f3-a559-4641-aaae-2e90c5d0d543\") " pod="openstack/glance-default-internal-api-0" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.388393 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5c981f3-a559-4641-aaae-2e90c5d0d543-logs\") pod \"glance-default-internal-api-0\" (UID: 
\"d5c981f3-a559-4641-aaae-2e90c5d0d543\") " pod="openstack/glance-default-internal-api-0" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.388427 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5c981f3-a559-4641-aaae-2e90c5d0d543-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d5c981f3-a559-4641-aaae-2e90c5d0d543\") " pod="openstack/glance-default-internal-api-0" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.388480 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d5c981f3-a559-4641-aaae-2e90c5d0d543-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d5c981f3-a559-4641-aaae-2e90c5d0d543\") " pod="openstack/glance-default-internal-api-0" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.388528 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5c981f3-a559-4641-aaae-2e90c5d0d543-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d5c981f3-a559-4641-aaae-2e90c5d0d543\") " pod="openstack/glance-default-internal-api-0" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.388561 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"d5c981f3-a559-4641-aaae-2e90c5d0d543\") " pod="openstack/glance-default-internal-api-0" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.388658 4986 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/143eb6df-5711-4616-baaa-42417113bfed-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.388953 4986 operation_generator.go:580] "MountVolume.MountDevice 
succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"d5c981f3-a559-4641-aaae-2e90c5d0d543\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.391768 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5c981f3-a559-4641-aaae-2e90c5d0d543-logs\") pod \"glance-default-internal-api-0\" (UID: \"d5c981f3-a559-4641-aaae-2e90c5d0d543\") " pod="openstack/glance-default-internal-api-0" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.392002 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d5c981f3-a559-4641-aaae-2e90c5d0d543-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d5c981f3-a559-4641-aaae-2e90c5d0d543\") " pod="openstack/glance-default-internal-api-0" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.393987 4986 scope.go:117] "RemoveContainer" containerID="31623ea22a4e33dc46e24291324cbd81da55ddc262f3a20d8b8d61d022e9f667" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.394089 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5c981f3-a559-4641-aaae-2e90c5d0d543-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d5c981f3-a559-4641-aaae-2e90c5d0d543\") " pod="openstack/glance-default-internal-api-0" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.400009 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5c981f3-a559-4641-aaae-2e90c5d0d543-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d5c981f3-a559-4641-aaae-2e90c5d0d543\") " pod="openstack/glance-default-internal-api-0" Dec 03 13:19:00 crc kubenswrapper[4986]: 
I1203 13:19:00.400337 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5c981f3-a559-4641-aaae-2e90c5d0d543-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d5c981f3-a559-4641-aaae-2e90c5d0d543\") " pod="openstack/glance-default-internal-api-0" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.402632 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5c981f3-a559-4641-aaae-2e90c5d0d543-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d5c981f3-a559-4641-aaae-2e90c5d0d543\") " pod="openstack/glance-default-internal-api-0" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.412793 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqtsg\" (UniqueName: \"kubernetes.io/projected/d5c981f3-a559-4641-aaae-2e90c5d0d543-kube-api-access-fqtsg\") pod \"glance-default-internal-api-0\" (UID: \"d5c981f3-a559-4641-aaae-2e90c5d0d543\") " pod="openstack/glance-default-internal-api-0" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.435223 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"d5c981f3-a559-4641-aaae-2e90c5d0d543\") " pod="openstack/glance-default-internal-api-0" Dec 03 13:19:00 crc kubenswrapper[4986]: W1203 13:19:00.505049 4986 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d708ae2_3c0a_4134_97fe_36270231010f.slice/crio-conmon-e221479e09edabe1d4433296404302474b808fd5df11d6fedd4a03e7a8e8e042.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch 
/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d708ae2_3c0a_4134_97fe_36270231010f.slice/crio-conmon-e221479e09edabe1d4433296404302474b808fd5df11d6fedd4a03e7a8e8e042.scope: no such file or directory Dec 03 13:19:00 crc kubenswrapper[4986]: W1203 13:19:00.505175 4986 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d708ae2_3c0a_4134_97fe_36270231010f.slice/crio-e221479e09edabe1d4433296404302474b808fd5df11d6fedd4a03e7a8e8e042.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d708ae2_3c0a_4134_97fe_36270231010f.slice/crio-e221479e09edabe1d4433296404302474b808fd5df11d6fedd4a03e7a8e8e042.scope: no such file or directory Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.513402 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.685803 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.838521 4986 scope.go:117] "RemoveContainer" containerID="66dd85225308b34c6a81db82d34abc5ae62cc79acc8eb6bf19ae153e4b9dbef9" Dec 03 13:19:00 crc kubenswrapper[4986]: E1203 13:19:00.843644 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66dd85225308b34c6a81db82d34abc5ae62cc79acc8eb6bf19ae153e4b9dbef9\": container with ID starting with 66dd85225308b34c6a81db82d34abc5ae62cc79acc8eb6bf19ae153e4b9dbef9 not found: ID does not exist" containerID="66dd85225308b34c6a81db82d34abc5ae62cc79acc8eb6bf19ae153e4b9dbef9" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.843695 4986 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"66dd85225308b34c6a81db82d34abc5ae62cc79acc8eb6bf19ae153e4b9dbef9"} err="failed to get container status \"66dd85225308b34c6a81db82d34abc5ae62cc79acc8eb6bf19ae153e4b9dbef9\": rpc error: code = NotFound desc = could not find container \"66dd85225308b34c6a81db82d34abc5ae62cc79acc8eb6bf19ae153e4b9dbef9\": container with ID starting with 66dd85225308b34c6a81db82d34abc5ae62cc79acc8eb6bf19ae153e4b9dbef9 not found: ID does not exist" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.843725 4986 scope.go:117] "RemoveContainer" containerID="a1a8f1bb0109a39a22c7d1184ef9832ec739c4597a7672dc07759640a1d91ad2" Dec 03 13:19:00 crc kubenswrapper[4986]: E1203 13:19:00.853526 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1a8f1bb0109a39a22c7d1184ef9832ec739c4597a7672dc07759640a1d91ad2\": container with ID starting with a1a8f1bb0109a39a22c7d1184ef9832ec739c4597a7672dc07759640a1d91ad2 not found: ID does not exist" containerID="a1a8f1bb0109a39a22c7d1184ef9832ec739c4597a7672dc07759640a1d91ad2" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.853569 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1a8f1bb0109a39a22c7d1184ef9832ec739c4597a7672dc07759640a1d91ad2"} err="failed to get container status \"a1a8f1bb0109a39a22c7d1184ef9832ec739c4597a7672dc07759640a1d91ad2\": rpc error: code = NotFound desc = could not find container \"a1a8f1bb0109a39a22c7d1184ef9832ec739c4597a7672dc07759640a1d91ad2\": container with ID starting with a1a8f1bb0109a39a22c7d1184ef9832ec739c4597a7672dc07759640a1d91ad2 not found: ID does not exist" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.853595 4986 scope.go:117] "RemoveContainer" containerID="8fd2e1e494f36abbefe9886a1586a47aa16686c32a62d8f2e95942c2e0f8f466" Dec 03 13:19:00 crc kubenswrapper[4986]: E1203 13:19:00.854610 4986 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"8fd2e1e494f36abbefe9886a1586a47aa16686c32a62d8f2e95942c2e0f8f466\": container with ID starting with 8fd2e1e494f36abbefe9886a1586a47aa16686c32a62d8f2e95942c2e0f8f466 not found: ID does not exist" containerID="8fd2e1e494f36abbefe9886a1586a47aa16686c32a62d8f2e95942c2e0f8f466" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.854657 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fd2e1e494f36abbefe9886a1586a47aa16686c32a62d8f2e95942c2e0f8f466"} err="failed to get container status \"8fd2e1e494f36abbefe9886a1586a47aa16686c32a62d8f2e95942c2e0f8f466\": rpc error: code = NotFound desc = could not find container \"8fd2e1e494f36abbefe9886a1586a47aa16686c32a62d8f2e95942c2e0f8f466\": container with ID starting with 8fd2e1e494f36abbefe9886a1586a47aa16686c32a62d8f2e95942c2e0f8f466 not found: ID does not exist" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.854686 4986 scope.go:117] "RemoveContainer" containerID="31623ea22a4e33dc46e24291324cbd81da55ddc262f3a20d8b8d61d022e9f667" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.854936 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-xcsvv" Dec 03 13:19:00 crc kubenswrapper[4986]: E1203 13:19:00.856680 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31623ea22a4e33dc46e24291324cbd81da55ddc262f3a20d8b8d61d022e9f667\": container with ID starting with 31623ea22a4e33dc46e24291324cbd81da55ddc262f3a20d8b8d61d022e9f667 not found: ID does not exist" containerID="31623ea22a4e33dc46e24291324cbd81da55ddc262f3a20d8b8d61d022e9f667" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.856714 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31623ea22a4e33dc46e24291324cbd81da55ddc262f3a20d8b8d61d022e9f667"} err="failed to get container status \"31623ea22a4e33dc46e24291324cbd81da55ddc262f3a20d8b8d61d022e9f667\": rpc error: code = NotFound desc = could not find container \"31623ea22a4e33dc46e24291324cbd81da55ddc262f3a20d8b8d61d022e9f667\": container with ID starting with 31623ea22a4e33dc46e24291324cbd81da55ddc262f3a20d8b8d61d022e9f667 not found: ID does not exist" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.856739 4986 scope.go:117] "RemoveContainer" containerID="eac9e098543c89e6f66d111eb8925b1bbaf1d91b0d2c46856ab77df08fce9a7a" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.872592 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.883656 4986 generic.go:334] "Generic (PLEG): container finished" podID="dc9fa065-2e88-4181-ab5a-be64336bed7d" containerID="d9d2a9780ee6c796b5953539bd3cdbf854011c09938f122a7a2b048d2fe551c3" exitCode=137 Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.883687 4986 generic.go:334] "Generic (PLEG): container finished" podID="dc9fa065-2e88-4181-ab5a-be64336bed7d" containerID="1159886bb01a7d0dde71c42912ecd0c27e1490067409676edf00d3431f8fe430" exitCode=137 Dec 03 13:19:00 crc 
kubenswrapper[4986]: I1203 13:19:00.883768 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76c69b8dbf-kzlpf" event={"ID":"dc9fa065-2e88-4181-ab5a-be64336bed7d","Type":"ContainerDied","Data":"d9d2a9780ee6c796b5953539bd3cdbf854011c09938f122a7a2b048d2fe551c3"} Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.883802 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76c69b8dbf-kzlpf" event={"ID":"dc9fa065-2e88-4181-ab5a-be64336bed7d","Type":"ContainerDied","Data":"1159886bb01a7d0dde71c42912ecd0c27e1490067409676edf00d3431f8fe430"} Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.886052 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.899332 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 03 13:19:00 crc kubenswrapper[4986]: E1203 13:19:00.899680 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="578cfbdf-da14-435d-a13f-e525d482a66d" containerName="dnsmasq-dns" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.899692 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="578cfbdf-da14-435d-a13f-e525d482a66d" containerName="dnsmasq-dns" Dec 03 13:19:00 crc kubenswrapper[4986]: E1203 13:19:00.899711 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="578cfbdf-da14-435d-a13f-e525d482a66d" containerName="init" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.899717 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="578cfbdf-da14-435d-a13f-e525d482a66d" containerName="init" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.899883 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="578cfbdf-da14-435d-a13f-e525d482a66d" containerName="dnsmasq-dns" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.901489 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.906566 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/578cfbdf-da14-435d-a13f-e525d482a66d-ovsdbserver-sb\") pod \"578cfbdf-da14-435d-a13f-e525d482a66d\" (UID: \"578cfbdf-da14-435d-a13f-e525d482a66d\") " Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.906644 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/578cfbdf-da14-435d-a13f-e525d482a66d-ovsdbserver-nb\") pod \"578cfbdf-da14-435d-a13f-e525d482a66d\" (UID: \"578cfbdf-da14-435d-a13f-e525d482a66d\") " Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.906737 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/578cfbdf-da14-435d-a13f-e525d482a66d-config\") pod \"578cfbdf-da14-435d-a13f-e525d482a66d\" (UID: \"578cfbdf-da14-435d-a13f-e525d482a66d\") " Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.906762 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/578cfbdf-da14-435d-a13f-e525d482a66d-dns-swift-storage-0\") pod \"578cfbdf-da14-435d-a13f-e525d482a66d\" (UID: \"578cfbdf-da14-435d-a13f-e525d482a66d\") " Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.906860 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkwps\" (UniqueName: \"kubernetes.io/projected/578cfbdf-da14-435d-a13f-e525d482a66d-kube-api-access-gkwps\") pod \"578cfbdf-da14-435d-a13f-e525d482a66d\" (UID: \"578cfbdf-da14-435d-a13f-e525d482a66d\") " Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.906921 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/578cfbdf-da14-435d-a13f-e525d482a66d-dns-svc\") pod \"578cfbdf-da14-435d-a13f-e525d482a66d\" (UID: \"578cfbdf-da14-435d-a13f-e525d482a66d\") " Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.924776 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.925952 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.933016 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.947409 4986 generic.go:334] "Generic (PLEG): container finished" podID="578cfbdf-da14-435d-a13f-e525d482a66d" containerID="7f54dd37064d434913a38b90d2ce79ca46e7d951907cd62e1d7c9b3fe3067a7f" exitCode=0 Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.947521 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-xcsvv" Dec 03 13:19:00 crc kubenswrapper[4986]: I1203 13:19:00.967056 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/578cfbdf-da14-435d-a13f-e525d482a66d-kube-api-access-gkwps" (OuterVolumeSpecName: "kube-api-access-gkwps") pod "578cfbdf-da14-435d-a13f-e525d482a66d" (UID: "578cfbdf-da14-435d-a13f-e525d482a66d"). InnerVolumeSpecName "kube-api-access-gkwps". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.005892 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/578cfbdf-da14-435d-a13f-e525d482a66d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "578cfbdf-da14-435d-a13f-e525d482a66d" (UID: "578cfbdf-da14-435d-a13f-e525d482a66d"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.009309 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b482987-51ed-43dd-9099-c54d824588bf-log-httpd\") pod \"ceilometer-0\" (UID: \"0b482987-51ed-43dd-9099-c54d824588bf\") " pod="openstack/ceilometer-0" Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.009407 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7nmj\" (UniqueName: \"kubernetes.io/projected/0b482987-51ed-43dd-9099-c54d824588bf-kube-api-access-x7nmj\") pod \"ceilometer-0\" (UID: \"0b482987-51ed-43dd-9099-c54d824588bf\") " pod="openstack/ceilometer-0" Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.009461 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b482987-51ed-43dd-9099-c54d824588bf-config-data\") pod \"ceilometer-0\" (UID: \"0b482987-51ed-43dd-9099-c54d824588bf\") " pod="openstack/ceilometer-0" Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.009500 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0b482987-51ed-43dd-9099-c54d824588bf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0b482987-51ed-43dd-9099-c54d824588bf\") " pod="openstack/ceilometer-0" Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.009540 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b482987-51ed-43dd-9099-c54d824588bf-scripts\") pod \"ceilometer-0\" (UID: \"0b482987-51ed-43dd-9099-c54d824588bf\") " pod="openstack/ceilometer-0" Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.009561 4986 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b482987-51ed-43dd-9099-c54d824588bf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0b482987-51ed-43dd-9099-c54d824588bf\") " pod="openstack/ceilometer-0" Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.009596 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b482987-51ed-43dd-9099-c54d824588bf-run-httpd\") pod \"ceilometer-0\" (UID: \"0b482987-51ed-43dd-9099-c54d824588bf\") " pod="openstack/ceilometer-0" Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.009720 4986 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/578cfbdf-da14-435d-a13f-e525d482a66d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.009736 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkwps\" (UniqueName: \"kubernetes.io/projected/578cfbdf-da14-435d-a13f-e525d482a66d-kube-api-access-gkwps\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.041756 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="143eb6df-5711-4616-baaa-42417113bfed" path="/var/lib/kubelet/pods/143eb6df-5711-4616-baaa-42417113bfed/volumes" Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.042599 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67af2ee4-2a0e-4095-b731-5123b5f54053" path="/var/lib/kubelet/pods/67af2ee4-2a0e-4095-b731-5123b5f54053/volumes" Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.044109 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/578cfbdf-da14-435d-a13f-e525d482a66d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod 
"578cfbdf-da14-435d-a13f-e525d482a66d" (UID: "578cfbdf-da14-435d-a13f-e525d482a66d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.050055 4986 scope.go:117] "RemoveContainer" containerID="9474eb41c1f71ecec7edb3764be46ed6eeb981c0ddc4034e9c74ba1875199c8a" Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.065732 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82935ead-8451-43d6-bd7a-c0444aadc886" path="/var/lib/kubelet/pods/82935ead-8451-43d6-bd7a-c0444aadc886/volumes" Dec 03 13:19:01 crc kubenswrapper[4986]: E1203 13:19:01.077445 4986 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd09ba1cc_e27c_4ea3_9f4a_2f7791afce95.slice/crio-9fe313453051a4a52ad6520df7954c1f9c273f1440f54886bb07673f3e5478a2\": RecentStats: unable to find data in memory cache]" Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.078244 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/578cfbdf-da14-435d-a13f-e525d482a66d-config" (OuterVolumeSpecName: "config") pod "578cfbdf-da14-435d-a13f-e525d482a66d" (UID: "578cfbdf-da14-435d-a13f-e525d482a66d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.079887 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-xcsvv" event={"ID":"578cfbdf-da14-435d-a13f-e525d482a66d","Type":"ContainerDied","Data":"7f54dd37064d434913a38b90d2ce79ca46e7d951907cd62e1d7c9b3fe3067a7f"} Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.079932 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-xcsvv" event={"ID":"578cfbdf-da14-435d-a13f-e525d482a66d","Type":"ContainerDied","Data":"f9951143d0e42f80cf52b9d384d51012b1dec395057f65c53858d2e7f8aabce0"} Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.101737 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.101970 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/578cfbdf-da14-435d-a13f-e525d482a66d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "578cfbdf-da14-435d-a13f-e525d482a66d" (UID: "578cfbdf-da14-435d-a13f-e525d482a66d"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.113538 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b482987-51ed-43dd-9099-c54d824588bf-log-httpd\") pod \"ceilometer-0\" (UID: \"0b482987-51ed-43dd-9099-c54d824588bf\") " pod="openstack/ceilometer-0" Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.113620 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7nmj\" (UniqueName: \"kubernetes.io/projected/0b482987-51ed-43dd-9099-c54d824588bf-kube-api-access-x7nmj\") pod \"ceilometer-0\" (UID: \"0b482987-51ed-43dd-9099-c54d824588bf\") " pod="openstack/ceilometer-0" Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.113661 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b482987-51ed-43dd-9099-c54d824588bf-config-data\") pod \"ceilometer-0\" (UID: \"0b482987-51ed-43dd-9099-c54d824588bf\") " pod="openstack/ceilometer-0" Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.113688 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0b482987-51ed-43dd-9099-c54d824588bf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0b482987-51ed-43dd-9099-c54d824588bf\") " pod="openstack/ceilometer-0" Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.113713 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b482987-51ed-43dd-9099-c54d824588bf-scripts\") pod \"ceilometer-0\" (UID: \"0b482987-51ed-43dd-9099-c54d824588bf\") " pod="openstack/ceilometer-0" Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.113731 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/0b482987-51ed-43dd-9099-c54d824588bf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0b482987-51ed-43dd-9099-c54d824588bf\") " pod="openstack/ceilometer-0" Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.113754 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b482987-51ed-43dd-9099-c54d824588bf-run-httpd\") pod \"ceilometer-0\" (UID: \"0b482987-51ed-43dd-9099-c54d824588bf\") " pod="openstack/ceilometer-0" Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.113821 4986 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/578cfbdf-da14-435d-a13f-e525d482a66d-config\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.113832 4986 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/578cfbdf-da14-435d-a13f-e525d482a66d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.113844 4986 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/578cfbdf-da14-435d-a13f-e525d482a66d-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.114211 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b482987-51ed-43dd-9099-c54d824588bf-run-httpd\") pod \"ceilometer-0\" (UID: \"0b482987-51ed-43dd-9099-c54d824588bf\") " pod="openstack/ceilometer-0" Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.114345 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/578cfbdf-da14-435d-a13f-e525d482a66d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "578cfbdf-da14-435d-a13f-e525d482a66d" (UID: 
"578cfbdf-da14-435d-a13f-e525d482a66d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.114444 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b482987-51ed-43dd-9099-c54d824588bf-log-httpd\") pod \"ceilometer-0\" (UID: \"0b482987-51ed-43dd-9099-c54d824588bf\") " pod="openstack/ceilometer-0" Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.120972 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b482987-51ed-43dd-9099-c54d824588bf-scripts\") pod \"ceilometer-0\" (UID: \"0b482987-51ed-43dd-9099-c54d824588bf\") " pod="openstack/ceilometer-0" Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.124079 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b482987-51ed-43dd-9099-c54d824588bf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0b482987-51ed-43dd-9099-c54d824588bf\") " pod="openstack/ceilometer-0" Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.130689 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0b482987-51ed-43dd-9099-c54d824588bf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0b482987-51ed-43dd-9099-c54d824588bf\") " pod="openstack/ceilometer-0" Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.134695 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b482987-51ed-43dd-9099-c54d824588bf-config-data\") pod \"ceilometer-0\" (UID: \"0b482987-51ed-43dd-9099-c54d824588bf\") " pod="openstack/ceilometer-0" Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.134876 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-x7nmj\" (UniqueName: \"kubernetes.io/projected/0b482987-51ed-43dd-9099-c54d824588bf-kube-api-access-x7nmj\") pod \"ceilometer-0\" (UID: \"0b482987-51ed-43dd-9099-c54d824588bf\") " pod="openstack/ceilometer-0" Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.178787 4986 scope.go:117] "RemoveContainer" containerID="7f54dd37064d434913a38b90d2ce79ca46e7d951907cd62e1d7c9b3fe3067a7f" Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.236134 4986 scope.go:117] "RemoveContainer" containerID="c72b696c32d9adda237c39f64e841e2c05b2a3c118ed19e6afbff6ef78ecc88c" Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.237049 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.237776 4986 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/578cfbdf-da14-435d-a13f-e525d482a66d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.259511 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-76c69b8dbf-kzlpf" Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.319890 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-xcsvv"] Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.330694 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-xcsvv"] Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.338978 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dc9fa065-2e88-4181-ab5a-be64336bed7d-horizon-secret-key\") pod \"dc9fa065-2e88-4181-ab5a-be64336bed7d\" (UID: \"dc9fa065-2e88-4181-ab5a-be64336bed7d\") " Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.339021 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc9fa065-2e88-4181-ab5a-be64336bed7d-logs\") pod \"dc9fa065-2e88-4181-ab5a-be64336bed7d\" (UID: \"dc9fa065-2e88-4181-ab5a-be64336bed7d\") " Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.339152 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dc9fa065-2e88-4181-ab5a-be64336bed7d-config-data\") pod \"dc9fa065-2e88-4181-ab5a-be64336bed7d\" (UID: \"dc9fa065-2e88-4181-ab5a-be64336bed7d\") " Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.339231 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dc9fa065-2e88-4181-ab5a-be64336bed7d-scripts\") pod \"dc9fa065-2e88-4181-ab5a-be64336bed7d\" (UID: \"dc9fa065-2e88-4181-ab5a-be64336bed7d\") " Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.339258 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2n7mh\" (UniqueName: 
\"kubernetes.io/projected/dc9fa065-2e88-4181-ab5a-be64336bed7d-kube-api-access-2n7mh\") pod \"dc9fa065-2e88-4181-ab5a-be64336bed7d\" (UID: \"dc9fa065-2e88-4181-ab5a-be64336bed7d\") " Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.344005 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc9fa065-2e88-4181-ab5a-be64336bed7d-logs" (OuterVolumeSpecName: "logs") pod "dc9fa065-2e88-4181-ab5a-be64336bed7d" (UID: "dc9fa065-2e88-4181-ab5a-be64336bed7d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.344481 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc9fa065-2e88-4181-ab5a-be64336bed7d-kube-api-access-2n7mh" (OuterVolumeSpecName: "kube-api-access-2n7mh") pod "dc9fa065-2e88-4181-ab5a-be64336bed7d" (UID: "dc9fa065-2e88-4181-ab5a-be64336bed7d"). InnerVolumeSpecName "kube-api-access-2n7mh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.344473 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc9fa065-2e88-4181-ab5a-be64336bed7d-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "dc9fa065-2e88-4181-ab5a-be64336bed7d" (UID: "dc9fa065-2e88-4181-ab5a-be64336bed7d"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.354746 4986 scope.go:117] "RemoveContainer" containerID="7f54dd37064d434913a38b90d2ce79ca46e7d951907cd62e1d7c9b3fe3067a7f" Dec 03 13:19:01 crc kubenswrapper[4986]: E1203 13:19:01.355135 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f54dd37064d434913a38b90d2ce79ca46e7d951907cd62e1d7c9b3fe3067a7f\": container with ID starting with 7f54dd37064d434913a38b90d2ce79ca46e7d951907cd62e1d7c9b3fe3067a7f not found: ID does not exist" containerID="7f54dd37064d434913a38b90d2ce79ca46e7d951907cd62e1d7c9b3fe3067a7f" Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.355160 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f54dd37064d434913a38b90d2ce79ca46e7d951907cd62e1d7c9b3fe3067a7f"} err="failed to get container status \"7f54dd37064d434913a38b90d2ce79ca46e7d951907cd62e1d7c9b3fe3067a7f\": rpc error: code = NotFound desc = could not find container \"7f54dd37064d434913a38b90d2ce79ca46e7d951907cd62e1d7c9b3fe3067a7f\": container with ID starting with 7f54dd37064d434913a38b90d2ce79ca46e7d951907cd62e1d7c9b3fe3067a7f not found: ID does not exist" Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.355179 4986 scope.go:117] "RemoveContainer" containerID="c72b696c32d9adda237c39f64e841e2c05b2a3c118ed19e6afbff6ef78ecc88c" Dec 03 13:19:01 crc kubenswrapper[4986]: E1203 13:19:01.355733 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c72b696c32d9adda237c39f64e841e2c05b2a3c118ed19e6afbff6ef78ecc88c\": container with ID starting with c72b696c32d9adda237c39f64e841e2c05b2a3c118ed19e6afbff6ef78ecc88c not found: ID does not exist" containerID="c72b696c32d9adda237c39f64e841e2c05b2a3c118ed19e6afbff6ef78ecc88c" Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.355755 
4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c72b696c32d9adda237c39f64e841e2c05b2a3c118ed19e6afbff6ef78ecc88c"} err="failed to get container status \"c72b696c32d9adda237c39f64e841e2c05b2a3c118ed19e6afbff6ef78ecc88c\": rpc error: code = NotFound desc = could not find container \"c72b696c32d9adda237c39f64e841e2c05b2a3c118ed19e6afbff6ef78ecc88c\": container with ID starting with c72b696c32d9adda237c39f64e841e2c05b2a3c118ed19e6afbff6ef78ecc88c not found: ID does not exist" Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.387260 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc9fa065-2e88-4181-ab5a-be64336bed7d-config-data" (OuterVolumeSpecName: "config-data") pod "dc9fa065-2e88-4181-ab5a-be64336bed7d" (UID: "dc9fa065-2e88-4181-ab5a-be64336bed7d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.418951 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc9fa065-2e88-4181-ab5a-be64336bed7d-scripts" (OuterVolumeSpecName: "scripts") pod "dc9fa065-2e88-4181-ab5a-be64336bed7d" (UID: "dc9fa065-2e88-4181-ab5a-be64336bed7d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.443392 4986 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dc9fa065-2e88-4181-ab5a-be64336bed7d-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.443436 4986 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc9fa065-2e88-4181-ab5a-be64336bed7d-logs\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.443449 4986 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dc9fa065-2e88-4181-ab5a-be64336bed7d-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.443462 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2n7mh\" (UniqueName: \"kubernetes.io/projected/dc9fa065-2e88-4181-ab5a-be64336bed7d-kube-api-access-2n7mh\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.443474 4986 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dc9fa065-2e88-4181-ab5a-be64336bed7d-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.547486 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.841693 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-cc774c568-phcpp" Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.938494 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.959220 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-api-0" event={"ID":"7f4d4dfb-2e4f-42b0-8760-01287415424c","Type":"ContainerStarted","Data":"1b3aa0f81d51d89319d9406ea9de951810633de716b605ebb2e38caac979c969"} Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.964790 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-xrr75" event={"ID":"e622eeb7-7b89-4328-a775-bc363cb7ebd5","Type":"ContainerStarted","Data":"ff502cffb9a9a0fe3917269c6f052b0c2801688cba77eb00d830b4a8272f5ba7"} Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.964862 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5784cf869f-xrr75" Dec 03 13:19:01 crc kubenswrapper[4986]: I1203 13:19:01.974508 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d4dae8bb-1f5e-446c-9820-769e77414c04","Type":"ContainerStarted","Data":"df503ab7331baa7994fce6aff8cd2d41df28ec4a060afa8051c098e7a9fa05ac"} Dec 03 13:19:02 crc kubenswrapper[4986]: I1203 13:19:02.003031 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d5c981f3-a559-4641-aaae-2e90c5d0d543","Type":"ContainerStarted","Data":"2561718171dd0fae47bbdbb6c6452d8f695915abff555243477e39879275500d"} Dec 03 13:19:02 crc kubenswrapper[4986]: I1203 13:19:02.017225 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"aec700a6-d4c5-4c24-8726-16dc792d5658","Type":"ContainerStarted","Data":"691e7797cd057ca907ecec503fdb24521ec7774622ad57eae3fb3c829259d655"} Dec 03 13:19:02 crc kubenswrapper[4986]: I1203 13:19:02.021147 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76c69b8dbf-kzlpf" event={"ID":"dc9fa065-2e88-4181-ab5a-be64336bed7d","Type":"ContainerDied","Data":"03befa4d5d3220caac2aee464f42be47b89fefdef2249da797bdd68538229b9f"} Dec 03 13:19:02 crc kubenswrapper[4986]: I1203 13:19:02.021212 4986 scope.go:117] 
"RemoveContainer" containerID="d9d2a9780ee6c796b5953539bd3cdbf854011c09938f122a7a2b048d2fe551c3" Dec 03 13:19:02 crc kubenswrapper[4986]: I1203 13:19:02.021341 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-76c69b8dbf-kzlpf" Dec 03 13:19:02 crc kubenswrapper[4986]: I1203 13:19:02.059326 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5784cf869f-xrr75" podStartSLOduration=5.059309036 podStartE2EDuration="5.059309036s" podCreationTimestamp="2025-12-03 13:18:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:19:01.997797636 +0000 UTC m=+1401.464228847" watchObservedRunningTime="2025-12-03 13:19:02.059309036 +0000 UTC m=+1401.525740227" Dec 03 13:19:02 crc kubenswrapper[4986]: I1203 13:19:02.068427 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-76c69b8dbf-kzlpf"] Dec 03 13:19:02 crc kubenswrapper[4986]: I1203 13:19:02.076852 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-76c69b8dbf-kzlpf"] Dec 03 13:19:02 crc kubenswrapper[4986]: I1203 13:19:02.302236 4986 scope.go:117] "RemoveContainer" containerID="1159886bb01a7d0dde71c42912ecd0c27e1490067409676edf00d3431f8fe430" Dec 03 13:19:02 crc kubenswrapper[4986]: I1203 13:19:02.343916 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7797f969d4-6c2wn" Dec 03 13:19:02 crc kubenswrapper[4986]: E1203 13:19:02.380405 4986 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/770d5dce8f54ff01692cf0dab50881a978334fda9d1d348ab6658e84bf7482b3/diff" to get inode usage: stat /var/lib/containers/storage/overlay/770d5dce8f54ff01692cf0dab50881a978334fda9d1d348ab6658e84bf7482b3/diff: no such file or directory, extraDiskErr: could not stat 
"/var/log/pods/openstack_dnsmasq-dns-76fcf4b695-sxknq_d09ba1cc-e27c-4ea3-9f4a-2f7791afce95/dnsmasq-dns/0.log" to get inode usage: stat /var/log/pods/openstack_dnsmasq-dns-76fcf4b695-sxknq_d09ba1cc-e27c-4ea3-9f4a-2f7791afce95/dnsmasq-dns/0.log: no such file or directory Dec 03 13:19:02 crc kubenswrapper[4986]: I1203 13:19:02.838665 4986 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7bc8976f54-hn8pf" podUID="4d708ae2-3c0a-4134-97fe-36270231010f" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.152:9311/healthcheck\": read tcp 10.217.0.2:58120->10.217.0.152:9311: read: connection reset by peer" Dec 03 13:19:02 crc kubenswrapper[4986]: I1203 13:19:02.838730 4986 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7bc8976f54-hn8pf" podUID="4d708ae2-3c0a-4134-97fe-36270231010f" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.152:9311/healthcheck\": read tcp 10.217.0.2:58110->10.217.0.152:9311: read: connection reset by peer" Dec 03 13:19:02 crc kubenswrapper[4986]: W1203 13:19:02.879967 4986 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b8ed42a_89ce_4098_9489_5291e678bf18.slice/crio-conmon-18886cff3842efc33ba6818a9946242f28ff95bfae0ee8564fed6be9b265c47f.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b8ed42a_89ce_4098_9489_5291e678bf18.slice/crio-conmon-18886cff3842efc33ba6818a9946242f28ff95bfae0ee8564fed6be9b265c47f.scope: no such file or directory Dec 03 13:19:02 crc kubenswrapper[4986]: W1203 13:19:02.880033 4986 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b8ed42a_89ce_4098_9489_5291e678bf18.slice/crio-18886cff3842efc33ba6818a9946242f28ff95bfae0ee8564fed6be9b265c47f.scope": 0x40000100 == 
IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b8ed42a_89ce_4098_9489_5291e678bf18.slice/crio-18886cff3842efc33ba6818a9946242f28ff95bfae0ee8564fed6be9b265c47f.scope: no such file or directory Dec 03 13:19:02 crc kubenswrapper[4986]: W1203 13:19:02.886943 4986 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod578cfbdf_da14_435d_a13f_e525d482a66d.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod578cfbdf_da14_435d_a13f_e525d482a66d.slice: no such file or directory Dec 03 13:19:02 crc kubenswrapper[4986]: W1203 13:19:02.886990 4986 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67af2ee4_2a0e_4095_b731_5123b5f54053.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67af2ee4_2a0e_4095_b731_5123b5f54053.slice: no such file or directory Dec 03 13:19:02 crc kubenswrapper[4986]: W1203 13:19:02.887007 4986 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82935ead_8451_43d6_bd7a_c0444aadc886.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82935ead_8451_43d6_bd7a_c0444aadc886.slice: no such file or directory Dec 03 13:19:02 crc kubenswrapper[4986]: W1203 13:19:02.887026 4986 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod143eb6df_5711_4616_baaa_42417113bfed.slice/crio-conmon-66dd85225308b34c6a81db82d34abc5ae62cc79acc8eb6bf19ae153e4b9dbef9.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch 
/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod143eb6df_5711_4616_baaa_42417113bfed.slice/crio-conmon-66dd85225308b34c6a81db82d34abc5ae62cc79acc8eb6bf19ae153e4b9dbef9.scope: no such file or directory Dec 03 13:19:02 crc kubenswrapper[4986]: W1203 13:19:02.887040 4986 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod143eb6df_5711_4616_baaa_42417113bfed.slice/crio-66dd85225308b34c6a81db82d34abc5ae62cc79acc8eb6bf19ae153e4b9dbef9.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod143eb6df_5711_4616_baaa_42417113bfed.slice/crio-66dd85225308b34c6a81db82d34abc5ae62cc79acc8eb6bf19ae153e4b9dbef9.scope: no such file or directory Dec 03 13:19:03 crc kubenswrapper[4986]: I1203 13:19:03.021946 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="578cfbdf-da14-435d-a13f-e525d482a66d" path="/var/lib/kubelet/pods/578cfbdf-da14-435d-a13f-e525d482a66d/volumes" Dec 03 13:19:03 crc kubenswrapper[4986]: I1203 13:19:03.022814 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc9fa065-2e88-4181-ab5a-be64336bed7d" path="/var/lib/kubelet/pods/dc9fa065-2e88-4181-ab5a-be64336bed7d/volumes" Dec 03 13:19:03 crc kubenswrapper[4986]: I1203 13:19:03.136771 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b482987-51ed-43dd-9099-c54d824588bf","Type":"ContainerStarted","Data":"5e5cc346550a3fec515bf0c4791854b5def68d62d3eb0b944e57908e69062b1a"} Dec 03 13:19:03 crc kubenswrapper[4986]: I1203 13:19:03.136823 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b482987-51ed-43dd-9099-c54d824588bf","Type":"ContainerStarted","Data":"4d44214afd786f28616bb237b9c4b6d1d10a40b70cee28886e298e397236f9d4"} Dec 03 13:19:03 crc kubenswrapper[4986]: I1203 13:19:03.193631 4986 
generic.go:334] "Generic (PLEG): container finished" podID="4d708ae2-3c0a-4134-97fe-36270231010f" containerID="639d14405922b129b66585e80277451a8daf2c9acad37a002013359435e61394" exitCode=0 Dec 03 13:19:03 crc kubenswrapper[4986]: I1203 13:19:03.193702 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7bc8976f54-hn8pf" event={"ID":"4d708ae2-3c0a-4134-97fe-36270231010f","Type":"ContainerDied","Data":"639d14405922b129b66585e80277451a8daf2c9acad37a002013359435e61394"} Dec 03 13:19:03 crc kubenswrapper[4986]: I1203 13:19:03.202743 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d5c981f3-a559-4641-aaae-2e90c5d0d543","Type":"ContainerStarted","Data":"9a5fbe41f6ab0b06a7618b9c833dbabf3623c8d0201e3e708bde7bf5c6a5d49f"} Dec 03 13:19:03 crc kubenswrapper[4986]: I1203 13:19:03.204847 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7f4d4dfb-2e4f-42b0-8760-01287415424c","Type":"ContainerStarted","Data":"685c695c044df2a0d17f10a4ee88e1e8199866581388b876113c8d3beac82856"} Dec 03 13:19:03 crc kubenswrapper[4986]: I1203 13:19:03.204985 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="7f4d4dfb-2e4f-42b0-8760-01287415424c" containerName="cinder-api-log" containerID="cri-o://1b3aa0f81d51d89319d9406ea9de951810633de716b605ebb2e38caac979c969" gracePeriod=30 Dec 03 13:19:03 crc kubenswrapper[4986]: I1203 13:19:03.205205 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 03 13:19:03 crc kubenswrapper[4986]: I1203 13:19:03.205396 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="7f4d4dfb-2e4f-42b0-8760-01287415424c" containerName="cinder-api" containerID="cri-o://685c695c044df2a0d17f10a4ee88e1e8199866581388b876113c8d3beac82856" gracePeriod=30 Dec 03 13:19:03 crc 
kubenswrapper[4986]: I1203 13:19:03.218996 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"aec700a6-d4c5-4c24-8726-16dc792d5658","Type":"ContainerStarted","Data":"0da6ae24d795357e9372ccd524bb0486a2f41686c79f6226cec1d9938efe4612"} Dec 03 13:19:03 crc kubenswrapper[4986]: I1203 13:19:03.224522 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d4dae8bb-1f5e-446c-9820-769e77414c04","Type":"ContainerStarted","Data":"4c8bc6062e737df3ded631623dd55f56169f875e8ff065cfeb88edc561757def"} Dec 03 13:19:03 crc kubenswrapper[4986]: I1203 13:19:03.245863 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.2458461530000005 podStartE2EDuration="5.245846153s" podCreationTimestamp="2025-12-03 13:18:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:19:03.241996699 +0000 UTC m=+1402.708427890" watchObservedRunningTime="2025-12-03 13:19:03.245846153 +0000 UTC m=+1402.712277334" Dec 03 13:19:03 crc kubenswrapper[4986]: I1203 13:19:03.312217 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.101721331 podStartE2EDuration="6.312164372s" podCreationTimestamp="2025-12-03 13:18:57 +0000 UTC" firstStartedPulling="2025-12-03 13:18:59.069010947 +0000 UTC m=+1398.535442138" lastFinishedPulling="2025-12-03 13:19:00.279453988 +0000 UTC m=+1399.745885179" observedRunningTime="2025-12-03 13:19:03.271963198 +0000 UTC m=+1402.738394389" watchObservedRunningTime="2025-12-03 13:19:03.312164372 +0000 UTC m=+1402.778595573" Dec 03 13:19:03 crc kubenswrapper[4986]: I1203 13:19:03.654905 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7bc8976f54-hn8pf" Dec 03 13:19:03 crc kubenswrapper[4986]: I1203 13:19:03.708947 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4d708ae2-3c0a-4134-97fe-36270231010f-config-data-custom\") pod \"4d708ae2-3c0a-4134-97fe-36270231010f\" (UID: \"4d708ae2-3c0a-4134-97fe-36270231010f\") " Dec 03 13:19:03 crc kubenswrapper[4986]: I1203 13:19:03.709062 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d708ae2-3c0a-4134-97fe-36270231010f-combined-ca-bundle\") pod \"4d708ae2-3c0a-4134-97fe-36270231010f\" (UID: \"4d708ae2-3c0a-4134-97fe-36270231010f\") " Dec 03 13:19:03 crc kubenswrapper[4986]: I1203 13:19:03.709120 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9p2jz\" (UniqueName: \"kubernetes.io/projected/4d708ae2-3c0a-4134-97fe-36270231010f-kube-api-access-9p2jz\") pod \"4d708ae2-3c0a-4134-97fe-36270231010f\" (UID: \"4d708ae2-3c0a-4134-97fe-36270231010f\") " Dec 03 13:19:03 crc kubenswrapper[4986]: I1203 13:19:03.709206 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d708ae2-3c0a-4134-97fe-36270231010f-config-data\") pod \"4d708ae2-3c0a-4134-97fe-36270231010f\" (UID: \"4d708ae2-3c0a-4134-97fe-36270231010f\") " Dec 03 13:19:03 crc kubenswrapper[4986]: I1203 13:19:03.709243 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d708ae2-3c0a-4134-97fe-36270231010f-logs\") pod \"4d708ae2-3c0a-4134-97fe-36270231010f\" (UID: \"4d708ae2-3c0a-4134-97fe-36270231010f\") " Dec 03 13:19:03 crc kubenswrapper[4986]: I1203 13:19:03.710536 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/4d708ae2-3c0a-4134-97fe-36270231010f-logs" (OuterVolumeSpecName: "logs") pod "4d708ae2-3c0a-4134-97fe-36270231010f" (UID: "4d708ae2-3c0a-4134-97fe-36270231010f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:19:03 crc kubenswrapper[4986]: I1203 13:19:03.724585 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d708ae2-3c0a-4134-97fe-36270231010f-kube-api-access-9p2jz" (OuterVolumeSpecName: "kube-api-access-9p2jz") pod "4d708ae2-3c0a-4134-97fe-36270231010f" (UID: "4d708ae2-3c0a-4134-97fe-36270231010f"). InnerVolumeSpecName "kube-api-access-9p2jz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:19:03 crc kubenswrapper[4986]: I1203 13:19:03.735427 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d708ae2-3c0a-4134-97fe-36270231010f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4d708ae2-3c0a-4134-97fe-36270231010f" (UID: "4d708ae2-3c0a-4134-97fe-36270231010f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:19:03 crc kubenswrapper[4986]: I1203 13:19:03.785672 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d708ae2-3c0a-4134-97fe-36270231010f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4d708ae2-3c0a-4134-97fe-36270231010f" (UID: "4d708ae2-3c0a-4134-97fe-36270231010f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:19:03 crc kubenswrapper[4986]: I1203 13:19:03.813377 4986 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4d708ae2-3c0a-4134-97fe-36270231010f-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:03 crc kubenswrapper[4986]: I1203 13:19:03.813406 4986 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d708ae2-3c0a-4134-97fe-36270231010f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:03 crc kubenswrapper[4986]: I1203 13:19:03.813415 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9p2jz\" (UniqueName: \"kubernetes.io/projected/4d708ae2-3c0a-4134-97fe-36270231010f-kube-api-access-9p2jz\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:03 crc kubenswrapper[4986]: I1203 13:19:03.813425 4986 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d708ae2-3c0a-4134-97fe-36270231010f-logs\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:03 crc kubenswrapper[4986]: I1203 13:19:03.836419 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d708ae2-3c0a-4134-97fe-36270231010f-config-data" (OuterVolumeSpecName: "config-data") pod "4d708ae2-3c0a-4134-97fe-36270231010f" (UID: "4d708ae2-3c0a-4134-97fe-36270231010f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:19:03 crc kubenswrapper[4986]: I1203 13:19:03.880392 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5dc7cf89f4-56jhq" Dec 03 13:19:03 crc kubenswrapper[4986]: I1203 13:19:03.914554 4986 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d708ae2-3c0a-4134-97fe-36270231010f-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:03 crc kubenswrapper[4986]: I1203 13:19:03.966067 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.017827 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f4d4dfb-2e4f-42b0-8760-01287415424c-logs\") pod \"7f4d4dfb-2e4f-42b0-8760-01287415424c\" (UID: \"7f4d4dfb-2e4f-42b0-8760-01287415424c\") " Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.017949 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f4d4dfb-2e4f-42b0-8760-01287415424c-scripts\") pod \"7f4d4dfb-2e4f-42b0-8760-01287415424c\" (UID: \"7f4d4dfb-2e4f-42b0-8760-01287415424c\") " Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.018020 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f4d4dfb-2e4f-42b0-8760-01287415424c-config-data\") pod \"7f4d4dfb-2e4f-42b0-8760-01287415424c\" (UID: \"7f4d4dfb-2e4f-42b0-8760-01287415424c\") " Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.018070 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpxn9\" (UniqueName: \"kubernetes.io/projected/7f4d4dfb-2e4f-42b0-8760-01287415424c-kube-api-access-gpxn9\") pod \"7f4d4dfb-2e4f-42b0-8760-01287415424c\" 
(UID: \"7f4d4dfb-2e4f-42b0-8760-01287415424c\") " Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.018115 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f4d4dfb-2e4f-42b0-8760-01287415424c-combined-ca-bundle\") pod \"7f4d4dfb-2e4f-42b0-8760-01287415424c\" (UID: \"7f4d4dfb-2e4f-42b0-8760-01287415424c\") " Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.018168 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7f4d4dfb-2e4f-42b0-8760-01287415424c-config-data-custom\") pod \"7f4d4dfb-2e4f-42b0-8760-01287415424c\" (UID: \"7f4d4dfb-2e4f-42b0-8760-01287415424c\") " Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.018186 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7f4d4dfb-2e4f-42b0-8760-01287415424c-etc-machine-id\") pod \"7f4d4dfb-2e4f-42b0-8760-01287415424c\" (UID: \"7f4d4dfb-2e4f-42b0-8760-01287415424c\") " Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.019228 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f4d4dfb-2e4f-42b0-8760-01287415424c-logs" (OuterVolumeSpecName: "logs") pod "7f4d4dfb-2e4f-42b0-8760-01287415424c" (UID: "7f4d4dfb-2e4f-42b0-8760-01287415424c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.020393 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f4d4dfb-2e4f-42b0-8760-01287415424c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "7f4d4dfb-2e4f-42b0-8760-01287415424c" (UID: "7f4d4dfb-2e4f-42b0-8760-01287415424c"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.034885 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f4d4dfb-2e4f-42b0-8760-01287415424c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7f4d4dfb-2e4f-42b0-8760-01287415424c" (UID: "7f4d4dfb-2e4f-42b0-8760-01287415424c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.036693 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f4d4dfb-2e4f-42b0-8760-01287415424c-kube-api-access-gpxn9" (OuterVolumeSpecName: "kube-api-access-gpxn9") pod "7f4d4dfb-2e4f-42b0-8760-01287415424c" (UID: "7f4d4dfb-2e4f-42b0-8760-01287415424c"). InnerVolumeSpecName "kube-api-access-gpxn9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.042621 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f4d4dfb-2e4f-42b0-8760-01287415424c-scripts" (OuterVolumeSpecName: "scripts") pod "7f4d4dfb-2e4f-42b0-8760-01287415424c" (UID: "7f4d4dfb-2e4f-42b0-8760-01287415424c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.081038 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f4d4dfb-2e4f-42b0-8760-01287415424c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7f4d4dfb-2e4f-42b0-8760-01287415424c" (UID: "7f4d4dfb-2e4f-42b0-8760-01287415424c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.120427 4986 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7f4d4dfb-2e4f-42b0-8760-01287415424c-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.120472 4986 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7f4d4dfb-2e4f-42b0-8760-01287415424c-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.120488 4986 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f4d4dfb-2e4f-42b0-8760-01287415424c-logs\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.120501 4986 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f4d4dfb-2e4f-42b0-8760-01287415424c-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.120514 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpxn9\" (UniqueName: \"kubernetes.io/projected/7f4d4dfb-2e4f-42b0-8760-01287415424c-kube-api-access-gpxn9\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.120528 4986 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f4d4dfb-2e4f-42b0-8760-01287415424c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.128406 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f4d4dfb-2e4f-42b0-8760-01287415424c-config-data" (OuterVolumeSpecName: "config-data") pod "7f4d4dfb-2e4f-42b0-8760-01287415424c" (UID: "7f4d4dfb-2e4f-42b0-8760-01287415424c"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.221652 4986 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f4d4dfb-2e4f-42b0-8760-01287415424c-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.233661 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d4dae8bb-1f5e-446c-9820-769e77414c04","Type":"ContainerStarted","Data":"cad60ee62b6c93be511446f0006c6f3f5fac38db2d6eabae98e5c3439c853b7b"} Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.235819 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b482987-51ed-43dd-9099-c54d824588bf","Type":"ContainerStarted","Data":"4f3957e372e4fd6e32d5915c5d40b448972c4b354892124fb0ec0cb57465fcf9"} Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.237586 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7bc8976f54-hn8pf" event={"ID":"4d708ae2-3c0a-4134-97fe-36270231010f","Type":"ContainerDied","Data":"998122e1b817fdf5a5b212f9bd8251fdc89f312f87c34ec8b24f827705a7b221"} Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.237618 4986 scope.go:117] "RemoveContainer" containerID="639d14405922b129b66585e80277451a8daf2c9acad37a002013359435e61394" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.237679 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7bc8976f54-hn8pf" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.248966 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d5c981f3-a559-4641-aaae-2e90c5d0d543","Type":"ContainerStarted","Data":"152f15955d2a2d608a7a6b95d81e9ec4a2cedda44a9d160e00d09e6b300ba6fd"} Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.259694 4986 generic.go:334] "Generic (PLEG): container finished" podID="7f4d4dfb-2e4f-42b0-8760-01287415424c" containerID="685c695c044df2a0d17f10a4ee88e1e8199866581388b876113c8d3beac82856" exitCode=0 Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.259730 4986 generic.go:334] "Generic (PLEG): container finished" podID="7f4d4dfb-2e4f-42b0-8760-01287415424c" containerID="1b3aa0f81d51d89319d9406ea9de951810633de716b605ebb2e38caac979c969" exitCode=143 Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.259764 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.259814 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7f4d4dfb-2e4f-42b0-8760-01287415424c","Type":"ContainerDied","Data":"685c695c044df2a0d17f10a4ee88e1e8199866581388b876113c8d3beac82856"} Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.259838 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7f4d4dfb-2e4f-42b0-8760-01287415424c","Type":"ContainerDied","Data":"1b3aa0f81d51d89319d9406ea9de951810633de716b605ebb2e38caac979c969"} Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.259850 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7f4d4dfb-2e4f-42b0-8760-01287415424c","Type":"ContainerDied","Data":"7f5554334360aaea690d8db623d7d69c91b93cfc4490fb6ee985fdd879b81a3d"} Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.268045 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.268027415 podStartE2EDuration="5.268027415s" podCreationTimestamp="2025-12-03 13:18:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:19:04.262297021 +0000 UTC m=+1403.728728212" watchObservedRunningTime="2025-12-03 13:19:04.268027415 +0000 UTC m=+1403.734458606" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.270395 4986 scope.go:117] "RemoveContainer" containerID="e221479e09edabe1d4433296404302474b808fd5df11d6fedd4a03e7a8e8e042" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.346906 4986 scope.go:117] "RemoveContainer" containerID="685c695c044df2a0d17f10a4ee88e1e8199866581388b876113c8d3beac82856" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.355259 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/barbican-api-7bc8976f54-hn8pf"] Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.411657 4986 scope.go:117] "RemoveContainer" containerID="1b3aa0f81d51d89319d9406ea9de951810633de716b605ebb2e38caac979c969" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.411817 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7bc8976f54-hn8pf"] Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.412803 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.412785741 podStartE2EDuration="5.412785741s" podCreationTimestamp="2025-12-03 13:18:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:19:04.332256908 +0000 UTC m=+1403.798688099" watchObservedRunningTime="2025-12-03 13:19:04.412785741 +0000 UTC m=+1403.879216932" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.468302 4986 scope.go:117] "RemoveContainer" containerID="685c695c044df2a0d17f10a4ee88e1e8199866581388b876113c8d3beac82856" Dec 03 13:19:04 crc kubenswrapper[4986]: E1203 13:19:04.470197 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"685c695c044df2a0d17f10a4ee88e1e8199866581388b876113c8d3beac82856\": container with ID starting with 685c695c044df2a0d17f10a4ee88e1e8199866581388b876113c8d3beac82856 not found: ID does not exist" containerID="685c695c044df2a0d17f10a4ee88e1e8199866581388b876113c8d3beac82856" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.470258 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"685c695c044df2a0d17f10a4ee88e1e8199866581388b876113c8d3beac82856"} err="failed to get container status \"685c695c044df2a0d17f10a4ee88e1e8199866581388b876113c8d3beac82856\": rpc error: code = NotFound desc = could not find 
container \"685c695c044df2a0d17f10a4ee88e1e8199866581388b876113c8d3beac82856\": container with ID starting with 685c695c044df2a0d17f10a4ee88e1e8199866581388b876113c8d3beac82856 not found: ID does not exist" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.470308 4986 scope.go:117] "RemoveContainer" containerID="1b3aa0f81d51d89319d9406ea9de951810633de716b605ebb2e38caac979c969" Dec 03 13:19:04 crc kubenswrapper[4986]: E1203 13:19:04.470576 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b3aa0f81d51d89319d9406ea9de951810633de716b605ebb2e38caac979c969\": container with ID starting with 1b3aa0f81d51d89319d9406ea9de951810633de716b605ebb2e38caac979c969 not found: ID does not exist" containerID="1b3aa0f81d51d89319d9406ea9de951810633de716b605ebb2e38caac979c969" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.470603 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b3aa0f81d51d89319d9406ea9de951810633de716b605ebb2e38caac979c969"} err="failed to get container status \"1b3aa0f81d51d89319d9406ea9de951810633de716b605ebb2e38caac979c969\": rpc error: code = NotFound desc = could not find container \"1b3aa0f81d51d89319d9406ea9de951810633de716b605ebb2e38caac979c969\": container with ID starting with 1b3aa0f81d51d89319d9406ea9de951810633de716b605ebb2e38caac979c969 not found: ID does not exist" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.470620 4986 scope.go:117] "RemoveContainer" containerID="685c695c044df2a0d17f10a4ee88e1e8199866581388b876113c8d3beac82856" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.470767 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"685c695c044df2a0d17f10a4ee88e1e8199866581388b876113c8d3beac82856"} err="failed to get container status \"685c695c044df2a0d17f10a4ee88e1e8199866581388b876113c8d3beac82856\": rpc error: code = NotFound desc = could 
not find container \"685c695c044df2a0d17f10a4ee88e1e8199866581388b876113c8d3beac82856\": container with ID starting with 685c695c044df2a0d17f10a4ee88e1e8199866581388b876113c8d3beac82856 not found: ID does not exist" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.470784 4986 scope.go:117] "RemoveContainer" containerID="1b3aa0f81d51d89319d9406ea9de951810633de716b605ebb2e38caac979c969" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.470916 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b3aa0f81d51d89319d9406ea9de951810633de716b605ebb2e38caac979c969"} err="failed to get container status \"1b3aa0f81d51d89319d9406ea9de951810633de716b605ebb2e38caac979c969\": rpc error: code = NotFound desc = could not find container \"1b3aa0f81d51d89319d9406ea9de951810633de716b605ebb2e38caac979c969\": container with ID starting with 1b3aa0f81d51d89319d9406ea9de951810633de716b605ebb2e38caac979c969 not found: ID does not exist" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.491155 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.506357 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.519558 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 03 13:19:04 crc kubenswrapper[4986]: E1203 13:19:04.520243 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d708ae2-3c0a-4134-97fe-36270231010f" containerName="barbican-api-log" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.520264 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d708ae2-3c0a-4134-97fe-36270231010f" containerName="barbican-api-log" Dec 03 13:19:04 crc kubenswrapper[4986]: E1203 13:19:04.520386 4986 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="dc9fa065-2e88-4181-ab5a-be64336bed7d" containerName="horizon" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.520400 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc9fa065-2e88-4181-ab5a-be64336bed7d" containerName="horizon" Dec 03 13:19:04 crc kubenswrapper[4986]: E1203 13:19:04.520415 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc9fa065-2e88-4181-ab5a-be64336bed7d" containerName="horizon-log" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.520421 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc9fa065-2e88-4181-ab5a-be64336bed7d" containerName="horizon-log" Dec 03 13:19:04 crc kubenswrapper[4986]: E1203 13:19:04.520548 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f4d4dfb-2e4f-42b0-8760-01287415424c" containerName="cinder-api" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.520561 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f4d4dfb-2e4f-42b0-8760-01287415424c" containerName="cinder-api" Dec 03 13:19:04 crc kubenswrapper[4986]: E1203 13:19:04.520597 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f4d4dfb-2e4f-42b0-8760-01287415424c" containerName="cinder-api-log" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.520637 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f4d4dfb-2e4f-42b0-8760-01287415424c" containerName="cinder-api-log" Dec 03 13:19:04 crc kubenswrapper[4986]: E1203 13:19:04.520652 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d708ae2-3c0a-4134-97fe-36270231010f" containerName="barbican-api" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.520660 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d708ae2-3c0a-4134-97fe-36270231010f" containerName="barbican-api" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.520901 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc9fa065-2e88-4181-ab5a-be64336bed7d" 
containerName="horizon-log" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.520925 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f4d4dfb-2e4f-42b0-8760-01287415424c" containerName="cinder-api" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.520946 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d708ae2-3c0a-4134-97fe-36270231010f" containerName="barbican-api-log" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.520957 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f4d4dfb-2e4f-42b0-8760-01287415424c" containerName="cinder-api-log" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.520971 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d708ae2-3c0a-4134-97fe-36270231010f" containerName="barbican-api" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.520982 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc9fa065-2e88-4181-ab5a-be64336bed7d" containerName="horizon" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.522747 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.529458 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.531880 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.532204 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.533070 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.554627 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/297881ac-9b99-4d7d-9e59-4fb75c103648-config-data-custom\") pod \"cinder-api-0\" (UID: \"297881ac-9b99-4d7d-9e59-4fb75c103648\") " pod="openstack/cinder-api-0" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.554694 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/297881ac-9b99-4d7d-9e59-4fb75c103648-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"297881ac-9b99-4d7d-9e59-4fb75c103648\") " pod="openstack/cinder-api-0" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.554736 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/297881ac-9b99-4d7d-9e59-4fb75c103648-logs\") pod \"cinder-api-0\" (UID: \"297881ac-9b99-4d7d-9e59-4fb75c103648\") " pod="openstack/cinder-api-0" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.554767 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/297881ac-9b99-4d7d-9e59-4fb75c103648-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"297881ac-9b99-4d7d-9e59-4fb75c103648\") " pod="openstack/cinder-api-0" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.554784 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/297881ac-9b99-4d7d-9e59-4fb75c103648-config-data\") pod \"cinder-api-0\" (UID: \"297881ac-9b99-4d7d-9e59-4fb75c103648\") " pod="openstack/cinder-api-0" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.554812 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/297881ac-9b99-4d7d-9e59-4fb75c103648-scripts\") pod \"cinder-api-0\" (UID: \"297881ac-9b99-4d7d-9e59-4fb75c103648\") " pod="openstack/cinder-api-0" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.554841 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/297881ac-9b99-4d7d-9e59-4fb75c103648-etc-machine-id\") pod \"cinder-api-0\" (UID: \"297881ac-9b99-4d7d-9e59-4fb75c103648\") " pod="openstack/cinder-api-0" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.554866 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/297881ac-9b99-4d7d-9e59-4fb75c103648-public-tls-certs\") pod \"cinder-api-0\" (UID: \"297881ac-9b99-4d7d-9e59-4fb75c103648\") " pod="openstack/cinder-api-0" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.554883 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv9dr\" (UniqueName: \"kubernetes.io/projected/297881ac-9b99-4d7d-9e59-4fb75c103648-kube-api-access-zv9dr\") pod 
\"cinder-api-0\" (UID: \"297881ac-9b99-4d7d-9e59-4fb75c103648\") " pod="openstack/cinder-api-0" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.656540 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/297881ac-9b99-4d7d-9e59-4fb75c103648-etc-machine-id\") pod \"cinder-api-0\" (UID: \"297881ac-9b99-4d7d-9e59-4fb75c103648\") " pod="openstack/cinder-api-0" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.656613 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/297881ac-9b99-4d7d-9e59-4fb75c103648-public-tls-certs\") pod \"cinder-api-0\" (UID: \"297881ac-9b99-4d7d-9e59-4fb75c103648\") " pod="openstack/cinder-api-0" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.656641 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zv9dr\" (UniqueName: \"kubernetes.io/projected/297881ac-9b99-4d7d-9e59-4fb75c103648-kube-api-access-zv9dr\") pod \"cinder-api-0\" (UID: \"297881ac-9b99-4d7d-9e59-4fb75c103648\") " pod="openstack/cinder-api-0" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.656678 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/297881ac-9b99-4d7d-9e59-4fb75c103648-etc-machine-id\") pod \"cinder-api-0\" (UID: \"297881ac-9b99-4d7d-9e59-4fb75c103648\") " pod="openstack/cinder-api-0" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.656717 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/297881ac-9b99-4d7d-9e59-4fb75c103648-config-data-custom\") pod \"cinder-api-0\" (UID: \"297881ac-9b99-4d7d-9e59-4fb75c103648\") " pod="openstack/cinder-api-0" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.656762 4986 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/297881ac-9b99-4d7d-9e59-4fb75c103648-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"297881ac-9b99-4d7d-9e59-4fb75c103648\") " pod="openstack/cinder-api-0" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.657549 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/297881ac-9b99-4d7d-9e59-4fb75c103648-logs\") pod \"cinder-api-0\" (UID: \"297881ac-9b99-4d7d-9e59-4fb75c103648\") " pod="openstack/cinder-api-0" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.657625 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/297881ac-9b99-4d7d-9e59-4fb75c103648-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"297881ac-9b99-4d7d-9e59-4fb75c103648\") " pod="openstack/cinder-api-0" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.657671 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/297881ac-9b99-4d7d-9e59-4fb75c103648-config-data\") pod \"cinder-api-0\" (UID: \"297881ac-9b99-4d7d-9e59-4fb75c103648\") " pod="openstack/cinder-api-0" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.657726 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/297881ac-9b99-4d7d-9e59-4fb75c103648-scripts\") pod \"cinder-api-0\" (UID: \"297881ac-9b99-4d7d-9e59-4fb75c103648\") " pod="openstack/cinder-api-0" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.658752 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/297881ac-9b99-4d7d-9e59-4fb75c103648-logs\") pod \"cinder-api-0\" (UID: \"297881ac-9b99-4d7d-9e59-4fb75c103648\") " pod="openstack/cinder-api-0" Dec 03 13:19:04 
crc kubenswrapper[4986]: I1203 13:19:04.660626 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/297881ac-9b99-4d7d-9e59-4fb75c103648-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"297881ac-9b99-4d7d-9e59-4fb75c103648\") " pod="openstack/cinder-api-0" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.662883 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/297881ac-9b99-4d7d-9e59-4fb75c103648-public-tls-certs\") pod \"cinder-api-0\" (UID: \"297881ac-9b99-4d7d-9e59-4fb75c103648\") " pod="openstack/cinder-api-0" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.663592 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/297881ac-9b99-4d7d-9e59-4fb75c103648-config-data-custom\") pod \"cinder-api-0\" (UID: \"297881ac-9b99-4d7d-9e59-4fb75c103648\") " pod="openstack/cinder-api-0" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.664365 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/297881ac-9b99-4d7d-9e59-4fb75c103648-config-data\") pod \"cinder-api-0\" (UID: \"297881ac-9b99-4d7d-9e59-4fb75c103648\") " pod="openstack/cinder-api-0" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.666149 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/297881ac-9b99-4d7d-9e59-4fb75c103648-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"297881ac-9b99-4d7d-9e59-4fb75c103648\") " pod="openstack/cinder-api-0" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.667859 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/297881ac-9b99-4d7d-9e59-4fb75c103648-scripts\") pod \"cinder-api-0\" (UID: 
\"297881ac-9b99-4d7d-9e59-4fb75c103648\") " pod="openstack/cinder-api-0" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.672070 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-cc774c568-phcpp" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.679546 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zv9dr\" (UniqueName: \"kubernetes.io/projected/297881ac-9b99-4d7d-9e59-4fb75c103648-kube-api-access-zv9dr\") pod \"cinder-api-0\" (UID: \"297881ac-9b99-4d7d-9e59-4fb75c103648\") " pod="openstack/cinder-api-0" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.883508 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.956121 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d708ae2-3c0a-4134-97fe-36270231010f" path="/var/lib/kubelet/pods/4d708ae2-3c0a-4134-97fe-36270231010f/volumes" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.956976 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f4d4dfb-2e4f-42b0-8760-01287415424c" path="/var/lib/kubelet/pods/7f4d4dfb-2e4f-42b0-8760-01287415424c/volumes" Dec 03 13:19:04 crc kubenswrapper[4986]: I1203 13:19:04.977475 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7797f969d4-6c2wn" Dec 03 13:19:05 crc kubenswrapper[4986]: I1203 13:19:05.058125 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-cc774c568-phcpp"] Dec 03 13:19:05 crc kubenswrapper[4986]: I1203 13:19:05.278634 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b482987-51ed-43dd-9099-c54d824588bf","Type":"ContainerStarted","Data":"eb0afe609b2a9c9c8c4a8eacdf3df3343aa2f4ce2503fa8629c7c87e0396c0f2"} Dec 03 13:19:05 crc kubenswrapper[4986]: I1203 13:19:05.282422 4986 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-cc774c568-phcpp" podUID="cee5d9b6-d11e-4bad-b013-196a4f401404" containerName="horizon-log" containerID="cri-o://d7d30e426c52ae0b45fc8f0203633e1d5f3166da333a8d50729b7513d223357e" gracePeriod=30 Dec 03 13:19:05 crc kubenswrapper[4986]: I1203 13:19:05.282524 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-cc774c568-phcpp" podUID="cee5d9b6-d11e-4bad-b013-196a4f401404" containerName="horizon" containerID="cri-o://b3a510936fd6eb78b8694de1aa0785b123dd535c543cc43a92b46a0c5eb19968" gracePeriod=30 Dec 03 13:19:05 crc kubenswrapper[4986]: I1203 13:19:05.347937 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 03 13:19:06 crc kubenswrapper[4986]: I1203 13:19:06.377826 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"297881ac-9b99-4d7d-9e59-4fb75c103648","Type":"ContainerStarted","Data":"3072c11eb696d3d7fa4689423374837726288a087ca5840b76937889f35b49e6"} Dec 03 13:19:06 crc kubenswrapper[4986]: I1203 13:19:06.378121 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"297881ac-9b99-4d7d-9e59-4fb75c103648","Type":"ContainerStarted","Data":"d846747bce8a52900f6329528d3cd5c167454609203817b3d46bfb2bbe0e94bc"} Dec 03 13:19:06 crc kubenswrapper[4986]: I1203 13:19:06.396829 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-59f49d79c7-qt4rk" Dec 03 13:19:06 crc kubenswrapper[4986]: I1203 13:19:06.468210 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5dc7cf89f4-56jhq"] Dec 03 13:19:06 crc kubenswrapper[4986]: I1203 13:19:06.469465 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5dc7cf89f4-56jhq" podUID="db075f00-81d9-4306-b054-968590fecd46" containerName="neutron-api" 
containerID="cri-o://e30eb661b49ebecd2db2d2fbfe82cf1580347da1a929b604b8f10d6570737631" gracePeriod=30 Dec 03 13:19:06 crc kubenswrapper[4986]: I1203 13:19:06.469963 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5dc7cf89f4-56jhq" podUID="db075f00-81d9-4306-b054-968590fecd46" containerName="neutron-httpd" containerID="cri-o://ba96fbeff228dff458eed02a6d65b1dd391f097349bb77b73501b560c516fc52" gracePeriod=30 Dec 03 13:19:07 crc kubenswrapper[4986]: I1203 13:19:07.418521 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b482987-51ed-43dd-9099-c54d824588bf","Type":"ContainerStarted","Data":"354617f24609bb90f74a5d56c92fdeb07833a70a411fb023342eb61f620a5844"} Dec 03 13:19:07 crc kubenswrapper[4986]: I1203 13:19:07.419027 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 03 13:19:07 crc kubenswrapper[4986]: I1203 13:19:07.421119 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"297881ac-9b99-4d7d-9e59-4fb75c103648","Type":"ContainerStarted","Data":"36e60787baacf78407a78351b54aff94c1be39480fce1e0278e4abecba1a2189"} Dec 03 13:19:07 crc kubenswrapper[4986]: I1203 13:19:07.426114 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 03 13:19:07 crc kubenswrapper[4986]: I1203 13:19:07.437439 4986 generic.go:334] "Generic (PLEG): container finished" podID="db075f00-81d9-4306-b054-968590fecd46" containerID="ba96fbeff228dff458eed02a6d65b1dd391f097349bb77b73501b560c516fc52" exitCode=0 Dec 03 13:19:07 crc kubenswrapper[4986]: I1203 13:19:07.437496 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5dc7cf89f4-56jhq" event={"ID":"db075f00-81d9-4306-b054-968590fecd46","Type":"ContainerDied","Data":"ba96fbeff228dff458eed02a6d65b1dd391f097349bb77b73501b560c516fc52"} Dec 03 13:19:07 crc kubenswrapper[4986]: I1203 
13:19:07.446426 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.38322259 podStartE2EDuration="7.44641266s" podCreationTimestamp="2025-12-03 13:19:00 +0000 UTC" firstStartedPulling="2025-12-03 13:19:01.968823624 +0000 UTC m=+1401.435254815" lastFinishedPulling="2025-12-03 13:19:06.032013694 +0000 UTC m=+1405.498444885" observedRunningTime="2025-12-03 13:19:07.441819996 +0000 UTC m=+1406.908251197" watchObservedRunningTime="2025-12-03 13:19:07.44641266 +0000 UTC m=+1406.912843851" Dec 03 13:19:07 crc kubenswrapper[4986]: I1203 13:19:07.469090 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.469073601 podStartE2EDuration="3.469073601s" podCreationTimestamp="2025-12-03 13:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:19:07.464399785 +0000 UTC m=+1406.930830986" watchObservedRunningTime="2025-12-03 13:19:07.469073601 +0000 UTC m=+1406.935504792" Dec 03 13:19:08 crc kubenswrapper[4986]: I1203 13:19:08.212999 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 03 13:19:08 crc kubenswrapper[4986]: I1203 13:19:08.279434 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5784cf869f-xrr75" Dec 03 13:19:08 crc kubenswrapper[4986]: I1203 13:19:08.360460 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-789c5c5cb7-4722d"] Dec 03 13:19:08 crc kubenswrapper[4986]: I1203 13:19:08.360829 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-789c5c5cb7-4722d" podUID="b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28" containerName="dnsmasq-dns" containerID="cri-o://941d7e6ffa8dcf723671459f75d3cab253afd3f5e8f93f0904aca73f7bd6be06" gracePeriod=10 Dec 03 
13:19:08 crc kubenswrapper[4986]: I1203 13:19:08.372160 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-db8cd8b46-ffl2g" Dec 03 13:19:08 crc kubenswrapper[4986]: I1203 13:19:08.402656 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-db8cd8b46-ffl2g" Dec 03 13:19:08 crc kubenswrapper[4986]: I1203 13:19:08.456222 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 03 13:19:08 crc kubenswrapper[4986]: I1203 13:19:08.527198 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 03 13:19:08 crc kubenswrapper[4986]: I1203 13:19:08.700545 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-868bb78845-npjxs" Dec 03 13:19:08 crc kubenswrapper[4986]: I1203 13:19:08.929796 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-789c5c5cb7-4722d" Dec 03 13:19:08 crc kubenswrapper[4986]: I1203 13:19:08.998112 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28-ovsdbserver-sb\") pod \"b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28\" (UID: \"b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28\") " Dec 03 13:19:08 crc kubenswrapper[4986]: I1203 13:19:08.998247 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28-dns-swift-storage-0\") pod \"b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28\" (UID: \"b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28\") " Dec 03 13:19:08 crc kubenswrapper[4986]: I1203 13:19:08.998272 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28-ovsdbserver-nb\") pod \"b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28\" (UID: \"b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28\") " Dec 03 13:19:08 crc kubenswrapper[4986]: I1203 13:19:08.998333 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28-dns-svc\") pod \"b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28\" (UID: \"b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28\") " Dec 03 13:19:08 crc kubenswrapper[4986]: I1203 13:19:08.998361 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28-config\") pod \"b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28\" (UID: \"b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28\") " Dec 03 13:19:08 crc kubenswrapper[4986]: I1203 13:19:08.998378 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28zkr\" (UniqueName: \"kubernetes.io/projected/b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28-kube-api-access-28zkr\") pod \"b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28\" (UID: \"b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28\") " Dec 03 13:19:09 crc kubenswrapper[4986]: I1203 13:19:09.017173 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28-kube-api-access-28zkr" (OuterVolumeSpecName: "kube-api-access-28zkr") pod "b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28" (UID: "b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28"). InnerVolumeSpecName "kube-api-access-28zkr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:19:09 crc kubenswrapper[4986]: I1203 13:19:09.064182 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28" (UID: "b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:19:09 crc kubenswrapper[4986]: I1203 13:19:09.067017 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28" (UID: "b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:19:09 crc kubenswrapper[4986]: I1203 13:19:09.071927 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28" (UID: "b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:19:09 crc kubenswrapper[4986]: I1203 13:19:09.074560 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28-config" (OuterVolumeSpecName: "config") pod "b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28" (UID: "b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:19:09 crc kubenswrapper[4986]: I1203 13:19:09.076310 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28" (UID: "b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:19:09 crc kubenswrapper[4986]: I1203 13:19:09.100426 4986 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:09 crc kubenswrapper[4986]: I1203 13:19:09.100456 4986 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:09 crc kubenswrapper[4986]: I1203 13:19:09.100466 4986 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:09 crc kubenswrapper[4986]: I1203 13:19:09.100475 4986 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28-config\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:09 crc kubenswrapper[4986]: I1203 13:19:09.100484 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28zkr\" (UniqueName: \"kubernetes.io/projected/b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28-kube-api-access-28zkr\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:09 crc kubenswrapper[4986]: I1203 13:19:09.100493 4986 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:09 crc kubenswrapper[4986]: I1203 13:19:09.122635 4986 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-cc774c568-phcpp" podUID="cee5d9b6-d11e-4bad-b013-196a4f401404" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.141:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.141:8443: connect: connection refused" Dec 03 13:19:09 crc kubenswrapper[4986]: I1203 13:19:09.455750 4986 generic.go:334] "Generic (PLEG): container finished" podID="cee5d9b6-d11e-4bad-b013-196a4f401404" containerID="b3a510936fd6eb78b8694de1aa0785b123dd535c543cc43a92b46a0c5eb19968" exitCode=0 Dec 03 13:19:09 crc kubenswrapper[4986]: I1203 13:19:09.455786 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cc774c568-phcpp" event={"ID":"cee5d9b6-d11e-4bad-b013-196a4f401404","Type":"ContainerDied","Data":"b3a510936fd6eb78b8694de1aa0785b123dd535c543cc43a92b46a0c5eb19968"} Dec 03 13:19:09 crc kubenswrapper[4986]: I1203 13:19:09.458813 4986 generic.go:334] "Generic (PLEG): container finished" podID="b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28" containerID="941d7e6ffa8dcf723671459f75d3cab253afd3f5e8f93f0904aca73f7bd6be06" exitCode=0 Dec 03 13:19:09 crc kubenswrapper[4986]: I1203 13:19:09.458852 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-789c5c5cb7-4722d" event={"ID":"b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28","Type":"ContainerDied","Data":"941d7e6ffa8dcf723671459f75d3cab253afd3f5e8f93f0904aca73f7bd6be06"} Dec 03 13:19:09 crc kubenswrapper[4986]: I1203 13:19:09.459635 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-789c5c5cb7-4722d" event={"ID":"b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28","Type":"ContainerDied","Data":"63d700d4a1f4a731dcbf762e1c5948ffdb8fc96b0ae9c649604f0a5f3d0c9cb7"} Dec 03 13:19:09 crc kubenswrapper[4986]: I1203 
13:19:09.458872 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-789c5c5cb7-4722d" Dec 03 13:19:09 crc kubenswrapper[4986]: I1203 13:19:09.459676 4986 scope.go:117] "RemoveContainer" containerID="941d7e6ffa8dcf723671459f75d3cab253afd3f5e8f93f0904aca73f7bd6be06" Dec 03 13:19:09 crc kubenswrapper[4986]: I1203 13:19:09.460956 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="aec700a6-d4c5-4c24-8726-16dc792d5658" containerName="cinder-scheduler" containerID="cri-o://691e7797cd057ca907ecec503fdb24521ec7774622ad57eae3fb3c829259d655" gracePeriod=30 Dec 03 13:19:09 crc kubenswrapper[4986]: I1203 13:19:09.461122 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="aec700a6-d4c5-4c24-8726-16dc792d5658" containerName="probe" containerID="cri-o://0da6ae24d795357e9372ccd524bb0486a2f41686c79f6226cec1d9938efe4612" gracePeriod=30 Dec 03 13:19:09 crc kubenswrapper[4986]: I1203 13:19:09.536591 4986 scope.go:117] "RemoveContainer" containerID="af7585f513bae018e351dcf4d3e5a466b52e06383153fcb7da49251b017d8b92" Dec 03 13:19:09 crc kubenswrapper[4986]: I1203 13:19:09.542001 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-789c5c5cb7-4722d"] Dec 03 13:19:09 crc kubenswrapper[4986]: I1203 13:19:09.549455 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-789c5c5cb7-4722d"] Dec 03 13:19:09 crc kubenswrapper[4986]: I1203 13:19:09.559094 4986 scope.go:117] "RemoveContainer" containerID="941d7e6ffa8dcf723671459f75d3cab253afd3f5e8f93f0904aca73f7bd6be06" Dec 03 13:19:09 crc kubenswrapper[4986]: E1203 13:19:09.559472 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"941d7e6ffa8dcf723671459f75d3cab253afd3f5e8f93f0904aca73f7bd6be06\": container with ID starting with 
941d7e6ffa8dcf723671459f75d3cab253afd3f5e8f93f0904aca73f7bd6be06 not found: ID does not exist" containerID="941d7e6ffa8dcf723671459f75d3cab253afd3f5e8f93f0904aca73f7bd6be06" Dec 03 13:19:09 crc kubenswrapper[4986]: I1203 13:19:09.559508 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"941d7e6ffa8dcf723671459f75d3cab253afd3f5e8f93f0904aca73f7bd6be06"} err="failed to get container status \"941d7e6ffa8dcf723671459f75d3cab253afd3f5e8f93f0904aca73f7bd6be06\": rpc error: code = NotFound desc = could not find container \"941d7e6ffa8dcf723671459f75d3cab253afd3f5e8f93f0904aca73f7bd6be06\": container with ID starting with 941d7e6ffa8dcf723671459f75d3cab253afd3f5e8f93f0904aca73f7bd6be06 not found: ID does not exist" Dec 03 13:19:09 crc kubenswrapper[4986]: I1203 13:19:09.559534 4986 scope.go:117] "RemoveContainer" containerID="af7585f513bae018e351dcf4d3e5a466b52e06383153fcb7da49251b017d8b92" Dec 03 13:19:09 crc kubenswrapper[4986]: E1203 13:19:09.559819 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af7585f513bae018e351dcf4d3e5a466b52e06383153fcb7da49251b017d8b92\": container with ID starting with af7585f513bae018e351dcf4d3e5a466b52e06383153fcb7da49251b017d8b92 not found: ID does not exist" containerID="af7585f513bae018e351dcf4d3e5a466b52e06383153fcb7da49251b017d8b92" Dec 03 13:19:09 crc kubenswrapper[4986]: I1203 13:19:09.559871 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af7585f513bae018e351dcf4d3e5a466b52e06383153fcb7da49251b017d8b92"} err="failed to get container status \"af7585f513bae018e351dcf4d3e5a466b52e06383153fcb7da49251b017d8b92\": rpc error: code = NotFound desc = could not find container \"af7585f513bae018e351dcf4d3e5a466b52e06383153fcb7da49251b017d8b92\": container with ID starting with af7585f513bae018e351dcf4d3e5a466b52e06383153fcb7da49251b017d8b92 not found: ID does not 
exist"
Dec 03 13:19:10 crc kubenswrapper[4986]: I1203 13:19:10.317772 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Dec 03 13:19:10 crc kubenswrapper[4986]: I1203 13:19:10.318013 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Dec 03 13:19:10 crc kubenswrapper[4986]: I1203 13:19:10.391958 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Dec 03 13:19:10 crc kubenswrapper[4986]: I1203 13:19:10.395874 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Dec 03 13:19:10 crc kubenswrapper[4986]: I1203 13:19:10.475427 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Dec 03 13:19:10 crc kubenswrapper[4986]: I1203 13:19:10.475480 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Dec 03 13:19:10 crc kubenswrapper[4986]: I1203 13:19:10.514534 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Dec 03 13:19:10 crc kubenswrapper[4986]: I1203 13:19:10.514580 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Dec 03 13:19:10 crc kubenswrapper[4986]: I1203 13:19:10.562073 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Dec 03 13:19:10 crc kubenswrapper[4986]: I1203 13:19:10.562929 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Dec 03 13:19:10 crc kubenswrapper[4986]: I1203 13:19:10.955418 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28" path="/var/lib/kubelet/pods/b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28/volumes"
Dec 03 13:19:11 crc kubenswrapper[4986]: I1203 13:19:11.490197 4986 generic.go:334] "Generic (PLEG): container finished" podID="aec700a6-d4c5-4c24-8726-16dc792d5658" containerID="0da6ae24d795357e9372ccd524bb0486a2f41686c79f6226cec1d9938efe4612" exitCode=0
Dec 03 13:19:11 crc kubenswrapper[4986]: I1203 13:19:11.490345 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"aec700a6-d4c5-4c24-8726-16dc792d5658","Type":"ContainerDied","Data":"0da6ae24d795357e9372ccd524bb0486a2f41686c79f6226cec1d9938efe4612"}
Dec 03 13:19:11 crc kubenswrapper[4986]: I1203 13:19:11.491211 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Dec 03 13:19:11 crc kubenswrapper[4986]: I1203 13:19:11.491254 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Dec 03 13:19:12 crc kubenswrapper[4986]: I1203 13:19:12.392227 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Dec 03 13:19:12 crc kubenswrapper[4986]: I1203 13:19:12.424877 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Dec 03 13:19:12 crc kubenswrapper[4986]: I1203 13:19:12.723481 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Dec 03 13:19:12 crc kubenswrapper[4986]: E1203 13:19:12.723893 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28" containerName="init"
Dec 03 13:19:12 crc kubenswrapper[4986]: I1203 13:19:12.723907 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28" containerName="init"
Dec 03 13:19:12 crc kubenswrapper[4986]: E1203 13:19:12.723927 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28" containerName="dnsmasq-dns"
Dec 03 13:19:12 crc kubenswrapper[4986]: I1203 13:19:12.723935 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28" containerName="dnsmasq-dns"
Dec 03 13:19:12 crc kubenswrapper[4986]: I1203 13:19:12.727421 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1a7aa7a-cf10-4bc1-8e84-fc83ea863d28" containerName="dnsmasq-dns"
Dec 03 13:19:12 crc kubenswrapper[4986]: I1203 13:19:12.728248 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Dec 03 13:19:12 crc kubenswrapper[4986]: I1203 13:19:12.731707 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Dec 03 13:19:12 crc kubenswrapper[4986]: I1203 13:19:12.731993 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Dec 03 13:19:12 crc kubenswrapper[4986]: I1203 13:19:12.732568 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-rnkqt"
Dec 03 13:19:12 crc kubenswrapper[4986]: I1203 13:19:12.745887 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Dec 03 13:19:12 crc kubenswrapper[4986]: I1203 13:19:12.882674 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-549gw\" (UniqueName: \"kubernetes.io/projected/fffc079e-8f2e-4f96-b7cd-362e90b8d375-kube-api-access-549gw\") pod \"openstackclient\" (UID: \"fffc079e-8f2e-4f96-b7cd-362e90b8d375\") " pod="openstack/openstackclient"
Dec 03 13:19:12 crc kubenswrapper[4986]: I1203 13:19:12.883054 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fffc079e-8f2e-4f96-b7cd-362e90b8d375-combined-ca-bundle\") pod \"openstackclient\" (UID: \"fffc079e-8f2e-4f96-b7cd-362e90b8d375\") " pod="openstack/openstackclient"
Dec 03 13:19:12 crc kubenswrapper[4986]: I1203 13:19:12.883140 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fffc079e-8f2e-4f96-b7cd-362e90b8d375-openstack-config-secret\") pod \"openstackclient\" (UID: \"fffc079e-8f2e-4f96-b7cd-362e90b8d375\") " pod="openstack/openstackclient"
Dec 03 13:19:12 crc kubenswrapper[4986]: I1203 13:19:12.883178 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fffc079e-8f2e-4f96-b7cd-362e90b8d375-openstack-config\") pod \"openstackclient\" (UID: \"fffc079e-8f2e-4f96-b7cd-362e90b8d375\") " pod="openstack/openstackclient"
Dec 03 13:19:12 crc kubenswrapper[4986]: I1203 13:19:12.976990 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"]
Dec 03 13:19:12 crc kubenswrapper[4986]: E1203 13:19:12.977664 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle kube-api-access-549gw openstack-config openstack-config-secret], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/openstackclient" podUID="fffc079e-8f2e-4f96-b7cd-362e90b8d375"
Dec 03 13:19:12 crc kubenswrapper[4986]: I1203 13:19:12.986364 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-549gw\" (UniqueName: \"kubernetes.io/projected/fffc079e-8f2e-4f96-b7cd-362e90b8d375-kube-api-access-549gw\") pod \"openstackclient\" (UID: \"fffc079e-8f2e-4f96-b7cd-362e90b8d375\") " pod="openstack/openstackclient"
Dec 03 13:19:12 crc kubenswrapper[4986]: I1203 13:19:12.986431 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fffc079e-8f2e-4f96-b7cd-362e90b8d375-combined-ca-bundle\") pod \"openstackclient\" (UID: \"fffc079e-8f2e-4f96-b7cd-362e90b8d375\") " pod="openstack/openstackclient"
Dec 03 13:19:12 crc kubenswrapper[4986]: I1203 13:19:12.986506 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fffc079e-8f2e-4f96-b7cd-362e90b8d375-openstack-config-secret\") pod \"openstackclient\" (UID: \"fffc079e-8f2e-4f96-b7cd-362e90b8d375\") " pod="openstack/openstackclient"
Dec 03 13:19:12 crc kubenswrapper[4986]: I1203 13:19:12.986544 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fffc079e-8f2e-4f96-b7cd-362e90b8d375-openstack-config\") pod \"openstackclient\" (UID: \"fffc079e-8f2e-4f96-b7cd-362e90b8d375\") " pod="openstack/openstackclient"
Dec 03 13:19:12 crc kubenswrapper[4986]: I1203 13:19:12.987267 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fffc079e-8f2e-4f96-b7cd-362e90b8d375-openstack-config\") pod \"openstackclient\" (UID: \"fffc079e-8f2e-4f96-b7cd-362e90b8d375\") " pod="openstack/openstackclient"
Dec 03 13:19:12 crc kubenswrapper[4986]: I1203 13:19:12.988894 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"]
Dec 03 13:19:12 crc kubenswrapper[4986]: I1203 13:19:12.996917 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fffc079e-8f2e-4f96-b7cd-362e90b8d375-openstack-config-secret\") pod \"openstackclient\" (UID: \"fffc079e-8f2e-4f96-b7cd-362e90b8d375\") " pod="openstack/openstackclient"
Dec 03 13:19:12 crc kubenswrapper[4986]: I1203 13:19:12.997053 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fffc079e-8f2e-4f96-b7cd-362e90b8d375-combined-ca-bundle\") pod \"openstackclient\" (UID: \"fffc079e-8f2e-4f96-b7cd-362e90b8d375\") " pod="openstack/openstackclient"
Dec 03 13:19:12 crc kubenswrapper[4986]: E1203 13:19:12.997374 4986 projected.go:194] Error preparing data for projected volume kube-api-access-549gw for pod openstack/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: User "system:node:crc" cannot create resource "serviceaccounts/token" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object
Dec 03 13:19:12 crc kubenswrapper[4986]: E1203 13:19:12.997469 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fffc079e-8f2e-4f96-b7cd-362e90b8d375-kube-api-access-549gw podName:fffc079e-8f2e-4f96-b7cd-362e90b8d375 nodeName:}" failed. No retries permitted until 2025-12-03 13:19:13.497438987 +0000 UTC m=+1412.963870248 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-549gw" (UniqueName: "kubernetes.io/projected/fffc079e-8f2e-4f96-b7cd-362e90b8d375-kube-api-access-549gw") pod "openstackclient" (UID: "fffc079e-8f2e-4f96-b7cd-362e90b8d375") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: User "system:node:crc" cannot create resource "serviceaccounts/token" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.103259 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.104668 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.108231 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5dc7cf89f4-56jhq"
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.119350 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.190505 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a2644ca5-1db7-491d-949e-e8810934a296-openstack-config-secret\") pod \"openstackclient\" (UID: \"a2644ca5-1db7-491d-949e-e8810934a296\") " pod="openstack/openstackclient"
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.190553 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a2644ca5-1db7-491d-949e-e8810934a296-openstack-config\") pod \"openstackclient\" (UID: \"a2644ca5-1db7-491d-949e-e8810934a296\") " pod="openstack/openstackclient"
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.190600 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2644ca5-1db7-491d-949e-e8810934a296-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a2644ca5-1db7-491d-949e-e8810934a296\") " pod="openstack/openstackclient"
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.190636 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz796\" (UniqueName: \"kubernetes.io/projected/a2644ca5-1db7-491d-949e-e8810934a296-kube-api-access-zz796\") pod \"openstackclient\" (UID: \"a2644ca5-1db7-491d-949e-e8810934a296\") " pod="openstack/openstackclient"
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.291778 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/db075f00-81d9-4306-b054-968590fecd46-config\") pod \"db075f00-81d9-4306-b054-968590fecd46\" (UID: \"db075f00-81d9-4306-b054-968590fecd46\") "
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.292198 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db075f00-81d9-4306-b054-968590fecd46-combined-ca-bundle\") pod \"db075f00-81d9-4306-b054-968590fecd46\" (UID: \"db075f00-81d9-4306-b054-968590fecd46\") "
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.292221 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6sv6\" (UniqueName: \"kubernetes.io/projected/db075f00-81d9-4306-b054-968590fecd46-kube-api-access-b6sv6\") pod \"db075f00-81d9-4306-b054-968590fecd46\" (UID: \"db075f00-81d9-4306-b054-968590fecd46\") "
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.292273 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/db075f00-81d9-4306-b054-968590fecd46-httpd-config\") pod \"db075f00-81d9-4306-b054-968590fecd46\" (UID: \"db075f00-81d9-4306-b054-968590fecd46\") "
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.292379 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/db075f00-81d9-4306-b054-968590fecd46-ovndb-tls-certs\") pod \"db075f00-81d9-4306-b054-968590fecd46\" (UID: \"db075f00-81d9-4306-b054-968590fecd46\") "
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.292643 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a2644ca5-1db7-491d-949e-e8810934a296-openstack-config-secret\") pod \"openstackclient\" (UID: \"a2644ca5-1db7-491d-949e-e8810934a296\") " pod="openstack/openstackclient"
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.292679 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a2644ca5-1db7-491d-949e-e8810934a296-openstack-config\") pod \"openstackclient\" (UID: \"a2644ca5-1db7-491d-949e-e8810934a296\") " pod="openstack/openstackclient"
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.292721 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2644ca5-1db7-491d-949e-e8810934a296-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a2644ca5-1db7-491d-949e-e8810934a296\") " pod="openstack/openstackclient"
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.292757 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zz796\" (UniqueName: \"kubernetes.io/projected/a2644ca5-1db7-491d-949e-e8810934a296-kube-api-access-zz796\") pod \"openstackclient\" (UID: \"a2644ca5-1db7-491d-949e-e8810934a296\") " pod="openstack/openstackclient"
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.294640 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a2644ca5-1db7-491d-949e-e8810934a296-openstack-config\") pod \"openstackclient\" (UID: \"a2644ca5-1db7-491d-949e-e8810934a296\") " pod="openstack/openstackclient"
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.296492 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2644ca5-1db7-491d-949e-e8810934a296-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a2644ca5-1db7-491d-949e-e8810934a296\") " pod="openstack/openstackclient"
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.296720 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a2644ca5-1db7-491d-949e-e8810934a296-openstack-config-secret\") pod \"openstackclient\" (UID: \"a2644ca5-1db7-491d-949e-e8810934a296\") " pod="openstack/openstackclient"
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.303378 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db075f00-81d9-4306-b054-968590fecd46-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "db075f00-81d9-4306-b054-968590fecd46" (UID: "db075f00-81d9-4306-b054-968590fecd46"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.303437 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db075f00-81d9-4306-b054-968590fecd46-kube-api-access-b6sv6" (OuterVolumeSpecName: "kube-api-access-b6sv6") pod "db075f00-81d9-4306-b054-968590fecd46" (UID: "db075f00-81d9-4306-b054-968590fecd46"). InnerVolumeSpecName "kube-api-access-b6sv6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.312812 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zz796\" (UniqueName: \"kubernetes.io/projected/a2644ca5-1db7-491d-949e-e8810934a296-kube-api-access-zz796\") pod \"openstackclient\" (UID: \"a2644ca5-1db7-491d-949e-e8810934a296\") " pod="openstack/openstackclient"
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.354150 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db075f00-81d9-4306-b054-968590fecd46-config" (OuterVolumeSpecName: "config") pod "db075f00-81d9-4306-b054-968590fecd46" (UID: "db075f00-81d9-4306-b054-968590fecd46"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.356843 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db075f00-81d9-4306-b054-968590fecd46-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db075f00-81d9-4306-b054-968590fecd46" (UID: "db075f00-81d9-4306-b054-968590fecd46"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.394135 4986 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db075f00-81d9-4306-b054-968590fecd46-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.394397 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6sv6\" (UniqueName: \"kubernetes.io/projected/db075f00-81d9-4306-b054-968590fecd46-kube-api-access-b6sv6\") on node \"crc\" DevicePath \"\""
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.394531 4986 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/db075f00-81d9-4306-b054-968590fecd46-httpd-config\") on node \"crc\" DevicePath \"\""
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.394620 4986 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/db075f00-81d9-4306-b054-968590fecd46-config\") on node \"crc\" DevicePath \"\""
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.409937 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db075f00-81d9-4306-b054-968590fecd46-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "db075f00-81d9-4306-b054-968590fecd46" (UID: "db075f00-81d9-4306-b054-968590fecd46"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.426729 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.496045 4986 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/db075f00-81d9-4306-b054-968590fecd46-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.519471 4986 generic.go:334] "Generic (PLEG): container finished" podID="db075f00-81d9-4306-b054-968590fecd46" containerID="e30eb661b49ebecd2db2d2fbfe82cf1580347da1a929b604b8f10d6570737631" exitCode=0
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.519568 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.520442 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5dc7cf89f4-56jhq"
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.530000 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5dc7cf89f4-56jhq" event={"ID":"db075f00-81d9-4306-b054-968590fecd46","Type":"ContainerDied","Data":"e30eb661b49ebecd2db2d2fbfe82cf1580347da1a929b604b8f10d6570737631"}
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.530041 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5dc7cf89f4-56jhq" event={"ID":"db075f00-81d9-4306-b054-968590fecd46","Type":"ContainerDied","Data":"1c60ba0b171ea799ca25d0b61f1cb785fac05bd48caf6803e642c78db42cc520"}
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.530058 4986 scope.go:117] "RemoveContainer" containerID="ba96fbeff228dff458eed02a6d65b1dd391f097349bb77b73501b560c516fc52"
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.534469 4986 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.534690 4986 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.542137 4986 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="fffc079e-8f2e-4f96-b7cd-362e90b8d375" podUID="a2644ca5-1db7-491d-949e-e8810934a296"
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.558704 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.578578 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5dc7cf89f4-56jhq"]
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.587002 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5dc7cf89f4-56jhq"]
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.593738 4986 scope.go:117] "RemoveContainer" containerID="e30eb661b49ebecd2db2d2fbfe82cf1580347da1a929b604b8f10d6570737631"
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.599947 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-549gw\" (UniqueName: \"kubernetes.io/projected/fffc079e-8f2e-4f96-b7cd-362e90b8d375-kube-api-access-549gw\") pod \"openstackclient\" (UID: \"fffc079e-8f2e-4f96-b7cd-362e90b8d375\") " pod="openstack/openstackclient"
Dec 03 13:19:13 crc kubenswrapper[4986]: E1203 13:19:13.601873 4986 projected.go:194] Error preparing data for projected volume kube-api-access-549gw for pod openstack/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (fffc079e-8f2e-4f96-b7cd-362e90b8d375) does not match the UID in record. The object might have been deleted and then recreated
Dec 03 13:19:13 crc kubenswrapper[4986]: E1203 13:19:13.601942 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fffc079e-8f2e-4f96-b7cd-362e90b8d375-kube-api-access-549gw podName:fffc079e-8f2e-4f96-b7cd-362e90b8d375 nodeName:}" failed. No retries permitted until 2025-12-03 13:19:14.601921017 +0000 UTC m=+1414.068352208 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-549gw" (UniqueName: "kubernetes.io/projected/fffc079e-8f2e-4f96-b7cd-362e90b8d375-kube-api-access-549gw") pod "openstackclient" (UID: "fffc079e-8f2e-4f96-b7cd-362e90b8d375") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (fffc079e-8f2e-4f96-b7cd-362e90b8d375) does not match the UID in record. The object might have been deleted and then recreated
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.701646 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.701821 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fffc079e-8f2e-4f96-b7cd-362e90b8d375-combined-ca-bundle\") pod \"fffc079e-8f2e-4f96-b7cd-362e90b8d375\" (UID: \"fffc079e-8f2e-4f96-b7cd-362e90b8d375\") "
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.701915 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fffc079e-8f2e-4f96-b7cd-362e90b8d375-openstack-config-secret\") pod \"fffc079e-8f2e-4f96-b7cd-362e90b8d375\" (UID: \"fffc079e-8f2e-4f96-b7cd-362e90b8d375\") "
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.702988 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fffc079e-8f2e-4f96-b7cd-362e90b8d375-openstack-config\") pod \"fffc079e-8f2e-4f96-b7cd-362e90b8d375\" (UID: \"fffc079e-8f2e-4f96-b7cd-362e90b8d375\") "
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.703826 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-549gw\" (UniqueName: \"kubernetes.io/projected/fffc079e-8f2e-4f96-b7cd-362e90b8d375-kube-api-access-549gw\") on node \"crc\" DevicePath \"\""
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.705625 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fffc079e-8f2e-4f96-b7cd-362e90b8d375-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "fffc079e-8f2e-4f96-b7cd-362e90b8d375" (UID: "fffc079e-8f2e-4f96-b7cd-362e90b8d375"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.708570 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fffc079e-8f2e-4f96-b7cd-362e90b8d375-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "fffc079e-8f2e-4f96-b7cd-362e90b8d375" (UID: "fffc079e-8f2e-4f96-b7cd-362e90b8d375"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.709333 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fffc079e-8f2e-4f96-b7cd-362e90b8d375-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fffc079e-8f2e-4f96-b7cd-362e90b8d375" (UID: "fffc079e-8f2e-4f96-b7cd-362e90b8d375"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.773410 4986 scope.go:117] "RemoveContainer" containerID="ba96fbeff228dff458eed02a6d65b1dd391f097349bb77b73501b560c516fc52"
Dec 03 13:19:13 crc kubenswrapper[4986]: E1203 13:19:13.773760 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba96fbeff228dff458eed02a6d65b1dd391f097349bb77b73501b560c516fc52\": container with ID starting with ba96fbeff228dff458eed02a6d65b1dd391f097349bb77b73501b560c516fc52 not found: ID does not exist" containerID="ba96fbeff228dff458eed02a6d65b1dd391f097349bb77b73501b560c516fc52"
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.773783 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba96fbeff228dff458eed02a6d65b1dd391f097349bb77b73501b560c516fc52"} err="failed to get container status \"ba96fbeff228dff458eed02a6d65b1dd391f097349bb77b73501b560c516fc52\": rpc error: code = NotFound desc = could not find container \"ba96fbeff228dff458eed02a6d65b1dd391f097349bb77b73501b560c516fc52\": container with ID starting with ba96fbeff228dff458eed02a6d65b1dd391f097349bb77b73501b560c516fc52 not found: ID does not exist"
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.773802 4986 scope.go:117] "RemoveContainer" containerID="e30eb661b49ebecd2db2d2fbfe82cf1580347da1a929b604b8f10d6570737631"
Dec 03 13:19:13 crc kubenswrapper[4986]: E1203 13:19:13.774214 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e30eb661b49ebecd2db2d2fbfe82cf1580347da1a929b604b8f10d6570737631\": container with ID starting with e30eb661b49ebecd2db2d2fbfe82cf1580347da1a929b604b8f10d6570737631 not found: ID does not exist" containerID="e30eb661b49ebecd2db2d2fbfe82cf1580347da1a929b604b8f10d6570737631"
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.774233 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e30eb661b49ebecd2db2d2fbfe82cf1580347da1a929b604b8f10d6570737631"} err="failed to get container status \"e30eb661b49ebecd2db2d2fbfe82cf1580347da1a929b604b8f10d6570737631\": rpc error: code = NotFound desc = could not find container \"e30eb661b49ebecd2db2d2fbfe82cf1580347da1a929b604b8f10d6570737631\": container with ID starting with e30eb661b49ebecd2db2d2fbfe82cf1580347da1a929b604b8f10d6570737631 not found: ID does not exist"
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.805583 4986 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fffc079e-8f2e-4f96-b7cd-362e90b8d375-openstack-config\") on node \"crc\" DevicePath \"\""
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.805615 4986 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fffc079e-8f2e-4f96-b7cd-362e90b8d375-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.805624 4986 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fffc079e-8f2e-4f96-b7cd-362e90b8d375-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.925018 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Dec 03 13:19:13 crc kubenswrapper[4986]: W1203 13:19:13.929972 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2644ca5_1db7_491d_949e_e8810934a296.slice/crio-ac297f1be675345886aade4529cd961a3fbd38d487925bdd035dc3e6b9d35d0a WatchSource:0}: Error finding container ac297f1be675345886aade4529cd961a3fbd38d487925bdd035dc3e6b9d35d0a: Status 404 returned error can't find the container with id ac297f1be675345886aade4529cd961a3fbd38d487925bdd035dc3e6b9d35d0a
Dec 03 13:19:13 crc kubenswrapper[4986]: I1203 13:19:13.940721 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Dec 03 13:19:14 crc kubenswrapper[4986]: I1203 13:19:14.405703 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Dec 03 13:19:14 crc kubenswrapper[4986]: I1203 13:19:14.518923 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aec700a6-d4c5-4c24-8726-16dc792d5658-etc-machine-id\") pod \"aec700a6-d4c5-4c24-8726-16dc792d5658\" (UID: \"aec700a6-d4c5-4c24-8726-16dc792d5658\") "
Dec 03 13:19:14 crc kubenswrapper[4986]: I1203 13:19:14.519046 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aec700a6-d4c5-4c24-8726-16dc792d5658-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "aec700a6-d4c5-4c24-8726-16dc792d5658" (UID: "aec700a6-d4c5-4c24-8726-16dc792d5658"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 13:19:14 crc kubenswrapper[4986]: I1203 13:19:14.519319 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aec700a6-d4c5-4c24-8726-16dc792d5658-combined-ca-bundle\") pod \"aec700a6-d4c5-4c24-8726-16dc792d5658\" (UID: \"aec700a6-d4c5-4c24-8726-16dc792d5658\") "
Dec 03 13:19:14 crc kubenswrapper[4986]: I1203 13:19:14.519359 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aec700a6-d4c5-4c24-8726-16dc792d5658-scripts\") pod \"aec700a6-d4c5-4c24-8726-16dc792d5658\" (UID: \"aec700a6-d4c5-4c24-8726-16dc792d5658\") "
Dec 03 13:19:14 crc kubenswrapper[4986]: I1203 13:19:14.519392 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aec700a6-d4c5-4c24-8726-16dc792d5658-config-data\") pod \"aec700a6-d4c5-4c24-8726-16dc792d5658\" (UID: \"aec700a6-d4c5-4c24-8726-16dc792d5658\") "
Dec 03 13:19:14 crc kubenswrapper[4986]: I1203 13:19:14.519515 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2ctx\" (UniqueName: \"kubernetes.io/projected/aec700a6-d4c5-4c24-8726-16dc792d5658-kube-api-access-r2ctx\") pod \"aec700a6-d4c5-4c24-8726-16dc792d5658\" (UID: \"aec700a6-d4c5-4c24-8726-16dc792d5658\") "
Dec 03 13:19:14 crc kubenswrapper[4986]: I1203 13:19:14.519814 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aec700a6-d4c5-4c24-8726-16dc792d5658-config-data-custom\") pod \"aec700a6-d4c5-4c24-8726-16dc792d5658\" (UID: \"aec700a6-d4c5-4c24-8726-16dc792d5658\") "
Dec 03 13:19:14 crc kubenswrapper[4986]: I1203 13:19:14.520154 4986 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aec700a6-d4c5-4c24-8726-16dc792d5658-etc-machine-id\") on node \"crc\" DevicePath \"\""
Dec 03 13:19:14 crc kubenswrapper[4986]: I1203 13:19:14.524212 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aec700a6-d4c5-4c24-8726-16dc792d5658-kube-api-access-r2ctx" (OuterVolumeSpecName: "kube-api-access-r2ctx") pod "aec700a6-d4c5-4c24-8726-16dc792d5658" (UID: "aec700a6-d4c5-4c24-8726-16dc792d5658"). InnerVolumeSpecName "kube-api-access-r2ctx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 13:19:14 crc kubenswrapper[4986]: I1203 13:19:14.529354 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aec700a6-d4c5-4c24-8726-16dc792d5658-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "aec700a6-d4c5-4c24-8726-16dc792d5658" (UID: "aec700a6-d4c5-4c24-8726-16dc792d5658"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 13:19:14 crc kubenswrapper[4986]: I1203 13:19:14.530427 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aec700a6-d4c5-4c24-8726-16dc792d5658-scripts" (OuterVolumeSpecName: "scripts") pod "aec700a6-d4c5-4c24-8726-16dc792d5658" (UID: "aec700a6-d4c5-4c24-8726-16dc792d5658"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 13:19:14 crc kubenswrapper[4986]: I1203 13:19:14.547514 4986 generic.go:334] "Generic (PLEG): container finished" podID="aec700a6-d4c5-4c24-8726-16dc792d5658" containerID="691e7797cd057ca907ecec503fdb24521ec7774622ad57eae3fb3c829259d655" exitCode=0
Dec 03 13:19:14 crc kubenswrapper[4986]: I1203 13:19:14.547605 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"aec700a6-d4c5-4c24-8726-16dc792d5658","Type":"ContainerDied","Data":"691e7797cd057ca907ecec503fdb24521ec7774622ad57eae3fb3c829259d655"}
Dec 03 13:19:14 crc kubenswrapper[4986]: I1203 13:19:14.547634 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"aec700a6-d4c5-4c24-8726-16dc792d5658","Type":"ContainerDied","Data":"f3fbe8fa122668c911f3927a6c787ae5482c0c9eab5469b3a441a00e13708810"}
Dec 03 13:19:14 crc kubenswrapper[4986]: I1203 13:19:14.547650 4986 scope.go:117] "RemoveContainer" containerID="0da6ae24d795357e9372ccd524bb0486a2f41686c79f6226cec1d9938efe4612"
Dec 03 13:19:14 crc kubenswrapper[4986]: I1203 13:19:14.548123 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Dec 03 13:19:14 crc kubenswrapper[4986]: I1203 13:19:14.567897 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"a2644ca5-1db7-491d-949e-e8810934a296","Type":"ContainerStarted","Data":"ac297f1be675345886aade4529cd961a3fbd38d487925bdd035dc3e6b9d35d0a"}
Dec 03 13:19:14 crc kubenswrapper[4986]: I1203 13:19:14.567918 4986 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/openstackclient" Dec 03 13:19:14 crc kubenswrapper[4986]: I1203 13:19:14.572324 4986 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="fffc079e-8f2e-4f96-b7cd-362e90b8d375" podUID="a2644ca5-1db7-491d-949e-e8810934a296" Dec 03 13:19:14 crc kubenswrapper[4986]: I1203 13:19:14.577180 4986 scope.go:117] "RemoveContainer" containerID="691e7797cd057ca907ecec503fdb24521ec7774622ad57eae3fb3c829259d655" Dec 03 13:19:14 crc kubenswrapper[4986]: I1203 13:19:14.579639 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aec700a6-d4c5-4c24-8726-16dc792d5658-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aec700a6-d4c5-4c24-8726-16dc792d5658" (UID: "aec700a6-d4c5-4c24-8726-16dc792d5658"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:19:14 crc kubenswrapper[4986]: I1203 13:19:14.616362 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aec700a6-d4c5-4c24-8726-16dc792d5658-config-data" (OuterVolumeSpecName: "config-data") pod "aec700a6-d4c5-4c24-8726-16dc792d5658" (UID: "aec700a6-d4c5-4c24-8726-16dc792d5658"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:19:14 crc kubenswrapper[4986]: I1203 13:19:14.623055 4986 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aec700a6-d4c5-4c24-8726-16dc792d5658-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:14 crc kubenswrapper[4986]: I1203 13:19:14.623249 4986 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aec700a6-d4c5-4c24-8726-16dc792d5658-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:14 crc kubenswrapper[4986]: I1203 13:19:14.623354 4986 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aec700a6-d4c5-4c24-8726-16dc792d5658-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:14 crc kubenswrapper[4986]: I1203 13:19:14.623440 4986 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aec700a6-d4c5-4c24-8726-16dc792d5658-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:14 crc kubenswrapper[4986]: I1203 13:19:14.623558 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2ctx\" (UniqueName: \"kubernetes.io/projected/aec700a6-d4c5-4c24-8726-16dc792d5658-kube-api-access-r2ctx\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:14 crc kubenswrapper[4986]: I1203 13:19:14.721220 4986 scope.go:117] "RemoveContainer" containerID="0da6ae24d795357e9372ccd524bb0486a2f41686c79f6226cec1d9938efe4612" Dec 03 13:19:14 crc kubenswrapper[4986]: E1203 13:19:14.721713 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0da6ae24d795357e9372ccd524bb0486a2f41686c79f6226cec1d9938efe4612\": container with ID starting with 0da6ae24d795357e9372ccd524bb0486a2f41686c79f6226cec1d9938efe4612 not found: ID does not exist" 
containerID="0da6ae24d795357e9372ccd524bb0486a2f41686c79f6226cec1d9938efe4612" Dec 03 13:19:14 crc kubenswrapper[4986]: I1203 13:19:14.721764 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0da6ae24d795357e9372ccd524bb0486a2f41686c79f6226cec1d9938efe4612"} err="failed to get container status \"0da6ae24d795357e9372ccd524bb0486a2f41686c79f6226cec1d9938efe4612\": rpc error: code = NotFound desc = could not find container \"0da6ae24d795357e9372ccd524bb0486a2f41686c79f6226cec1d9938efe4612\": container with ID starting with 0da6ae24d795357e9372ccd524bb0486a2f41686c79f6226cec1d9938efe4612 not found: ID does not exist" Dec 03 13:19:14 crc kubenswrapper[4986]: I1203 13:19:14.721785 4986 scope.go:117] "RemoveContainer" containerID="691e7797cd057ca907ecec503fdb24521ec7774622ad57eae3fb3c829259d655" Dec 03 13:19:14 crc kubenswrapper[4986]: E1203 13:19:14.722298 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"691e7797cd057ca907ecec503fdb24521ec7774622ad57eae3fb3c829259d655\": container with ID starting with 691e7797cd057ca907ecec503fdb24521ec7774622ad57eae3fb3c829259d655 not found: ID does not exist" containerID="691e7797cd057ca907ecec503fdb24521ec7774622ad57eae3fb3c829259d655" Dec 03 13:19:14 crc kubenswrapper[4986]: I1203 13:19:14.722347 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"691e7797cd057ca907ecec503fdb24521ec7774622ad57eae3fb3c829259d655"} err="failed to get container status \"691e7797cd057ca907ecec503fdb24521ec7774622ad57eae3fb3c829259d655\": rpc error: code = NotFound desc = could not find container \"691e7797cd057ca907ecec503fdb24521ec7774622ad57eae3fb3c829259d655\": container with ID starting with 691e7797cd057ca907ecec503fdb24521ec7774622ad57eae3fb3c829259d655 not found: ID does not exist" Dec 03 13:19:14 crc kubenswrapper[4986]: I1203 13:19:14.884020 4986 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 03 13:19:14 crc kubenswrapper[4986]: I1203 13:19:14.892218 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 03 13:19:14 crc kubenswrapper[4986]: I1203 13:19:14.908792 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 03 13:19:14 crc kubenswrapper[4986]: E1203 13:19:14.909451 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aec700a6-d4c5-4c24-8726-16dc792d5658" containerName="cinder-scheduler" Dec 03 13:19:14 crc kubenswrapper[4986]: I1203 13:19:14.909515 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="aec700a6-d4c5-4c24-8726-16dc792d5658" containerName="cinder-scheduler" Dec 03 13:19:14 crc kubenswrapper[4986]: E1203 13:19:14.909597 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aec700a6-d4c5-4c24-8726-16dc792d5658" containerName="probe" Dec 03 13:19:14 crc kubenswrapper[4986]: I1203 13:19:14.909646 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="aec700a6-d4c5-4c24-8726-16dc792d5658" containerName="probe" Dec 03 13:19:14 crc kubenswrapper[4986]: E1203 13:19:14.909713 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db075f00-81d9-4306-b054-968590fecd46" containerName="neutron-httpd" Dec 03 13:19:14 crc kubenswrapper[4986]: I1203 13:19:14.909765 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="db075f00-81d9-4306-b054-968590fecd46" containerName="neutron-httpd" Dec 03 13:19:14 crc kubenswrapper[4986]: E1203 13:19:14.909848 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db075f00-81d9-4306-b054-968590fecd46" containerName="neutron-api" Dec 03 13:19:14 crc kubenswrapper[4986]: I1203 13:19:14.909901 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="db075f00-81d9-4306-b054-968590fecd46" containerName="neutron-api" Dec 03 13:19:14 crc kubenswrapper[4986]: I1203 13:19:14.914743 4986 
memory_manager.go:354] "RemoveStaleState removing state" podUID="aec700a6-d4c5-4c24-8726-16dc792d5658" containerName="cinder-scheduler" Dec 03 13:19:14 crc kubenswrapper[4986]: I1203 13:19:14.914813 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="db075f00-81d9-4306-b054-968590fecd46" containerName="neutron-httpd" Dec 03 13:19:14 crc kubenswrapper[4986]: I1203 13:19:14.914878 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="db075f00-81d9-4306-b054-968590fecd46" containerName="neutron-api" Dec 03 13:19:14 crc kubenswrapper[4986]: I1203 13:19:14.914948 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="aec700a6-d4c5-4c24-8726-16dc792d5658" containerName="probe" Dec 03 13:19:14 crc kubenswrapper[4986]: I1203 13:19:14.915887 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 03 13:19:14 crc kubenswrapper[4986]: I1203 13:19:14.919151 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 03 13:19:14 crc kubenswrapper[4986]: I1203 13:19:14.927777 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 03 13:19:14 crc kubenswrapper[4986]: I1203 13:19:14.955480 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aec700a6-d4c5-4c24-8726-16dc792d5658" path="/var/lib/kubelet/pods/aec700a6-d4c5-4c24-8726-16dc792d5658/volumes" Dec 03 13:19:14 crc kubenswrapper[4986]: I1203 13:19:14.956227 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db075f00-81d9-4306-b054-968590fecd46" path="/var/lib/kubelet/pods/db075f00-81d9-4306-b054-968590fecd46/volumes" Dec 03 13:19:14 crc kubenswrapper[4986]: I1203 13:19:14.956829 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fffc079e-8f2e-4f96-b7cd-362e90b8d375" path="/var/lib/kubelet/pods/fffc079e-8f2e-4f96-b7cd-362e90b8d375/volumes" Dec 03 13:19:15 crc 
kubenswrapper[4986]: I1203 13:19:15.029816 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf917ae1-694f-4e1e-8d85-6452ac6c4e0e-config-data\") pod \"cinder-scheduler-0\" (UID: \"cf917ae1-694f-4e1e-8d85-6452ac6c4e0e\") " pod="openstack/cinder-scheduler-0" Dec 03 13:19:15 crc kubenswrapper[4986]: I1203 13:19:15.030014 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf917ae1-694f-4e1e-8d85-6452ac6c4e0e-scripts\") pod \"cinder-scheduler-0\" (UID: \"cf917ae1-694f-4e1e-8d85-6452ac6c4e0e\") " pod="openstack/cinder-scheduler-0" Dec 03 13:19:15 crc kubenswrapper[4986]: I1203 13:19:15.030052 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf917ae1-694f-4e1e-8d85-6452ac6c4e0e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"cf917ae1-694f-4e1e-8d85-6452ac6c4e0e\") " pod="openstack/cinder-scheduler-0" Dec 03 13:19:15 crc kubenswrapper[4986]: I1203 13:19:15.030082 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkv24\" (UniqueName: \"kubernetes.io/projected/cf917ae1-694f-4e1e-8d85-6452ac6c4e0e-kube-api-access-kkv24\") pod \"cinder-scheduler-0\" (UID: \"cf917ae1-694f-4e1e-8d85-6452ac6c4e0e\") " pod="openstack/cinder-scheduler-0" Dec 03 13:19:15 crc kubenswrapper[4986]: I1203 13:19:15.030165 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cf917ae1-694f-4e1e-8d85-6452ac6c4e0e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"cf917ae1-694f-4e1e-8d85-6452ac6c4e0e\") " pod="openstack/cinder-scheduler-0" Dec 03 13:19:15 crc kubenswrapper[4986]: I1203 13:19:15.030210 4986 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cf917ae1-694f-4e1e-8d85-6452ac6c4e0e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"cf917ae1-694f-4e1e-8d85-6452ac6c4e0e\") " pod="openstack/cinder-scheduler-0" Dec 03 13:19:15 crc kubenswrapper[4986]: I1203 13:19:15.134988 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cf917ae1-694f-4e1e-8d85-6452ac6c4e0e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"cf917ae1-694f-4e1e-8d85-6452ac6c4e0e\") " pod="openstack/cinder-scheduler-0" Dec 03 13:19:15 crc kubenswrapper[4986]: I1203 13:19:15.134607 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cf917ae1-694f-4e1e-8d85-6452ac6c4e0e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"cf917ae1-694f-4e1e-8d85-6452ac6c4e0e\") " pod="openstack/cinder-scheduler-0" Dec 03 13:19:15 crc kubenswrapper[4986]: I1203 13:19:15.139431 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf917ae1-694f-4e1e-8d85-6452ac6c4e0e-config-data\") pod \"cinder-scheduler-0\" (UID: \"cf917ae1-694f-4e1e-8d85-6452ac6c4e0e\") " pod="openstack/cinder-scheduler-0" Dec 03 13:19:15 crc kubenswrapper[4986]: I1203 13:19:15.144406 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf917ae1-694f-4e1e-8d85-6452ac6c4e0e-scripts\") pod \"cinder-scheduler-0\" (UID: \"cf917ae1-694f-4e1e-8d85-6452ac6c4e0e\") " pod="openstack/cinder-scheduler-0" Dec 03 13:19:15 crc kubenswrapper[4986]: I1203 13:19:15.144473 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cf917ae1-694f-4e1e-8d85-6452ac6c4e0e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"cf917ae1-694f-4e1e-8d85-6452ac6c4e0e\") " pod="openstack/cinder-scheduler-0" Dec 03 13:19:15 crc kubenswrapper[4986]: I1203 13:19:15.144507 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkv24\" (UniqueName: \"kubernetes.io/projected/cf917ae1-694f-4e1e-8d85-6452ac6c4e0e-kube-api-access-kkv24\") pod \"cinder-scheduler-0\" (UID: \"cf917ae1-694f-4e1e-8d85-6452ac6c4e0e\") " pod="openstack/cinder-scheduler-0" Dec 03 13:19:15 crc kubenswrapper[4986]: I1203 13:19:15.144640 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cf917ae1-694f-4e1e-8d85-6452ac6c4e0e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"cf917ae1-694f-4e1e-8d85-6452ac6c4e0e\") " pod="openstack/cinder-scheduler-0" Dec 03 13:19:15 crc kubenswrapper[4986]: I1203 13:19:15.159194 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf917ae1-694f-4e1e-8d85-6452ac6c4e0e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"cf917ae1-694f-4e1e-8d85-6452ac6c4e0e\") " pod="openstack/cinder-scheduler-0" Dec 03 13:19:15 crc kubenswrapper[4986]: I1203 13:19:15.160098 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf917ae1-694f-4e1e-8d85-6452ac6c4e0e-config-data\") pod \"cinder-scheduler-0\" (UID: \"cf917ae1-694f-4e1e-8d85-6452ac6c4e0e\") " pod="openstack/cinder-scheduler-0" Dec 03 13:19:15 crc kubenswrapper[4986]: I1203 13:19:15.160182 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cf917ae1-694f-4e1e-8d85-6452ac6c4e0e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"cf917ae1-694f-4e1e-8d85-6452ac6c4e0e\") " 
pod="openstack/cinder-scheduler-0" Dec 03 13:19:15 crc kubenswrapper[4986]: I1203 13:19:15.162567 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf917ae1-694f-4e1e-8d85-6452ac6c4e0e-scripts\") pod \"cinder-scheduler-0\" (UID: \"cf917ae1-694f-4e1e-8d85-6452ac6c4e0e\") " pod="openstack/cinder-scheduler-0" Dec 03 13:19:15 crc kubenswrapper[4986]: I1203 13:19:15.163065 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkv24\" (UniqueName: \"kubernetes.io/projected/cf917ae1-694f-4e1e-8d85-6452ac6c4e0e-kube-api-access-kkv24\") pod \"cinder-scheduler-0\" (UID: \"cf917ae1-694f-4e1e-8d85-6452ac6c4e0e\") " pod="openstack/cinder-scheduler-0" Dec 03 13:19:15 crc kubenswrapper[4986]: I1203 13:19:15.232189 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 03 13:19:15 crc kubenswrapper[4986]: W1203 13:19:15.787673 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf917ae1_694f_4e1e_8d85_6452ac6c4e0e.slice/crio-767f044997bcdf0f1fba1e032dc99bd7c58b65897ab02a1c553ae7567c5371b0 WatchSource:0}: Error finding container 767f044997bcdf0f1fba1e032dc99bd7c58b65897ab02a1c553ae7567c5371b0: Status 404 returned error can't find the container with id 767f044997bcdf0f1fba1e032dc99bd7c58b65897ab02a1c553ae7567c5371b0 Dec 03 13:19:15 crc kubenswrapper[4986]: I1203 13:19:15.790720 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 03 13:19:16 crc kubenswrapper[4986]: I1203 13:19:16.594235 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"cf917ae1-694f-4e1e-8d85-6452ac6c4e0e","Type":"ContainerStarted","Data":"6bbcbc3ad8b648fbe04eb5560e22b7984190755b9d9dd7a25f3d4857ba586a2e"} Dec 03 13:19:16 crc kubenswrapper[4986]: I1203 13:19:16.594502 4986 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"cf917ae1-694f-4e1e-8d85-6452ac6c4e0e","Type":"ContainerStarted","Data":"767f044997bcdf0f1fba1e032dc99bd7c58b65897ab02a1c553ae7567c5371b0"} Dec 03 13:19:16 crc kubenswrapper[4986]: I1203 13:19:16.957432 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6d976bf467-mjgvz"] Dec 03 13:19:16 crc kubenswrapper[4986]: I1203 13:19:16.958914 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6d976bf467-mjgvz"] Dec 03 13:19:16 crc kubenswrapper[4986]: I1203 13:19:16.959006 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6d976bf467-mjgvz" Dec 03 13:19:16 crc kubenswrapper[4986]: I1203 13:19:16.963828 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 03 13:19:16 crc kubenswrapper[4986]: I1203 13:19:16.964110 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Dec 03 13:19:16 crc kubenswrapper[4986]: I1203 13:19:16.964257 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Dec 03 13:19:17 crc kubenswrapper[4986]: I1203 13:19:17.078567 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d70793fc-c91d-4ddc-8a21-bcd243434f73-public-tls-certs\") pod \"swift-proxy-6d976bf467-mjgvz\" (UID: \"d70793fc-c91d-4ddc-8a21-bcd243434f73\") " pod="openstack/swift-proxy-6d976bf467-mjgvz" Dec 03 13:19:17 crc kubenswrapper[4986]: I1203 13:19:17.079225 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d70793fc-c91d-4ddc-8a21-bcd243434f73-config-data\") pod \"swift-proxy-6d976bf467-mjgvz\" (UID: 
\"d70793fc-c91d-4ddc-8a21-bcd243434f73\") " pod="openstack/swift-proxy-6d976bf467-mjgvz" Dec 03 13:19:17 crc kubenswrapper[4986]: I1203 13:19:17.079304 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdxqq\" (UniqueName: \"kubernetes.io/projected/d70793fc-c91d-4ddc-8a21-bcd243434f73-kube-api-access-qdxqq\") pod \"swift-proxy-6d976bf467-mjgvz\" (UID: \"d70793fc-c91d-4ddc-8a21-bcd243434f73\") " pod="openstack/swift-proxy-6d976bf467-mjgvz" Dec 03 13:19:17 crc kubenswrapper[4986]: I1203 13:19:17.079348 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d70793fc-c91d-4ddc-8a21-bcd243434f73-etc-swift\") pod \"swift-proxy-6d976bf467-mjgvz\" (UID: \"d70793fc-c91d-4ddc-8a21-bcd243434f73\") " pod="openstack/swift-proxy-6d976bf467-mjgvz" Dec 03 13:19:17 crc kubenswrapper[4986]: I1203 13:19:17.079364 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d70793fc-c91d-4ddc-8a21-bcd243434f73-run-httpd\") pod \"swift-proxy-6d976bf467-mjgvz\" (UID: \"d70793fc-c91d-4ddc-8a21-bcd243434f73\") " pod="openstack/swift-proxy-6d976bf467-mjgvz" Dec 03 13:19:17 crc kubenswrapper[4986]: I1203 13:19:17.079438 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d70793fc-c91d-4ddc-8a21-bcd243434f73-combined-ca-bundle\") pod \"swift-proxy-6d976bf467-mjgvz\" (UID: \"d70793fc-c91d-4ddc-8a21-bcd243434f73\") " pod="openstack/swift-proxy-6d976bf467-mjgvz" Dec 03 13:19:17 crc kubenswrapper[4986]: I1203 13:19:17.079718 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/d70793fc-c91d-4ddc-8a21-bcd243434f73-internal-tls-certs\") pod \"swift-proxy-6d976bf467-mjgvz\" (UID: \"d70793fc-c91d-4ddc-8a21-bcd243434f73\") " pod="openstack/swift-proxy-6d976bf467-mjgvz" Dec 03 13:19:17 crc kubenswrapper[4986]: I1203 13:19:17.079744 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d70793fc-c91d-4ddc-8a21-bcd243434f73-log-httpd\") pod \"swift-proxy-6d976bf467-mjgvz\" (UID: \"d70793fc-c91d-4ddc-8a21-bcd243434f73\") " pod="openstack/swift-proxy-6d976bf467-mjgvz" Dec 03 13:19:17 crc kubenswrapper[4986]: I1203 13:19:17.181681 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d70793fc-c91d-4ddc-8a21-bcd243434f73-combined-ca-bundle\") pod \"swift-proxy-6d976bf467-mjgvz\" (UID: \"d70793fc-c91d-4ddc-8a21-bcd243434f73\") " pod="openstack/swift-proxy-6d976bf467-mjgvz" Dec 03 13:19:17 crc kubenswrapper[4986]: I1203 13:19:17.181755 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d70793fc-c91d-4ddc-8a21-bcd243434f73-internal-tls-certs\") pod \"swift-proxy-6d976bf467-mjgvz\" (UID: \"d70793fc-c91d-4ddc-8a21-bcd243434f73\") " pod="openstack/swift-proxy-6d976bf467-mjgvz" Dec 03 13:19:17 crc kubenswrapper[4986]: I1203 13:19:17.181772 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d70793fc-c91d-4ddc-8a21-bcd243434f73-log-httpd\") pod \"swift-proxy-6d976bf467-mjgvz\" (UID: \"d70793fc-c91d-4ddc-8a21-bcd243434f73\") " pod="openstack/swift-proxy-6d976bf467-mjgvz" Dec 03 13:19:17 crc kubenswrapper[4986]: I1203 13:19:17.181813 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/d70793fc-c91d-4ddc-8a21-bcd243434f73-public-tls-certs\") pod \"swift-proxy-6d976bf467-mjgvz\" (UID: \"d70793fc-c91d-4ddc-8a21-bcd243434f73\") " pod="openstack/swift-proxy-6d976bf467-mjgvz" Dec 03 13:19:17 crc kubenswrapper[4986]: I1203 13:19:17.181830 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d70793fc-c91d-4ddc-8a21-bcd243434f73-config-data\") pod \"swift-proxy-6d976bf467-mjgvz\" (UID: \"d70793fc-c91d-4ddc-8a21-bcd243434f73\") " pod="openstack/swift-proxy-6d976bf467-mjgvz" Dec 03 13:19:17 crc kubenswrapper[4986]: I1203 13:19:17.181863 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdxqq\" (UniqueName: \"kubernetes.io/projected/d70793fc-c91d-4ddc-8a21-bcd243434f73-kube-api-access-qdxqq\") pod \"swift-proxy-6d976bf467-mjgvz\" (UID: \"d70793fc-c91d-4ddc-8a21-bcd243434f73\") " pod="openstack/swift-proxy-6d976bf467-mjgvz" Dec 03 13:19:17 crc kubenswrapper[4986]: I1203 13:19:17.181891 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d70793fc-c91d-4ddc-8a21-bcd243434f73-etc-swift\") pod \"swift-proxy-6d976bf467-mjgvz\" (UID: \"d70793fc-c91d-4ddc-8a21-bcd243434f73\") " pod="openstack/swift-proxy-6d976bf467-mjgvz" Dec 03 13:19:17 crc kubenswrapper[4986]: I1203 13:19:17.181906 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d70793fc-c91d-4ddc-8a21-bcd243434f73-run-httpd\") pod \"swift-proxy-6d976bf467-mjgvz\" (UID: \"d70793fc-c91d-4ddc-8a21-bcd243434f73\") " pod="openstack/swift-proxy-6d976bf467-mjgvz" Dec 03 13:19:17 crc kubenswrapper[4986]: I1203 13:19:17.182552 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d70793fc-c91d-4ddc-8a21-bcd243434f73-run-httpd\") pod 
\"swift-proxy-6d976bf467-mjgvz\" (UID: \"d70793fc-c91d-4ddc-8a21-bcd243434f73\") " pod="openstack/swift-proxy-6d976bf467-mjgvz" Dec 03 13:19:17 crc kubenswrapper[4986]: I1203 13:19:17.182957 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d70793fc-c91d-4ddc-8a21-bcd243434f73-log-httpd\") pod \"swift-proxy-6d976bf467-mjgvz\" (UID: \"d70793fc-c91d-4ddc-8a21-bcd243434f73\") " pod="openstack/swift-proxy-6d976bf467-mjgvz" Dec 03 13:19:17 crc kubenswrapper[4986]: I1203 13:19:17.187553 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d70793fc-c91d-4ddc-8a21-bcd243434f73-combined-ca-bundle\") pod \"swift-proxy-6d976bf467-mjgvz\" (UID: \"d70793fc-c91d-4ddc-8a21-bcd243434f73\") " pod="openstack/swift-proxy-6d976bf467-mjgvz" Dec 03 13:19:17 crc kubenswrapper[4986]: I1203 13:19:17.188275 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d70793fc-c91d-4ddc-8a21-bcd243434f73-config-data\") pod \"swift-proxy-6d976bf467-mjgvz\" (UID: \"d70793fc-c91d-4ddc-8a21-bcd243434f73\") " pod="openstack/swift-proxy-6d976bf467-mjgvz" Dec 03 13:19:17 crc kubenswrapper[4986]: I1203 13:19:17.196486 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d70793fc-c91d-4ddc-8a21-bcd243434f73-internal-tls-certs\") pod \"swift-proxy-6d976bf467-mjgvz\" (UID: \"d70793fc-c91d-4ddc-8a21-bcd243434f73\") " pod="openstack/swift-proxy-6d976bf467-mjgvz" Dec 03 13:19:17 crc kubenswrapper[4986]: I1203 13:19:17.197607 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdxqq\" (UniqueName: \"kubernetes.io/projected/d70793fc-c91d-4ddc-8a21-bcd243434f73-kube-api-access-qdxqq\") pod \"swift-proxy-6d976bf467-mjgvz\" (UID: \"d70793fc-c91d-4ddc-8a21-bcd243434f73\") " 
pod="openstack/swift-proxy-6d976bf467-mjgvz" Dec 03 13:19:17 crc kubenswrapper[4986]: I1203 13:19:17.198916 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d70793fc-c91d-4ddc-8a21-bcd243434f73-public-tls-certs\") pod \"swift-proxy-6d976bf467-mjgvz\" (UID: \"d70793fc-c91d-4ddc-8a21-bcd243434f73\") " pod="openstack/swift-proxy-6d976bf467-mjgvz" Dec 03 13:19:17 crc kubenswrapper[4986]: I1203 13:19:17.207626 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d70793fc-c91d-4ddc-8a21-bcd243434f73-etc-swift\") pod \"swift-proxy-6d976bf467-mjgvz\" (UID: \"d70793fc-c91d-4ddc-8a21-bcd243434f73\") " pod="openstack/swift-proxy-6d976bf467-mjgvz" Dec 03 13:19:17 crc kubenswrapper[4986]: I1203 13:19:17.284632 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6d976bf467-mjgvz" Dec 03 13:19:17 crc kubenswrapper[4986]: I1203 13:19:17.433707 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Dec 03 13:19:17 crc kubenswrapper[4986]: I1203 13:19:17.610228 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"cf917ae1-694f-4e1e-8d85-6452ac6c4e0e","Type":"ContainerStarted","Data":"45eb97aebb49ed6498d75e1c376b91b9d8ee3fd0ead4853ab8473a56f385ebf1"} Dec 03 13:19:17 crc kubenswrapper[4986]: I1203 13:19:17.645373 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.6453509840000002 podStartE2EDuration="3.645350984s" podCreationTimestamp="2025-12-03 13:19:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:19:17.638779807 +0000 UTC m=+1417.105211008" watchObservedRunningTime="2025-12-03 13:19:17.645350984 +0000 
UTC m=+1417.111782185" Dec 03 13:19:17 crc kubenswrapper[4986]: I1203 13:19:17.932959 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6d976bf467-mjgvz"] Dec 03 13:19:17 crc kubenswrapper[4986]: W1203 13:19:17.945402 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd70793fc_c91d_4ddc_8a21_bcd243434f73.slice/crio-7cad9b7f49c2a59ce5a6dc0629e621db06312a1c78bca570a5849bbd5a98046c WatchSource:0}: Error finding container 7cad9b7f49c2a59ce5a6dc0629e621db06312a1c78bca570a5849bbd5a98046c: Status 404 returned error can't find the container with id 7cad9b7f49c2a59ce5a6dc0629e621db06312a1c78bca570a5849bbd5a98046c Dec 03 13:19:18 crc kubenswrapper[4986]: I1203 13:19:18.625056 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6d976bf467-mjgvz" event={"ID":"d70793fc-c91d-4ddc-8a21-bcd243434f73","Type":"ContainerStarted","Data":"5189a19cf42990f6372f5a444aebd5620e6326a20696c93664641d10d999e289"} Dec 03 13:19:18 crc kubenswrapper[4986]: I1203 13:19:18.625721 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6d976bf467-mjgvz" event={"ID":"d70793fc-c91d-4ddc-8a21-bcd243434f73","Type":"ContainerStarted","Data":"b84c26c7627d2e80a6aa85c3029908431a9e88d97ec9987004169607dbca5600"} Dec 03 13:19:18 crc kubenswrapper[4986]: I1203 13:19:18.625740 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6d976bf467-mjgvz" event={"ID":"d70793fc-c91d-4ddc-8a21-bcd243434f73","Type":"ContainerStarted","Data":"7cad9b7f49c2a59ce5a6dc0629e621db06312a1c78bca570a5849bbd5a98046c"} Dec 03 13:19:18 crc kubenswrapper[4986]: I1203 13:19:18.626039 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6d976bf467-mjgvz" Dec 03 13:19:18 crc kubenswrapper[4986]: I1203 13:19:18.626067 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/swift-proxy-6d976bf467-mjgvz" Dec 03 13:19:19 crc kubenswrapper[4986]: I1203 13:19:19.122429 4986 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-cc774c568-phcpp" podUID="cee5d9b6-d11e-4bad-b013-196a4f401404" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.141:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.141:8443: connect: connection refused" Dec 03 13:19:20 crc kubenswrapper[4986]: I1203 13:19:20.053933 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6d976bf467-mjgvz" podStartSLOduration=4.053908056 podStartE2EDuration="4.053908056s" podCreationTimestamp="2025-12-03 13:19:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:19:18.652525672 +0000 UTC m=+1418.118956883" watchObservedRunningTime="2025-12-03 13:19:20.053908056 +0000 UTC m=+1419.520339267" Dec 03 13:19:20 crc kubenswrapper[4986]: I1203 13:19:20.057530 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 03 13:19:20 crc kubenswrapper[4986]: I1203 13:19:20.057796 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0b482987-51ed-43dd-9099-c54d824588bf" containerName="ceilometer-central-agent" containerID="cri-o://5e5cc346550a3fec515bf0c4791854b5def68d62d3eb0b944e57908e69062b1a" gracePeriod=30 Dec 03 13:19:20 crc kubenswrapper[4986]: I1203 13:19:20.057925 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0b482987-51ed-43dd-9099-c54d824588bf" containerName="proxy-httpd" containerID="cri-o://354617f24609bb90f74a5d56c92fdeb07833a70a411fb023342eb61f620a5844" gracePeriod=30 Dec 03 13:19:20 crc kubenswrapper[4986]: I1203 13:19:20.057961 4986 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="0b482987-51ed-43dd-9099-c54d824588bf" containerName="sg-core" containerID="cri-o://eb0afe609b2a9c9c8c4a8eacdf3df3343aa2f4ce2503fa8629c7c87e0396c0f2" gracePeriod=30 Dec 03 13:19:20 crc kubenswrapper[4986]: I1203 13:19:20.057994 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0b482987-51ed-43dd-9099-c54d824588bf" containerName="ceilometer-notification-agent" containerID="cri-o://4f3957e372e4fd6e32d5915c5d40b448972c4b354892124fb0ec0cb57465fcf9" gracePeriod=30 Dec 03 13:19:20 crc kubenswrapper[4986]: I1203 13:19:20.066398 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 03 13:19:20 crc kubenswrapper[4986]: I1203 13:19:20.232809 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 03 13:19:20 crc kubenswrapper[4986]: I1203 13:19:20.648508 4986 generic.go:334] "Generic (PLEG): container finished" podID="0b482987-51ed-43dd-9099-c54d824588bf" containerID="354617f24609bb90f74a5d56c92fdeb07833a70a411fb023342eb61f620a5844" exitCode=0 Dec 03 13:19:20 crc kubenswrapper[4986]: I1203 13:19:20.648546 4986 generic.go:334] "Generic (PLEG): container finished" podID="0b482987-51ed-43dd-9099-c54d824588bf" containerID="eb0afe609b2a9c9c8c4a8eacdf3df3343aa2f4ce2503fa8629c7c87e0396c0f2" exitCode=2 Dec 03 13:19:20 crc kubenswrapper[4986]: I1203 13:19:20.648560 4986 generic.go:334] "Generic (PLEG): container finished" podID="0b482987-51ed-43dd-9099-c54d824588bf" containerID="5e5cc346550a3fec515bf0c4791854b5def68d62d3eb0b944e57908e69062b1a" exitCode=0 Dec 03 13:19:20 crc kubenswrapper[4986]: I1203 13:19:20.648574 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b482987-51ed-43dd-9099-c54d824588bf","Type":"ContainerDied","Data":"354617f24609bb90f74a5d56c92fdeb07833a70a411fb023342eb61f620a5844"} Dec 03 13:19:20 crc 
kubenswrapper[4986]: I1203 13:19:20.648635 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b482987-51ed-43dd-9099-c54d824588bf","Type":"ContainerDied","Data":"eb0afe609b2a9c9c8c4a8eacdf3df3343aa2f4ce2503fa8629c7c87e0396c0f2"} Dec 03 13:19:20 crc kubenswrapper[4986]: I1203 13:19:20.648649 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b482987-51ed-43dd-9099-c54d824588bf","Type":"ContainerDied","Data":"5e5cc346550a3fec515bf0c4791854b5def68d62d3eb0b944e57908e69062b1a"} Dec 03 13:19:24 crc kubenswrapper[4986]: I1203 13:19:24.072386 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 03 13:19:24 crc kubenswrapper[4986]: I1203 13:19:24.073446 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="4ea00525-8d87-4a9e-b1ce-15a73b6bedd1" containerName="kube-state-metrics" containerID="cri-o://2b839fea51f99e9668279fe36bf0c93319d7ee0e42824ac01703df9a4aa0ecc5" gracePeriod=30 Dec 03 13:19:24 crc kubenswrapper[4986]: I1203 13:19:24.687112 4986 generic.go:334] "Generic (PLEG): container finished" podID="4ea00525-8d87-4a9e-b1ce-15a73b6bedd1" containerID="2b839fea51f99e9668279fe36bf0c93319d7ee0e42824ac01703df9a4aa0ecc5" exitCode=2 Dec 03 13:19:24 crc kubenswrapper[4986]: I1203 13:19:24.687170 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4ea00525-8d87-4a9e-b1ce-15a73b6bedd1","Type":"ContainerDied","Data":"2b839fea51f99e9668279fe36bf0c93319d7ee0e42824ac01703df9a4aa0ecc5"} Dec 03 13:19:25 crc kubenswrapper[4986]: I1203 13:19:25.380636 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 13:19:25 crc kubenswrapper[4986]: I1203 13:19:25.381132 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" 
podUID="d4dae8bb-1f5e-446c-9820-769e77414c04" containerName="glance-log" containerID="cri-o://4c8bc6062e737df3ded631623dd55f56169f875e8ff065cfeb88edc561757def" gracePeriod=30 Dec 03 13:19:25 crc kubenswrapper[4986]: I1203 13:19:25.381192 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="d4dae8bb-1f5e-446c-9820-769e77414c04" containerName="glance-httpd" containerID="cri-o://cad60ee62b6c93be511446f0006c6f3f5fac38db2d6eabae98e5c3439c853b7b" gracePeriod=30 Dec 03 13:19:25 crc kubenswrapper[4986]: I1203 13:19:25.550379 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 03 13:19:25 crc kubenswrapper[4986]: I1203 13:19:25.714843 4986 generic.go:334] "Generic (PLEG): container finished" podID="d4dae8bb-1f5e-446c-9820-769e77414c04" containerID="4c8bc6062e737df3ded631623dd55f56169f875e8ff065cfeb88edc561757def" exitCode=143 Dec 03 13:19:25 crc kubenswrapper[4986]: I1203 13:19:25.714918 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d4dae8bb-1f5e-446c-9820-769e77414c04","Type":"ContainerDied","Data":"4c8bc6062e737df3ded631623dd55f56169f875e8ff065cfeb88edc561757def"} Dec 03 13:19:25 crc kubenswrapper[4986]: I1203 13:19:25.742669 4986 generic.go:334] "Generic (PLEG): container finished" podID="0b482987-51ed-43dd-9099-c54d824588bf" containerID="4f3957e372e4fd6e32d5915c5d40b448972c4b354892124fb0ec0cb57465fcf9" exitCode=0 Dec 03 13:19:25 crc kubenswrapper[4986]: I1203 13:19:25.742718 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b482987-51ed-43dd-9099-c54d824588bf","Type":"ContainerDied","Data":"4f3957e372e4fd6e32d5915c5d40b448972c4b354892124fb0ec0cb57465fcf9"} Dec 03 13:19:25 crc kubenswrapper[4986]: I1203 13:19:25.756782 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 03 13:19:25 crc kubenswrapper[4986]: I1203 13:19:25.768867 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-nqdzr"] Dec 03 13:19:25 crc kubenswrapper[4986]: E1203 13:19:25.769462 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ea00525-8d87-4a9e-b1ce-15a73b6bedd1" containerName="kube-state-metrics" Dec 03 13:19:25 crc kubenswrapper[4986]: I1203 13:19:25.769478 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ea00525-8d87-4a9e-b1ce-15a73b6bedd1" containerName="kube-state-metrics" Dec 03 13:19:25 crc kubenswrapper[4986]: I1203 13:19:25.769685 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ea00525-8d87-4a9e-b1ce-15a73b6bedd1" containerName="kube-state-metrics" Dec 03 13:19:25 crc kubenswrapper[4986]: I1203 13:19:25.770294 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-nqdzr" Dec 03 13:19:25 crc kubenswrapper[4986]: I1203 13:19:25.786240 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-nqdzr"] Dec 03 13:19:25 crc kubenswrapper[4986]: I1203 13:19:25.806262 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 03 13:19:25 crc kubenswrapper[4986]: I1203 13:19:25.865906 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxspb\" (UniqueName: \"kubernetes.io/projected/4ea00525-8d87-4a9e-b1ce-15a73b6bedd1-kube-api-access-jxspb\") pod \"4ea00525-8d87-4a9e-b1ce-15a73b6bedd1\" (UID: \"4ea00525-8d87-4a9e-b1ce-15a73b6bedd1\") " Dec 03 13:19:25 crc kubenswrapper[4986]: I1203 13:19:25.866452 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svzzz\" (UniqueName: \"kubernetes.io/projected/c3245c21-420c-442d-934f-73f6917fcb21-kube-api-access-svzzz\") pod \"nova-api-db-create-nqdzr\" (UID: \"c3245c21-420c-442d-934f-73f6917fcb21\") " pod="openstack/nova-api-db-create-nqdzr" Dec 03 13:19:25 crc kubenswrapper[4986]: I1203 13:19:25.866574 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3245c21-420c-442d-934f-73f6917fcb21-operator-scripts\") pod \"nova-api-db-create-nqdzr\" (UID: \"c3245c21-420c-442d-934f-73f6917fcb21\") " pod="openstack/nova-api-db-create-nqdzr" Dec 03 13:19:25 crc kubenswrapper[4986]: I1203 13:19:25.870295 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-wv2h7"] Dec 03 13:19:25 crc kubenswrapper[4986]: E1203 13:19:25.872758 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b482987-51ed-43dd-9099-c54d824588bf" containerName="proxy-httpd" Dec 03 13:19:25 crc kubenswrapper[4986]: I1203 13:19:25.872785 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b482987-51ed-43dd-9099-c54d824588bf" containerName="proxy-httpd" Dec 03 13:19:25 crc kubenswrapper[4986]: E1203 13:19:25.872798 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b482987-51ed-43dd-9099-c54d824588bf" 
containerName="ceilometer-central-agent" Dec 03 13:19:25 crc kubenswrapper[4986]: I1203 13:19:25.872807 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b482987-51ed-43dd-9099-c54d824588bf" containerName="ceilometer-central-agent" Dec 03 13:19:25 crc kubenswrapper[4986]: E1203 13:19:25.872826 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b482987-51ed-43dd-9099-c54d824588bf" containerName="sg-core" Dec 03 13:19:25 crc kubenswrapper[4986]: I1203 13:19:25.872833 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b482987-51ed-43dd-9099-c54d824588bf" containerName="sg-core" Dec 03 13:19:25 crc kubenswrapper[4986]: E1203 13:19:25.872842 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b482987-51ed-43dd-9099-c54d824588bf" containerName="ceilometer-notification-agent" Dec 03 13:19:25 crc kubenswrapper[4986]: I1203 13:19:25.872847 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b482987-51ed-43dd-9099-c54d824588bf" containerName="ceilometer-notification-agent" Dec 03 13:19:25 crc kubenswrapper[4986]: I1203 13:19:25.873012 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b482987-51ed-43dd-9099-c54d824588bf" containerName="ceilometer-central-agent" Dec 03 13:19:25 crc kubenswrapper[4986]: I1203 13:19:25.873039 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b482987-51ed-43dd-9099-c54d824588bf" containerName="sg-core" Dec 03 13:19:25 crc kubenswrapper[4986]: I1203 13:19:25.873053 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b482987-51ed-43dd-9099-c54d824588bf" containerName="proxy-httpd" Dec 03 13:19:25 crc kubenswrapper[4986]: I1203 13:19:25.873060 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b482987-51ed-43dd-9099-c54d824588bf" containerName="ceilometer-notification-agent" Dec 03 13:19:25 crc kubenswrapper[4986]: I1203 13:19:25.873705 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-wv2h7" Dec 03 13:19:25 crc kubenswrapper[4986]: I1203 13:19:25.875480 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ea00525-8d87-4a9e-b1ce-15a73b6bedd1-kube-api-access-jxspb" (OuterVolumeSpecName: "kube-api-access-jxspb") pod "4ea00525-8d87-4a9e-b1ce-15a73b6bedd1" (UID: "4ea00525-8d87-4a9e-b1ce-15a73b6bedd1"). InnerVolumeSpecName "kube-api-access-jxspb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:19:25 crc kubenswrapper[4986]: I1203 13:19:25.882839 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-wv2h7"] Dec 03 13:19:25 crc kubenswrapper[4986]: I1203 13:19:25.893091 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-22db-account-create-update-ftg9x"] Dec 03 13:19:25 crc kubenswrapper[4986]: I1203 13:19:25.894618 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-22db-account-create-update-ftg9x" Dec 03 13:19:25 crc kubenswrapper[4986]: I1203 13:19:25.899111 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Dec 03 13:19:25 crc kubenswrapper[4986]: I1203 13:19:25.902652 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-22db-account-create-update-ftg9x"] Dec 03 13:19:25 crc kubenswrapper[4986]: I1203 13:19:25.947378 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-qbwm4"] Dec 03 13:19:25 crc kubenswrapper[4986]: I1203 13:19:25.948642 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-qbwm4" Dec 03 13:19:25 crc kubenswrapper[4986]: I1203 13:19:25.957507 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-qbwm4"] Dec 03 13:19:25 crc kubenswrapper[4986]: I1203 13:19:25.969913 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b482987-51ed-43dd-9099-c54d824588bf-scripts\") pod \"0b482987-51ed-43dd-9099-c54d824588bf\" (UID: \"0b482987-51ed-43dd-9099-c54d824588bf\") " Dec 03 13:19:25 crc kubenswrapper[4986]: I1203 13:19:25.970151 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b482987-51ed-43dd-9099-c54d824588bf-run-httpd\") pod \"0b482987-51ed-43dd-9099-c54d824588bf\" (UID: \"0b482987-51ed-43dd-9099-c54d824588bf\") " Dec 03 13:19:25 crc kubenswrapper[4986]: I1203 13:19:25.970269 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b482987-51ed-43dd-9099-c54d824588bf-combined-ca-bundle\") pod \"0b482987-51ed-43dd-9099-c54d824588bf\" (UID: \"0b482987-51ed-43dd-9099-c54d824588bf\") " Dec 03 13:19:25 crc kubenswrapper[4986]: I1203 13:19:25.970458 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0b482987-51ed-43dd-9099-c54d824588bf-sg-core-conf-yaml\") pod \"0b482987-51ed-43dd-9099-c54d824588bf\" (UID: \"0b482987-51ed-43dd-9099-c54d824588bf\") " Dec 03 13:19:25 crc kubenswrapper[4986]: I1203 13:19:25.970562 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b482987-51ed-43dd-9099-c54d824588bf-log-httpd\") pod \"0b482987-51ed-43dd-9099-c54d824588bf\" (UID: \"0b482987-51ed-43dd-9099-c54d824588bf\") " Dec 03 13:19:25 crc 
kubenswrapper[4986]: I1203 13:19:25.970719 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b482987-51ed-43dd-9099-c54d824588bf-config-data\") pod \"0b482987-51ed-43dd-9099-c54d824588bf\" (UID: \"0b482987-51ed-43dd-9099-c54d824588bf\") " Dec 03 13:19:25 crc kubenswrapper[4986]: I1203 13:19:25.970790 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7nmj\" (UniqueName: \"kubernetes.io/projected/0b482987-51ed-43dd-9099-c54d824588bf-kube-api-access-x7nmj\") pod \"0b482987-51ed-43dd-9099-c54d824588bf\" (UID: \"0b482987-51ed-43dd-9099-c54d824588bf\") " Dec 03 13:19:25 crc kubenswrapper[4986]: I1203 13:19:25.971089 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3245c21-420c-442d-934f-73f6917fcb21-operator-scripts\") pod \"nova-api-db-create-nqdzr\" (UID: \"c3245c21-420c-442d-934f-73f6917fcb21\") " pod="openstack/nova-api-db-create-nqdzr" Dec 03 13:19:25 crc kubenswrapper[4986]: I1203 13:19:25.971188 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2vhg\" (UniqueName: \"kubernetes.io/projected/67426b10-306c-4d32-94e7-99267dc8e435-kube-api-access-k2vhg\") pod \"nova-cell0-db-create-wv2h7\" (UID: \"67426b10-306c-4d32-94e7-99267dc8e435\") " pod="openstack/nova-cell0-db-create-wv2h7" Dec 03 13:19:25 crc kubenswrapper[4986]: I1203 13:19:25.971315 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svzzz\" (UniqueName: \"kubernetes.io/projected/c3245c21-420c-442d-934f-73f6917fcb21-kube-api-access-svzzz\") pod \"nova-api-db-create-nqdzr\" (UID: \"c3245c21-420c-442d-934f-73f6917fcb21\") " pod="openstack/nova-api-db-create-nqdzr" Dec 03 13:19:25 crc kubenswrapper[4986]: I1203 13:19:25.971412 4986 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67426b10-306c-4d32-94e7-99267dc8e435-operator-scripts\") pod \"nova-cell0-db-create-wv2h7\" (UID: \"67426b10-306c-4d32-94e7-99267dc8e435\") " pod="openstack/nova-cell0-db-create-wv2h7" Dec 03 13:19:25 crc kubenswrapper[4986]: I1203 13:19:25.971535 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxspb\" (UniqueName: \"kubernetes.io/projected/4ea00525-8d87-4a9e-b1ce-15a73b6bedd1-kube-api-access-jxspb\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:25 crc kubenswrapper[4986]: I1203 13:19:25.970783 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b482987-51ed-43dd-9099-c54d824588bf-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0b482987-51ed-43dd-9099-c54d824588bf" (UID: "0b482987-51ed-43dd-9099-c54d824588bf"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:19:25 crc kubenswrapper[4986]: I1203 13:19:25.973529 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3245c21-420c-442d-934f-73f6917fcb21-operator-scripts\") pod \"nova-api-db-create-nqdzr\" (UID: \"c3245c21-420c-442d-934f-73f6917fcb21\") " pod="openstack/nova-api-db-create-nqdzr" Dec 03 13:19:25 crc kubenswrapper[4986]: I1203 13:19:25.973581 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b482987-51ed-43dd-9099-c54d824588bf-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0b482987-51ed-43dd-9099-c54d824588bf" (UID: "0b482987-51ed-43dd-9099-c54d824588bf"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:19:25 crc kubenswrapper[4986]: I1203 13:19:25.974775 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b482987-51ed-43dd-9099-c54d824588bf-scripts" (OuterVolumeSpecName: "scripts") pod "0b482987-51ed-43dd-9099-c54d824588bf" (UID: "0b482987-51ed-43dd-9099-c54d824588bf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:19:25 crc kubenswrapper[4986]: I1203 13:19:25.975052 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b482987-51ed-43dd-9099-c54d824588bf-kube-api-access-x7nmj" (OuterVolumeSpecName: "kube-api-access-x7nmj") pod "0b482987-51ed-43dd-9099-c54d824588bf" (UID: "0b482987-51ed-43dd-9099-c54d824588bf"). InnerVolumeSpecName "kube-api-access-x7nmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:19:25 crc kubenswrapper[4986]: I1203 13:19:25.993757 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svzzz\" (UniqueName: \"kubernetes.io/projected/c3245c21-420c-442d-934f-73f6917fcb21-kube-api-access-svzzz\") pod \"nova-api-db-create-nqdzr\" (UID: \"c3245c21-420c-442d-934f-73f6917fcb21\") " pod="openstack/nova-api-db-create-nqdzr" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.015513 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b482987-51ed-43dd-9099-c54d824588bf-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0b482987-51ed-43dd-9099-c54d824588bf" (UID: "0b482987-51ed-43dd-9099-c54d824588bf"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.055049 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-c47e-account-create-update-g6vlv"] Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.056743 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-c47e-account-create-update-g6vlv" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.059140 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.073512 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44kx9\" (UniqueName: \"kubernetes.io/projected/c6b2f7e4-b8b0-4440-ac4b-291ea7b92d36-kube-api-access-44kx9\") pod \"nova-api-22db-account-create-update-ftg9x\" (UID: \"c6b2f7e4-b8b0-4440-ac4b-291ea7b92d36\") " pod="openstack/nova-api-22db-account-create-update-ftg9x" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.073621 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2vhg\" (UniqueName: \"kubernetes.io/projected/67426b10-306c-4d32-94e7-99267dc8e435-kube-api-access-k2vhg\") pod \"nova-cell0-db-create-wv2h7\" (UID: \"67426b10-306c-4d32-94e7-99267dc8e435\") " pod="openstack/nova-cell0-db-create-wv2h7" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.073664 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f8a0583-c937-4c77-8d43-ea7dcb406886-operator-scripts\") pod \"nova-cell1-db-create-qbwm4\" (UID: \"5f8a0583-c937-4c77-8d43-ea7dcb406886\") " pod="openstack/nova-cell1-db-create-qbwm4" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.073713 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6b2f7e4-b8b0-4440-ac4b-291ea7b92d36-operator-scripts\") pod \"nova-api-22db-account-create-update-ftg9x\" (UID: \"c6b2f7e4-b8b0-4440-ac4b-291ea7b92d36\") " pod="openstack/nova-api-22db-account-create-update-ftg9x" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.073769 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67426b10-306c-4d32-94e7-99267dc8e435-operator-scripts\") pod \"nova-cell0-db-create-wv2h7\" (UID: \"67426b10-306c-4d32-94e7-99267dc8e435\") " pod="openstack/nova-cell0-db-create-wv2h7" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.073816 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjzql\" (UniqueName: \"kubernetes.io/projected/5f8a0583-c937-4c77-8d43-ea7dcb406886-kube-api-access-hjzql\") pod \"nova-cell1-db-create-qbwm4\" (UID: \"5f8a0583-c937-4c77-8d43-ea7dcb406886\") " pod="openstack/nova-cell1-db-create-qbwm4" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.073913 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7nmj\" (UniqueName: \"kubernetes.io/projected/0b482987-51ed-43dd-9099-c54d824588bf-kube-api-access-x7nmj\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.073936 4986 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b482987-51ed-43dd-9099-c54d824588bf-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.073949 4986 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b482987-51ed-43dd-9099-c54d824588bf-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.073961 4986 reconciler_common.go:293] "Volume detached for 
volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0b482987-51ed-43dd-9099-c54d824588bf-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.073973 4986 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b482987-51ed-43dd-9099-c54d824588bf-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.074954 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67426b10-306c-4d32-94e7-99267dc8e435-operator-scripts\") pod \"nova-cell0-db-create-wv2h7\" (UID: \"67426b10-306c-4d32-94e7-99267dc8e435\") " pod="openstack/nova-cell0-db-create-wv2h7" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.085015 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b482987-51ed-43dd-9099-c54d824588bf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b482987-51ed-43dd-9099-c54d824588bf" (UID: "0b482987-51ed-43dd-9099-c54d824588bf"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.085383 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-c47e-account-create-update-g6vlv"] Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.091153 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2vhg\" (UniqueName: \"kubernetes.io/projected/67426b10-306c-4d32-94e7-99267dc8e435-kube-api-access-k2vhg\") pod \"nova-cell0-db-create-wv2h7\" (UID: \"67426b10-306c-4d32-94e7-99267dc8e435\") " pod="openstack/nova-cell0-db-create-wv2h7" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.098113 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b482987-51ed-43dd-9099-c54d824588bf-config-data" (OuterVolumeSpecName: "config-data") pod "0b482987-51ed-43dd-9099-c54d824588bf" (UID: "0b482987-51ed-43dd-9099-c54d824588bf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.103626 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-nqdzr" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.175936 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f8a0583-c937-4c77-8d43-ea7dcb406886-operator-scripts\") pod \"nova-cell1-db-create-qbwm4\" (UID: \"5f8a0583-c937-4c77-8d43-ea7dcb406886\") " pod="openstack/nova-cell1-db-create-qbwm4" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.175999 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6b2f7e4-b8b0-4440-ac4b-291ea7b92d36-operator-scripts\") pod \"nova-api-22db-account-create-update-ftg9x\" (UID: \"c6b2f7e4-b8b0-4440-ac4b-291ea7b92d36\") " pod="openstack/nova-api-22db-account-create-update-ftg9x" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.176054 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjzql\" (UniqueName: \"kubernetes.io/projected/5f8a0583-c937-4c77-8d43-ea7dcb406886-kube-api-access-hjzql\") pod \"nova-cell1-db-create-qbwm4\" (UID: \"5f8a0583-c937-4c77-8d43-ea7dcb406886\") " pod="openstack/nova-cell1-db-create-qbwm4" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.176082 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsr6w\" (UniqueName: \"kubernetes.io/projected/8c8061fb-865f-46dd-846b-87907d3f12f7-kube-api-access-lsr6w\") pod \"nova-cell0-c47e-account-create-update-g6vlv\" (UID: \"8c8061fb-865f-46dd-846b-87907d3f12f7\") " pod="openstack/nova-cell0-c47e-account-create-update-g6vlv" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.176124 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/8c8061fb-865f-46dd-846b-87907d3f12f7-operator-scripts\") pod \"nova-cell0-c47e-account-create-update-g6vlv\" (UID: \"8c8061fb-865f-46dd-846b-87907d3f12f7\") " pod="openstack/nova-cell0-c47e-account-create-update-g6vlv" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.176160 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44kx9\" (UniqueName: \"kubernetes.io/projected/c6b2f7e4-b8b0-4440-ac4b-291ea7b92d36-kube-api-access-44kx9\") pod \"nova-api-22db-account-create-update-ftg9x\" (UID: \"c6b2f7e4-b8b0-4440-ac4b-291ea7b92d36\") " pod="openstack/nova-api-22db-account-create-update-ftg9x" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.176198 4986 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b482987-51ed-43dd-9099-c54d824588bf-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.176214 4986 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b482987-51ed-43dd-9099-c54d824588bf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.177097 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f8a0583-c937-4c77-8d43-ea7dcb406886-operator-scripts\") pod \"nova-cell1-db-create-qbwm4\" (UID: \"5f8a0583-c937-4c77-8d43-ea7dcb406886\") " pod="openstack/nova-cell1-db-create-qbwm4" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.177617 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6b2f7e4-b8b0-4440-ac4b-291ea7b92d36-operator-scripts\") pod \"nova-api-22db-account-create-update-ftg9x\" (UID: \"c6b2f7e4-b8b0-4440-ac4b-291ea7b92d36\") " pod="openstack/nova-api-22db-account-create-update-ftg9x" 
Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.193352 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-wv2h7" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.194845 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44kx9\" (UniqueName: \"kubernetes.io/projected/c6b2f7e4-b8b0-4440-ac4b-291ea7b92d36-kube-api-access-44kx9\") pod \"nova-api-22db-account-create-update-ftg9x\" (UID: \"c6b2f7e4-b8b0-4440-ac4b-291ea7b92d36\") " pod="openstack/nova-api-22db-account-create-update-ftg9x" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.255936 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-22db-account-create-update-ftg9x" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.261972 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjzql\" (UniqueName: \"kubernetes.io/projected/5f8a0583-c937-4c77-8d43-ea7dcb406886-kube-api-access-hjzql\") pod \"nova-cell1-db-create-qbwm4\" (UID: \"5f8a0583-c937-4c77-8d43-ea7dcb406886\") " pod="openstack/nova-cell1-db-create-qbwm4" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.291218 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-qbwm4" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.294147 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsr6w\" (UniqueName: \"kubernetes.io/projected/8c8061fb-865f-46dd-846b-87907d3f12f7-kube-api-access-lsr6w\") pod \"nova-cell0-c47e-account-create-update-g6vlv\" (UID: \"8c8061fb-865f-46dd-846b-87907d3f12f7\") " pod="openstack/nova-cell0-c47e-account-create-update-g6vlv" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.294275 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c8061fb-865f-46dd-846b-87907d3f12f7-operator-scripts\") pod \"nova-cell0-c47e-account-create-update-g6vlv\" (UID: \"8c8061fb-865f-46dd-846b-87907d3f12f7\") " pod="openstack/nova-cell0-c47e-account-create-update-g6vlv" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.295471 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c8061fb-865f-46dd-846b-87907d3f12f7-operator-scripts\") pod \"nova-cell0-c47e-account-create-update-g6vlv\" (UID: \"8c8061fb-865f-46dd-846b-87907d3f12f7\") " pod="openstack/nova-cell0-c47e-account-create-update-g6vlv" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.319168 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-e9c3-account-create-update-7vqrm"] Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.320489 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-e9c3-account-create-update-7vqrm" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.323308 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsr6w\" (UniqueName: \"kubernetes.io/projected/8c8061fb-865f-46dd-846b-87907d3f12f7-kube-api-access-lsr6w\") pod \"nova-cell0-c47e-account-create-update-g6vlv\" (UID: \"8c8061fb-865f-46dd-846b-87907d3f12f7\") " pod="openstack/nova-cell0-c47e-account-create-update-g6vlv" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.325121 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.340490 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-e9c3-account-create-update-7vqrm"] Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.387682 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-c47e-account-create-update-g6vlv" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.500193 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvf29\" (UniqueName: \"kubernetes.io/projected/f756aad9-8f89-429d-af35-c412a37c78cb-kube-api-access-nvf29\") pod \"nova-cell1-e9c3-account-create-update-7vqrm\" (UID: \"f756aad9-8f89-429d-af35-c412a37c78cb\") " pod="openstack/nova-cell1-e9c3-account-create-update-7vqrm" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.500734 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f756aad9-8f89-429d-af35-c412a37c78cb-operator-scripts\") pod \"nova-cell1-e9c3-account-create-update-7vqrm\" (UID: \"f756aad9-8f89-429d-af35-c412a37c78cb\") " pod="openstack/nova-cell1-e9c3-account-create-update-7vqrm" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 
13:19:26.603366 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f756aad9-8f89-429d-af35-c412a37c78cb-operator-scripts\") pod \"nova-cell1-e9c3-account-create-update-7vqrm\" (UID: \"f756aad9-8f89-429d-af35-c412a37c78cb\") " pod="openstack/nova-cell1-e9c3-account-create-update-7vqrm" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.603463 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvf29\" (UniqueName: \"kubernetes.io/projected/f756aad9-8f89-429d-af35-c412a37c78cb-kube-api-access-nvf29\") pod \"nova-cell1-e9c3-account-create-update-7vqrm\" (UID: \"f756aad9-8f89-429d-af35-c412a37c78cb\") " pod="openstack/nova-cell1-e9c3-account-create-update-7vqrm" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.604572 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f756aad9-8f89-429d-af35-c412a37c78cb-operator-scripts\") pod \"nova-cell1-e9c3-account-create-update-7vqrm\" (UID: \"f756aad9-8f89-429d-af35-c412a37c78cb\") " pod="openstack/nova-cell1-e9c3-account-create-update-7vqrm" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.621655 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvf29\" (UniqueName: \"kubernetes.io/projected/f756aad9-8f89-429d-af35-c412a37c78cb-kube-api-access-nvf29\") pod \"nova-cell1-e9c3-account-create-update-7vqrm\" (UID: \"f756aad9-8f89-429d-af35-c412a37c78cb\") " pod="openstack/nova-cell1-e9c3-account-create-update-7vqrm" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.753535 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-nqdzr"] Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.762814 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"0b482987-51ed-43dd-9099-c54d824588bf","Type":"ContainerDied","Data":"4d44214afd786f28616bb237b9c4b6d1d10a40b70cee28886e298e397236f9d4"} Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.762864 4986 scope.go:117] "RemoveContainer" containerID="354617f24609bb90f74a5d56c92fdeb07833a70a411fb023342eb61f620a5844" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.762986 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.769710 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"a2644ca5-1db7-491d-949e-e8810934a296","Type":"ContainerStarted","Data":"2cd2e5d0324ee06b972d9c55b89b25e6e716e9dc7fe0ffaed2ee1b3ed0a919e9"} Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.772455 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4ea00525-8d87-4a9e-b1ce-15a73b6bedd1","Type":"ContainerDied","Data":"e4c035369ecf2e51ba81488e3769bde30a14cd96309bca64d160bb61d5425d0e"} Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.772527 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 03 13:19:26 crc kubenswrapper[4986]: W1203 13:19:26.780730 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3245c21_420c_442d_934f_73f6917fcb21.slice/crio-7c3f9dbd64d970df522b7a88402e72e66801733dab1955199494547f654be52b WatchSource:0}: Error finding container 7c3f9dbd64d970df522b7a88402e72e66801733dab1955199494547f654be52b: Status 404 returned error can't find the container with id 7c3f9dbd64d970df522b7a88402e72e66801733dab1955199494547f654be52b Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.798443 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.258051107 podStartE2EDuration="13.798409047s" podCreationTimestamp="2025-12-03 13:19:13 +0000 UTC" firstStartedPulling="2025-12-03 13:19:13.932644772 +0000 UTC m=+1413.399075963" lastFinishedPulling="2025-12-03 13:19:25.473002722 +0000 UTC m=+1424.939433903" observedRunningTime="2025-12-03 13:19:26.791378457 +0000 UTC m=+1426.257809648" watchObservedRunningTime="2025-12-03 13:19:26.798409047 +0000 UTC m=+1426.264840238" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.803617 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-e9c3-account-create-update-7vqrm" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.808751 4986 scope.go:117] "RemoveContainer" containerID="eb0afe609b2a9c9c8c4a8eacdf3df3343aa2f4ce2503fa8629c7c87e0396c0f2" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.851405 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.876768 4986 scope.go:117] "RemoveContainer" containerID="4f3957e372e4fd6e32d5915c5d40b448972c4b354892124fb0ec0cb57465fcf9" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.896096 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.922115 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.926452 4986 scope.go:117] "RemoveContainer" containerID="5e5cc346550a3fec515bf0c4791854b5def68d62d3eb0b944e57908e69062b1a" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.940871 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.956228 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b482987-51ed-43dd-9099-c54d824588bf" path="/var/lib/kubelet/pods/0b482987-51ed-43dd-9099-c54d824588bf/volumes" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.957377 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ea00525-8d87-4a9e-b1ce-15a73b6bedd1" path="/var/lib/kubelet/pods/4ea00525-8d87-4a9e-b1ce-15a73b6bedd1/volumes" Dec 03 13:19:26 crc kubenswrapper[4986]: W1203 13:19:26.959688 4986 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6b2f7e4_b8b0_4440_ac4b_291ea7b92d36.slice/crio-b63e9e5040126674782012d9875cfb8ba92e791f63f9e88d19efd72e3fba3a9a WatchSource:0}: Error finding container b63e9e5040126674782012d9875cfb8ba92e791f63f9e88d19efd72e3fba3a9a: Status 404 returned error can't find the container with id b63e9e5040126674782012d9875cfb8ba92e791f63f9e88d19efd72e3fba3a9a Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.963787 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-wv2h7"] Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.972330 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.974994 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.976563 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-xkjbs" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.977767 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.978484 4986 scope.go:117] "RemoveContainer" containerID="2b839fea51f99e9668279fe36bf0c93319d7ee0e42824ac01703df9a4aa0ecc5" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.978763 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.978982 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.986873 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.988099 4986 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.993271 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Dec 03 13:19:26 crc kubenswrapper[4986]: I1203 13:19:26.993476 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.006889 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.018331 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.032370 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-22db-account-create-update-ftg9x"] Dec 03 13:19:27 crc kubenswrapper[4986]: W1203 13:19:27.044806 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c8061fb_865f_46dd_846b_87907d3f12f7.slice/crio-febff9c25b9a15f6424812c89c24f0d7af85b5b46567f534cd07472b2a688d7d WatchSource:0}: Error finding container febff9c25b9a15f6424812c89c24f0d7af85b5b46567f534cd07472b2a688d7d: Status 404 returned error can't find the container with id febff9c25b9a15f6424812c89c24f0d7af85b5b46567f534cd07472b2a688d7d Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.054402 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-c47e-account-create-update-g6vlv"] Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.105063 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-qbwm4"] Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.115391 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 03 13:19:27 crc kubenswrapper[4986]: 
I1203 13:19:27.115873 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d5c981f3-a559-4641-aaae-2e90c5d0d543" containerName="glance-log" containerID="cri-o://9a5fbe41f6ab0b06a7618b9c833dbabf3623c8d0201e3e708bde7bf5c6a5d49f" gracePeriod=30 Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.116308 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d5c981f3-a559-4641-aaae-2e90c5d0d543" containerName="glance-httpd" containerID="cri-o://152f15955d2a2d608a7a6b95d81e9ec4a2cedda44a9d160e00d09e6b300ba6fd" gracePeriod=30 Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.127828 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/33238cb5-2bde-4244-aa0d-cc11080f57fb-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"33238cb5-2bde-4244-aa0d-cc11080f57fb\") " pod="openstack/kube-state-metrics-0" Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.130972 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/33238cb5-2bde-4244-aa0d-cc11080f57fb-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"33238cb5-2bde-4244-aa0d-cc11080f57fb\") " pod="openstack/kube-state-metrics-0" Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.131050 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2eeabafd-9779-4486-bde1-9fdaf83cf88a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2eeabafd-9779-4486-bde1-9fdaf83cf88a\") " pod="openstack/ceilometer-0" Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.131097 4986 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2eeabafd-9779-4486-bde1-9fdaf83cf88a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2eeabafd-9779-4486-bde1-9fdaf83cf88a\") " pod="openstack/ceilometer-0" Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.131155 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-548rf\" (UniqueName: \"kubernetes.io/projected/33238cb5-2bde-4244-aa0d-cc11080f57fb-kube-api-access-548rf\") pod \"kube-state-metrics-0\" (UID: \"33238cb5-2bde-4244-aa0d-cc11080f57fb\") " pod="openstack/kube-state-metrics-0" Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.131181 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2eeabafd-9779-4486-bde1-9fdaf83cf88a-run-httpd\") pod \"ceilometer-0\" (UID: \"2eeabafd-9779-4486-bde1-9fdaf83cf88a\") " pod="openstack/ceilometer-0" Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.131195 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eeabafd-9779-4486-bde1-9fdaf83cf88a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2eeabafd-9779-4486-bde1-9fdaf83cf88a\") " pod="openstack/ceilometer-0" Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.131220 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2eeabafd-9779-4486-bde1-9fdaf83cf88a-config-data\") pod \"ceilometer-0\" (UID: \"2eeabafd-9779-4486-bde1-9fdaf83cf88a\") " pod="openstack/ceilometer-0" Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.131333 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/2eeabafd-9779-4486-bde1-9fdaf83cf88a-scripts\") pod \"ceilometer-0\" (UID: \"2eeabafd-9779-4486-bde1-9fdaf83cf88a\") " pod="openstack/ceilometer-0" Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.131500 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33238cb5-2bde-4244-aa0d-cc11080f57fb-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"33238cb5-2bde-4244-aa0d-cc11080f57fb\") " pod="openstack/kube-state-metrics-0" Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.131654 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2eeabafd-9779-4486-bde1-9fdaf83cf88a-log-httpd\") pod \"ceilometer-0\" (UID: \"2eeabafd-9779-4486-bde1-9fdaf83cf88a\") " pod="openstack/ceilometer-0" Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.132572 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx6nd\" (UniqueName: \"kubernetes.io/projected/2eeabafd-9779-4486-bde1-9fdaf83cf88a-kube-api-access-bx6nd\") pod \"ceilometer-0\" (UID: \"2eeabafd-9779-4486-bde1-9fdaf83cf88a\") " pod="openstack/ceilometer-0" Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.239119 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2eeabafd-9779-4486-bde1-9fdaf83cf88a-log-httpd\") pod \"ceilometer-0\" (UID: \"2eeabafd-9779-4486-bde1-9fdaf83cf88a\") " pod="openstack/ceilometer-0" Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.239172 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bx6nd\" (UniqueName: \"kubernetes.io/projected/2eeabafd-9779-4486-bde1-9fdaf83cf88a-kube-api-access-bx6nd\") pod \"ceilometer-0\" (UID: 
\"2eeabafd-9779-4486-bde1-9fdaf83cf88a\") " pod="openstack/ceilometer-0" Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.239212 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/33238cb5-2bde-4244-aa0d-cc11080f57fb-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"33238cb5-2bde-4244-aa0d-cc11080f57fb\") " pod="openstack/kube-state-metrics-0" Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.239246 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/33238cb5-2bde-4244-aa0d-cc11080f57fb-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"33238cb5-2bde-4244-aa0d-cc11080f57fb\") " pod="openstack/kube-state-metrics-0" Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.239299 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2eeabafd-9779-4486-bde1-9fdaf83cf88a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2eeabafd-9779-4486-bde1-9fdaf83cf88a\") " pod="openstack/ceilometer-0" Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.239350 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2eeabafd-9779-4486-bde1-9fdaf83cf88a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2eeabafd-9779-4486-bde1-9fdaf83cf88a\") " pod="openstack/ceilometer-0" Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.239386 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-548rf\" (UniqueName: \"kubernetes.io/projected/33238cb5-2bde-4244-aa0d-cc11080f57fb-kube-api-access-548rf\") pod \"kube-state-metrics-0\" (UID: \"33238cb5-2bde-4244-aa0d-cc11080f57fb\") " pod="openstack/kube-state-metrics-0" Dec 03 13:19:27 
crc kubenswrapper[4986]: I1203 13:19:27.239410 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2eeabafd-9779-4486-bde1-9fdaf83cf88a-run-httpd\") pod \"ceilometer-0\" (UID: \"2eeabafd-9779-4486-bde1-9fdaf83cf88a\") " pod="openstack/ceilometer-0" Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.239429 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eeabafd-9779-4486-bde1-9fdaf83cf88a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2eeabafd-9779-4486-bde1-9fdaf83cf88a\") " pod="openstack/ceilometer-0" Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.239451 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2eeabafd-9779-4486-bde1-9fdaf83cf88a-config-data\") pod \"ceilometer-0\" (UID: \"2eeabafd-9779-4486-bde1-9fdaf83cf88a\") " pod="openstack/ceilometer-0" Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.239477 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2eeabafd-9779-4486-bde1-9fdaf83cf88a-scripts\") pod \"ceilometer-0\" (UID: \"2eeabafd-9779-4486-bde1-9fdaf83cf88a\") " pod="openstack/ceilometer-0" Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.239742 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33238cb5-2bde-4244-aa0d-cc11080f57fb-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"33238cb5-2bde-4244-aa0d-cc11080f57fb\") " pod="openstack/kube-state-metrics-0" Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.249064 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2eeabafd-9779-4486-bde1-9fdaf83cf88a-log-httpd\") 
pod \"ceilometer-0\" (UID: \"2eeabafd-9779-4486-bde1-9fdaf83cf88a\") " pod="openstack/ceilometer-0" Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.250605 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2eeabafd-9779-4486-bde1-9fdaf83cf88a-run-httpd\") pod \"ceilometer-0\" (UID: \"2eeabafd-9779-4486-bde1-9fdaf83cf88a\") " pod="openstack/ceilometer-0" Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.263476 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/33238cb5-2bde-4244-aa0d-cc11080f57fb-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"33238cb5-2bde-4244-aa0d-cc11080f57fb\") " pod="openstack/kube-state-metrics-0" Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.265997 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33238cb5-2bde-4244-aa0d-cc11080f57fb-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"33238cb5-2bde-4244-aa0d-cc11080f57fb\") " pod="openstack/kube-state-metrics-0" Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.266496 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/33238cb5-2bde-4244-aa0d-cc11080f57fb-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"33238cb5-2bde-4244-aa0d-cc11080f57fb\") " pod="openstack/kube-state-metrics-0" Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.267839 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eeabafd-9779-4486-bde1-9fdaf83cf88a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2eeabafd-9779-4486-bde1-9fdaf83cf88a\") " pod="openstack/ceilometer-0" Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 
13:19:27.269236 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2eeabafd-9779-4486-bde1-9fdaf83cf88a-config-data\") pod \"ceilometer-0\" (UID: \"2eeabafd-9779-4486-bde1-9fdaf83cf88a\") " pod="openstack/ceilometer-0" Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.269945 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2eeabafd-9779-4486-bde1-9fdaf83cf88a-scripts\") pod \"ceilometer-0\" (UID: \"2eeabafd-9779-4486-bde1-9fdaf83cf88a\") " pod="openstack/ceilometer-0" Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.297470 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2eeabafd-9779-4486-bde1-9fdaf83cf88a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2eeabafd-9779-4486-bde1-9fdaf83cf88a\") " pod="openstack/ceilometer-0" Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.298910 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2eeabafd-9779-4486-bde1-9fdaf83cf88a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2eeabafd-9779-4486-bde1-9fdaf83cf88a\") " pod="openstack/ceilometer-0" Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.309607 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-548rf\" (UniqueName: \"kubernetes.io/projected/33238cb5-2bde-4244-aa0d-cc11080f57fb-kube-api-access-548rf\") pod \"kube-state-metrics-0\" (UID: \"33238cb5-2bde-4244-aa0d-cc11080f57fb\") " pod="openstack/kube-state-metrics-0" Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.321405 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bx6nd\" (UniqueName: \"kubernetes.io/projected/2eeabafd-9779-4486-bde1-9fdaf83cf88a-kube-api-access-bx6nd\") pod \"ceilometer-0\" 
(UID: \"2eeabafd-9779-4486-bde1-9fdaf83cf88a\") " pod="openstack/ceilometer-0" Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.327518 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6d976bf467-mjgvz" Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.330749 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.337568 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.405040 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6d976bf467-mjgvz" Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.523942 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-e9c3-account-create-update-7vqrm"] Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.829938 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-c47e-account-create-update-g6vlv" event={"ID":"8c8061fb-865f-46dd-846b-87907d3f12f7","Type":"ContainerStarted","Data":"62badd142a85ea6f0382f6f27db0497df741d71dba57238651e1f5009ba07b33"} Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.830199 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-c47e-account-create-update-g6vlv" event={"ID":"8c8061fb-865f-46dd-846b-87907d3f12f7","Type":"ContainerStarted","Data":"febff9c25b9a15f6424812c89c24f0d7af85b5b46567f534cd07472b2a688d7d"} Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.849868 4986 generic.go:334] "Generic (PLEG): container finished" podID="d5c981f3-a559-4641-aaae-2e90c5d0d543" containerID="9a5fbe41f6ab0b06a7618b9c833dbabf3623c8d0201e3e708bde7bf5c6a5d49f" exitCode=143 Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.850105 4986 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d5c981f3-a559-4641-aaae-2e90c5d0d543","Type":"ContainerDied","Data":"9a5fbe41f6ab0b06a7618b9c833dbabf3623c8d0201e3e708bde7bf5c6a5d49f"} Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.875671 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-qbwm4" event={"ID":"5f8a0583-c937-4c77-8d43-ea7dcb406886","Type":"ContainerStarted","Data":"a3aaab06a2e42872acc3927a63ed614a82501a2010f7a485a0fef58178256ce0"} Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.880764 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-nqdzr" event={"ID":"c3245c21-420c-442d-934f-73f6917fcb21","Type":"ContainerStarted","Data":"4c4753feaadd0261c00da6983cc4d7980e020557577e4254fee703cc8b3362e7"} Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.880797 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-nqdzr" event={"ID":"c3245c21-420c-442d-934f-73f6917fcb21","Type":"ContainerStarted","Data":"7c3f9dbd64d970df522b7a88402e72e66801733dab1955199494547f654be52b"} Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.882252 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-c47e-account-create-update-g6vlv" podStartSLOduration=1.882233322 podStartE2EDuration="1.882233322s" podCreationTimestamp="2025-12-03 13:19:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:19:27.865227812 +0000 UTC m=+1427.331659003" watchObservedRunningTime="2025-12-03 13:19:27.882233322 +0000 UTC m=+1427.348664513" Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.893295 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-22db-account-create-update-ftg9x" 
event={"ID":"c6b2f7e4-b8b0-4440-ac4b-291ea7b92d36","Type":"ContainerStarted","Data":"700d858f0986e9aa3ad329ac3ac7246c350f9e40f1ad140df00c0b85de0bda74"} Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.893331 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-22db-account-create-update-ftg9x" event={"ID":"c6b2f7e4-b8b0-4440-ac4b-291ea7b92d36","Type":"ContainerStarted","Data":"b63e9e5040126674782012d9875cfb8ba92e791f63f9e88d19efd72e3fba3a9a"} Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.898674 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-wv2h7" event={"ID":"67426b10-306c-4d32-94e7-99267dc8e435","Type":"ContainerStarted","Data":"85d977ecdb3eb494457ba18ea8ef8b41b7366ed30b9bebe5bb21dcf9596a6dc7"} Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.898725 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-wv2h7" event={"ID":"67426b10-306c-4d32-94e7-99267dc8e435","Type":"ContainerStarted","Data":"f58dc88ef6e7feebb1d820f83301d5560ddab95325a4c2fa88536d62fd5a97ac"} Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.905310 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-e9c3-account-create-update-7vqrm" event={"ID":"f756aad9-8f89-429d-af35-c412a37c78cb","Type":"ContainerStarted","Data":"1a84fcb0f2d438e53568de704acf0583a0b6fb2426c940ad8ff10fff24ebbaad"} Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.909703 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-qbwm4" podStartSLOduration=2.909678452 podStartE2EDuration="2.909678452s" podCreationTimestamp="2025-12-03 13:19:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:19:27.889348564 +0000 UTC m=+1427.355779755" watchObservedRunningTime="2025-12-03 13:19:27.909678452 +0000 UTC 
m=+1427.376109643" Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.919743 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-nqdzr" podStartSLOduration=2.919728813 podStartE2EDuration="2.919728813s" podCreationTimestamp="2025-12-03 13:19:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:19:27.905965543 +0000 UTC m=+1427.372396734" watchObservedRunningTime="2025-12-03 13:19:27.919728813 +0000 UTC m=+1427.386160004" Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.951469 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-wv2h7" podStartSLOduration=2.951443309 podStartE2EDuration="2.951443309s" podCreationTimestamp="2025-12-03 13:19:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:19:27.920484914 +0000 UTC m=+1427.386916105" watchObservedRunningTime="2025-12-03 13:19:27.951443309 +0000 UTC m=+1427.417874500" Dec 03 13:19:27 crc kubenswrapper[4986]: I1203 13:19:27.967926 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-22db-account-create-update-ftg9x" podStartSLOduration=2.967902174 podStartE2EDuration="2.967902174s" podCreationTimestamp="2025-12-03 13:19:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:19:27.936802365 +0000 UTC m=+1427.403233556" watchObservedRunningTime="2025-12-03 13:19:27.967902174 +0000 UTC m=+1427.434333365" Dec 03 13:19:28 crc kubenswrapper[4986]: I1203 13:19:28.008207 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-e9c3-account-create-update-7vqrm" podStartSLOduration=2.008186301 podStartE2EDuration="2.008186301s" 
podCreationTimestamp="2025-12-03 13:19:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:19:27.953625609 +0000 UTC m=+1427.420056800" watchObservedRunningTime="2025-12-03 13:19:28.008186301 +0000 UTC m=+1427.474617492" Dec 03 13:19:28 crc kubenswrapper[4986]: I1203 13:19:28.093343 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 03 13:19:28 crc kubenswrapper[4986]: I1203 13:19:28.215812 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 13:19:28 crc kubenswrapper[4986]: I1203 13:19:28.920800 4986 generic.go:334] "Generic (PLEG): container finished" podID="67426b10-306c-4d32-94e7-99267dc8e435" containerID="85d977ecdb3eb494457ba18ea8ef8b41b7366ed30b9bebe5bb21dcf9596a6dc7" exitCode=0 Dec 03 13:19:28 crc kubenswrapper[4986]: I1203 13:19:28.921734 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-wv2h7" event={"ID":"67426b10-306c-4d32-94e7-99267dc8e435","Type":"ContainerDied","Data":"85d977ecdb3eb494457ba18ea8ef8b41b7366ed30b9bebe5bb21dcf9596a6dc7"} Dec 03 13:19:28 crc kubenswrapper[4986]: I1203 13:19:28.923568 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"33238cb5-2bde-4244-aa0d-cc11080f57fb","Type":"ContainerStarted","Data":"cf781db99589b233f911c6cc279e7ddef35334050696c0d60cf7e00a02260854"} Dec 03 13:19:28 crc kubenswrapper[4986]: I1203 13:19:28.948109 4986 generic.go:334] "Generic (PLEG): container finished" podID="c3245c21-420c-442d-934f-73f6917fcb21" containerID="4c4753feaadd0261c00da6983cc4d7980e020557577e4254fee703cc8b3362e7" exitCode=0 Dec 03 13:19:28 crc kubenswrapper[4986]: I1203 13:19:28.965087 4986 generic.go:334] "Generic (PLEG): container finished" podID="d4dae8bb-1f5e-446c-9820-769e77414c04" containerID="cad60ee62b6c93be511446f0006c6f3f5fac38db2d6eabae98e5c3439c853b7b" 
exitCode=0 Dec 03 13:19:28 crc kubenswrapper[4986]: I1203 13:19:28.966465 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-nqdzr" event={"ID":"c3245c21-420c-442d-934f-73f6917fcb21","Type":"ContainerDied","Data":"4c4753feaadd0261c00da6983cc4d7980e020557577e4254fee703cc8b3362e7"} Dec 03 13:19:28 crc kubenswrapper[4986]: I1203 13:19:28.966524 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d4dae8bb-1f5e-446c-9820-769e77414c04","Type":"ContainerDied","Data":"cad60ee62b6c93be511446f0006c6f3f5fac38db2d6eabae98e5c3439c853b7b"} Dec 03 13:19:28 crc kubenswrapper[4986]: I1203 13:19:28.967076 4986 generic.go:334] "Generic (PLEG): container finished" podID="f756aad9-8f89-429d-af35-c412a37c78cb" containerID="a3153ae505ed0e0cde5955015e094b254025fb8a23ac54ca88e4a31915d6058d" exitCode=0 Dec 03 13:19:28 crc kubenswrapper[4986]: I1203 13:19:28.967131 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-e9c3-account-create-update-7vqrm" event={"ID":"f756aad9-8f89-429d-af35-c412a37c78cb","Type":"ContainerDied","Data":"a3153ae505ed0e0cde5955015e094b254025fb8a23ac54ca88e4a31915d6058d"} Dec 03 13:19:28 crc kubenswrapper[4986]: I1203 13:19:28.970962 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2eeabafd-9779-4486-bde1-9fdaf83cf88a","Type":"ContainerStarted","Data":"3cbf3c0aa6bfb4e1950fcec528423824dc0767804e4d3ac7d8bd11d52f5ca79e"} Dec 03 13:19:28 crc kubenswrapper[4986]: I1203 13:19:28.977449 4986 generic.go:334] "Generic (PLEG): container finished" podID="5f8a0583-c937-4c77-8d43-ea7dcb406886" containerID="6d37bbc015c90c052e92f17f5bc953ba2333fdb98cb2a10b033bf2717f8f16a7" exitCode=0 Dec 03 13:19:28 crc kubenswrapper[4986]: I1203 13:19:28.977511 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-qbwm4" 
event={"ID":"5f8a0583-c937-4c77-8d43-ea7dcb406886","Type":"ContainerDied","Data":"6d37bbc015c90c052e92f17f5bc953ba2333fdb98cb2a10b033bf2717f8f16a7"} Dec 03 13:19:28 crc kubenswrapper[4986]: I1203 13:19:28.989485 4986 generic.go:334] "Generic (PLEG): container finished" podID="8c8061fb-865f-46dd-846b-87907d3f12f7" containerID="62badd142a85ea6f0382f6f27db0497df741d71dba57238651e1f5009ba07b33" exitCode=0 Dec 03 13:19:28 crc kubenswrapper[4986]: I1203 13:19:28.989554 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-c47e-account-create-update-g6vlv" event={"ID":"8c8061fb-865f-46dd-846b-87907d3f12f7","Type":"ContainerDied","Data":"62badd142a85ea6f0382f6f27db0497df741d71dba57238651e1f5009ba07b33"} Dec 03 13:19:28 crc kubenswrapper[4986]: I1203 13:19:28.996947 4986 generic.go:334] "Generic (PLEG): container finished" podID="c6b2f7e4-b8b0-4440-ac4b-291ea7b92d36" containerID="700d858f0986e9aa3ad329ac3ac7246c350f9e40f1ad140df00c0b85de0bda74" exitCode=0 Dec 03 13:19:28 crc kubenswrapper[4986]: I1203 13:19:28.996989 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-22db-account-create-update-ftg9x" event={"ID":"c6b2f7e4-b8b0-4440-ac4b-291ea7b92d36","Type":"ContainerDied","Data":"700d858f0986e9aa3ad329ac3ac7246c350f9e40f1ad140df00c0b85de0bda74"} Dec 03 13:19:29 crc kubenswrapper[4986]: I1203 13:19:29.050669 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 03 13:19:29 crc kubenswrapper[4986]: I1203 13:19:29.090668 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4dae8bb-1f5e-446c-9820-769e77414c04-combined-ca-bundle\") pod \"d4dae8bb-1f5e-446c-9820-769e77414c04\" (UID: \"d4dae8bb-1f5e-446c-9820-769e77414c04\") " Dec 03 13:19:29 crc kubenswrapper[4986]: I1203 13:19:29.090769 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d4dae8bb-1f5e-446c-9820-769e77414c04-httpd-run\") pod \"d4dae8bb-1f5e-446c-9820-769e77414c04\" (UID: \"d4dae8bb-1f5e-446c-9820-769e77414c04\") " Dec 03 13:19:29 crc kubenswrapper[4986]: I1203 13:19:29.090840 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4dae8bb-1f5e-446c-9820-769e77414c04-logs\") pod \"d4dae8bb-1f5e-446c-9820-769e77414c04\" (UID: \"d4dae8bb-1f5e-446c-9820-769e77414c04\") " Dec 03 13:19:29 crc kubenswrapper[4986]: I1203 13:19:29.090864 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4dae8bb-1f5e-446c-9820-769e77414c04-scripts\") pod \"d4dae8bb-1f5e-446c-9820-769e77414c04\" (UID: \"d4dae8bb-1f5e-446c-9820-769e77414c04\") " Dec 03 13:19:29 crc kubenswrapper[4986]: I1203 13:19:29.090928 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mdff\" (UniqueName: \"kubernetes.io/projected/d4dae8bb-1f5e-446c-9820-769e77414c04-kube-api-access-2mdff\") pod \"d4dae8bb-1f5e-446c-9820-769e77414c04\" (UID: \"d4dae8bb-1f5e-446c-9820-769e77414c04\") " Dec 03 13:19:29 crc kubenswrapper[4986]: I1203 13:19:29.090951 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage10-crc\") pod \"d4dae8bb-1f5e-446c-9820-769e77414c04\" (UID: \"d4dae8bb-1f5e-446c-9820-769e77414c04\") " Dec 03 13:19:29 crc kubenswrapper[4986]: I1203 13:19:29.091010 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4dae8bb-1f5e-446c-9820-769e77414c04-config-data\") pod \"d4dae8bb-1f5e-446c-9820-769e77414c04\" (UID: \"d4dae8bb-1f5e-446c-9820-769e77414c04\") " Dec 03 13:19:29 crc kubenswrapper[4986]: I1203 13:19:29.091072 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4dae8bb-1f5e-446c-9820-769e77414c04-public-tls-certs\") pod \"d4dae8bb-1f5e-446c-9820-769e77414c04\" (UID: \"d4dae8bb-1f5e-446c-9820-769e77414c04\") " Dec 03 13:19:29 crc kubenswrapper[4986]: I1203 13:19:29.094871 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4dae8bb-1f5e-446c-9820-769e77414c04-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d4dae8bb-1f5e-446c-9820-769e77414c04" (UID: "d4dae8bb-1f5e-446c-9820-769e77414c04"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:19:29 crc kubenswrapper[4986]: I1203 13:19:29.095131 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4dae8bb-1f5e-446c-9820-769e77414c04-logs" (OuterVolumeSpecName: "logs") pod "d4dae8bb-1f5e-446c-9820-769e77414c04" (UID: "d4dae8bb-1f5e-446c-9820-769e77414c04"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:19:29 crc kubenswrapper[4986]: I1203 13:19:29.100546 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4dae8bb-1f5e-446c-9820-769e77414c04-scripts" (OuterVolumeSpecName: "scripts") pod "d4dae8bb-1f5e-446c-9820-769e77414c04" (UID: "d4dae8bb-1f5e-446c-9820-769e77414c04"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:19:29 crc kubenswrapper[4986]: I1203 13:19:29.119420 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4dae8bb-1f5e-446c-9820-769e77414c04-kube-api-access-2mdff" (OuterVolumeSpecName: "kube-api-access-2mdff") pod "d4dae8bb-1f5e-446c-9820-769e77414c04" (UID: "d4dae8bb-1f5e-446c-9820-769e77414c04"). InnerVolumeSpecName "kube-api-access-2mdff". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:19:29 crc kubenswrapper[4986]: I1203 13:19:29.123829 4986 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-cc774c568-phcpp" podUID="cee5d9b6-d11e-4bad-b013-196a4f401404" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.141:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.141:8443: connect: connection refused" Dec 03 13:19:29 crc kubenswrapper[4986]: I1203 13:19:29.123952 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-cc774c568-phcpp" Dec 03 13:19:29 crc kubenswrapper[4986]: I1203 13:19:29.149916 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "d4dae8bb-1f5e-446c-9820-769e77414c04" (UID: "d4dae8bb-1f5e-446c-9820-769e77414c04"). InnerVolumeSpecName "local-storage10-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 03 13:19:29 crc kubenswrapper[4986]: I1203 13:19:29.167799 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4dae8bb-1f5e-446c-9820-769e77414c04-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4dae8bb-1f5e-446c-9820-769e77414c04" (UID: "d4dae8bb-1f5e-446c-9820-769e77414c04"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:19:29 crc kubenswrapper[4986]: I1203 13:19:29.193755 4986 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4dae8bb-1f5e-446c-9820-769e77414c04-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:29 crc kubenswrapper[4986]: I1203 13:19:29.193790 4986 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d4dae8bb-1f5e-446c-9820-769e77414c04-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:29 crc kubenswrapper[4986]: I1203 13:19:29.193801 4986 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4dae8bb-1f5e-446c-9820-769e77414c04-logs\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:29 crc kubenswrapper[4986]: I1203 13:19:29.193810 4986 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4dae8bb-1f5e-446c-9820-769e77414c04-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:29 crc kubenswrapper[4986]: I1203 13:19:29.193819 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mdff\" (UniqueName: \"kubernetes.io/projected/d4dae8bb-1f5e-446c-9820-769e77414c04-kube-api-access-2mdff\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:29 crc kubenswrapper[4986]: I1203 13:19:29.193852 4986 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Dec 03 13:19:29 crc kubenswrapper[4986]: E1203 13:19:29.199664 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d4dae8bb-1f5e-446c-9820-769e77414c04-config-data podName:d4dae8bb-1f5e-446c-9820-769e77414c04 nodeName:}" failed. No retries permitted until 2025-12-03 13:19:29.699594618 +0000 UTC m=+1429.166025809 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/d4dae8bb-1f5e-446c-9820-769e77414c04-config-data") pod "d4dae8bb-1f5e-446c-9820-769e77414c04" (UID: "d4dae8bb-1f5e-446c-9820-769e77414c04") : error deleting /var/lib/kubelet/pods/d4dae8bb-1f5e-446c-9820-769e77414c04/volume-subpaths: remove /var/lib/kubelet/pods/d4dae8bb-1f5e-446c-9820-769e77414c04/volume-subpaths: no such file or directory Dec 03 13:19:29 crc kubenswrapper[4986]: I1203 13:19:29.202389 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4dae8bb-1f5e-446c-9820-769e77414c04-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d4dae8bb-1f5e-446c-9820-769e77414c04" (UID: "d4dae8bb-1f5e-446c-9820-769e77414c04"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:19:29 crc kubenswrapper[4986]: I1203 13:19:29.224788 4986 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Dec 03 13:19:29 crc kubenswrapper[4986]: I1203 13:19:29.295094 4986 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:29 crc kubenswrapper[4986]: I1203 13:19:29.295130 4986 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4dae8bb-1f5e-446c-9820-769e77414c04-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:29 crc kubenswrapper[4986]: I1203 13:19:29.700826 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4dae8bb-1f5e-446c-9820-769e77414c04-config-data\") pod \"d4dae8bb-1f5e-446c-9820-769e77414c04\" (UID: \"d4dae8bb-1f5e-446c-9820-769e77414c04\") " Dec 03 13:19:29 crc kubenswrapper[4986]: I1203 13:19:29.717442 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4dae8bb-1f5e-446c-9820-769e77414c04-config-data" (OuterVolumeSpecName: "config-data") pod "d4dae8bb-1f5e-446c-9820-769e77414c04" (UID: "d4dae8bb-1f5e-446c-9820-769e77414c04"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:19:29 crc kubenswrapper[4986]: I1203 13:19:29.802962 4986 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4dae8bb-1f5e-446c-9820-769e77414c04-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:29 crc kubenswrapper[4986]: I1203 13:19:29.926473 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 03 13:19:30 crc kubenswrapper[4986]: I1203 13:19:30.007756 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"33238cb5-2bde-4244-aa0d-cc11080f57fb","Type":"ContainerStarted","Data":"c7160f36d5690eee25b4f4e4879654aedc9f613abccdfbe22cefc022d7af6169"} Dec 03 13:19:30 crc kubenswrapper[4986]: I1203 13:19:30.007899 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 03 13:19:30 crc kubenswrapper[4986]: I1203 13:19:30.010133 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d4dae8bb-1f5e-446c-9820-769e77414c04","Type":"ContainerDied","Data":"df503ab7331baa7994fce6aff8cd2d41df28ec4a060afa8051c098e7a9fa05ac"} Dec 03 13:19:30 crc kubenswrapper[4986]: I1203 13:19:30.010173 4986 scope.go:117] "RemoveContainer" containerID="cad60ee62b6c93be511446f0006c6f3f5fac38db2d6eabae98e5c3439c853b7b" Dec 03 13:19:30 crc kubenswrapper[4986]: I1203 13:19:30.010225 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 03 13:19:30 crc kubenswrapper[4986]: I1203 13:19:30.014836 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2eeabafd-9779-4486-bde1-9fdaf83cf88a","Type":"ContainerStarted","Data":"8840a305734b65731f05fa8c564f0e6a333ea9694df6852bb2fa5f45380b57e4"} Dec 03 13:19:30 crc kubenswrapper[4986]: I1203 13:19:30.029971 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.614790932 podStartE2EDuration="4.029954865s" podCreationTimestamp="2025-12-03 13:19:26 +0000 UTC" firstStartedPulling="2025-12-03 13:19:28.136600755 +0000 UTC m=+1427.603031946" lastFinishedPulling="2025-12-03 13:19:28.551764688 +0000 UTC m=+1428.018195879" observedRunningTime="2025-12-03 13:19:30.025230727 +0000 UTC m=+1429.491661918" watchObservedRunningTime="2025-12-03 13:19:30.029954865 +0000 UTC m=+1429.496386056" Dec 03 13:19:30 crc kubenswrapper[4986]: I1203 13:19:30.074238 4986 scope.go:117] "RemoveContainer" containerID="4c8bc6062e737df3ded631623dd55f56169f875e8ff065cfeb88edc561757def" Dec 03 13:19:30 crc kubenswrapper[4986]: I1203 13:19:30.074343 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 13:19:30 crc kubenswrapper[4986]: I1203 13:19:30.085653 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 13:19:30 crc kubenswrapper[4986]: I1203 13:19:30.114951 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 13:19:30 crc kubenswrapper[4986]: E1203 13:19:30.115589 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4dae8bb-1f5e-446c-9820-769e77414c04" containerName="glance-log" Dec 03 13:19:30 crc kubenswrapper[4986]: I1203 13:19:30.115685 4986 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d4dae8bb-1f5e-446c-9820-769e77414c04" containerName="glance-log" Dec 03 13:19:30 crc kubenswrapper[4986]: E1203 13:19:30.115807 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4dae8bb-1f5e-446c-9820-769e77414c04" containerName="glance-httpd" Dec 03 13:19:30 crc kubenswrapper[4986]: I1203 13:19:30.115890 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4dae8bb-1f5e-446c-9820-769e77414c04" containerName="glance-httpd" Dec 03 13:19:30 crc kubenswrapper[4986]: I1203 13:19:30.116163 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4dae8bb-1f5e-446c-9820-769e77414c04" containerName="glance-log" Dec 03 13:19:30 crc kubenswrapper[4986]: I1203 13:19:30.116256 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4dae8bb-1f5e-446c-9820-769e77414c04" containerName="glance-httpd" Dec 03 13:19:30 crc kubenswrapper[4986]: I1203 13:19:30.117557 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 03 13:19:30 crc kubenswrapper[4986]: I1203 13:19:30.124247 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 03 13:19:30 crc kubenswrapper[4986]: I1203 13:19:30.125413 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 03 13:19:30 crc kubenswrapper[4986]: I1203 13:19:30.126188 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 13:19:30 crc kubenswrapper[4986]: I1203 13:19:30.212224 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/515375fd-69ab-4f66-9fa3-ac72e0eeb97b-logs\") pod \"glance-default-external-api-0\" (UID: \"515375fd-69ab-4f66-9fa3-ac72e0eeb97b\") " pod="openstack/glance-default-external-api-0" Dec 03 13:19:30 crc kubenswrapper[4986]: 
I1203 13:19:30.212508 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/515375fd-69ab-4f66-9fa3-ac72e0eeb97b-config-data\") pod \"glance-default-external-api-0\" (UID: \"515375fd-69ab-4f66-9fa3-ac72e0eeb97b\") " pod="openstack/glance-default-external-api-0" Dec 03 13:19:30 crc kubenswrapper[4986]: I1203 13:19:30.212532 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"515375fd-69ab-4f66-9fa3-ac72e0eeb97b\") " pod="openstack/glance-default-external-api-0" Dec 03 13:19:30 crc kubenswrapper[4986]: I1203 13:19:30.212586 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/515375fd-69ab-4f66-9fa3-ac72e0eeb97b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"515375fd-69ab-4f66-9fa3-ac72e0eeb97b\") " pod="openstack/glance-default-external-api-0" Dec 03 13:19:30 crc kubenswrapper[4986]: I1203 13:19:30.212602 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/515375fd-69ab-4f66-9fa3-ac72e0eeb97b-scripts\") pod \"glance-default-external-api-0\" (UID: \"515375fd-69ab-4f66-9fa3-ac72e0eeb97b\") " pod="openstack/glance-default-external-api-0" Dec 03 13:19:30 crc kubenswrapper[4986]: I1203 13:19:30.212622 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/515375fd-69ab-4f66-9fa3-ac72e0eeb97b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"515375fd-69ab-4f66-9fa3-ac72e0eeb97b\") " pod="openstack/glance-default-external-api-0" Dec 03 13:19:30 crc kubenswrapper[4986]: 
I1203 13:19:30.212698 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/515375fd-69ab-4f66-9fa3-ac72e0eeb97b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"515375fd-69ab-4f66-9fa3-ac72e0eeb97b\") " pod="openstack/glance-default-external-api-0" Dec 03 13:19:30 crc kubenswrapper[4986]: I1203 13:19:30.212722 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4gnr\" (UniqueName: \"kubernetes.io/projected/515375fd-69ab-4f66-9fa3-ac72e0eeb97b-kube-api-access-v4gnr\") pod \"glance-default-external-api-0\" (UID: \"515375fd-69ab-4f66-9fa3-ac72e0eeb97b\") " pod="openstack/glance-default-external-api-0" Dec 03 13:19:30 crc kubenswrapper[4986]: I1203 13:19:30.314225 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/515375fd-69ab-4f66-9fa3-ac72e0eeb97b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"515375fd-69ab-4f66-9fa3-ac72e0eeb97b\") " pod="openstack/glance-default-external-api-0" Dec 03 13:19:30 crc kubenswrapper[4986]: I1203 13:19:30.314311 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4gnr\" (UniqueName: \"kubernetes.io/projected/515375fd-69ab-4f66-9fa3-ac72e0eeb97b-kube-api-access-v4gnr\") pod \"glance-default-external-api-0\" (UID: \"515375fd-69ab-4f66-9fa3-ac72e0eeb97b\") " pod="openstack/glance-default-external-api-0" Dec 03 13:19:30 crc kubenswrapper[4986]: I1203 13:19:30.314387 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/515375fd-69ab-4f66-9fa3-ac72e0eeb97b-logs\") pod \"glance-default-external-api-0\" (UID: \"515375fd-69ab-4f66-9fa3-ac72e0eeb97b\") " pod="openstack/glance-default-external-api-0" Dec 03 13:19:30 crc 
kubenswrapper[4986]: I1203 13:19:30.314415 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/515375fd-69ab-4f66-9fa3-ac72e0eeb97b-config-data\") pod \"glance-default-external-api-0\" (UID: \"515375fd-69ab-4f66-9fa3-ac72e0eeb97b\") " pod="openstack/glance-default-external-api-0" Dec 03 13:19:30 crc kubenswrapper[4986]: I1203 13:19:30.314447 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"515375fd-69ab-4f66-9fa3-ac72e0eeb97b\") " pod="openstack/glance-default-external-api-0" Dec 03 13:19:30 crc kubenswrapper[4986]: I1203 13:19:30.314532 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/515375fd-69ab-4f66-9fa3-ac72e0eeb97b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"515375fd-69ab-4f66-9fa3-ac72e0eeb97b\") " pod="openstack/glance-default-external-api-0" Dec 03 13:19:30 crc kubenswrapper[4986]: I1203 13:19:30.314557 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/515375fd-69ab-4f66-9fa3-ac72e0eeb97b-scripts\") pod \"glance-default-external-api-0\" (UID: \"515375fd-69ab-4f66-9fa3-ac72e0eeb97b\") " pod="openstack/glance-default-external-api-0" Dec 03 13:19:30 crc kubenswrapper[4986]: I1203 13:19:30.314583 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/515375fd-69ab-4f66-9fa3-ac72e0eeb97b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"515375fd-69ab-4f66-9fa3-ac72e0eeb97b\") " pod="openstack/glance-default-external-api-0" Dec 03 13:19:30 crc kubenswrapper[4986]: I1203 13:19:30.316056 4986 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/515375fd-69ab-4f66-9fa3-ac72e0eeb97b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"515375fd-69ab-4f66-9fa3-ac72e0eeb97b\") " pod="openstack/glance-default-external-api-0" Dec 03 13:19:30 crc kubenswrapper[4986]: I1203 13:19:30.316405 4986 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"515375fd-69ab-4f66-9fa3-ac72e0eeb97b\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Dec 03 13:19:30 crc kubenswrapper[4986]: I1203 13:19:30.320685 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/515375fd-69ab-4f66-9fa3-ac72e0eeb97b-logs\") pod \"glance-default-external-api-0\" (UID: \"515375fd-69ab-4f66-9fa3-ac72e0eeb97b\") " pod="openstack/glance-default-external-api-0" Dec 03 13:19:30 crc kubenswrapper[4986]: I1203 13:19:30.334505 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/515375fd-69ab-4f66-9fa3-ac72e0eeb97b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"515375fd-69ab-4f66-9fa3-ac72e0eeb97b\") " pod="openstack/glance-default-external-api-0" Dec 03 13:19:30 crc kubenswrapper[4986]: I1203 13:19:30.335558 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/515375fd-69ab-4f66-9fa3-ac72e0eeb97b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"515375fd-69ab-4f66-9fa3-ac72e0eeb97b\") " pod="openstack/glance-default-external-api-0" Dec 03 13:19:30 crc kubenswrapper[4986]: I1203 13:19:30.341493 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/515375fd-69ab-4f66-9fa3-ac72e0eeb97b-config-data\") pod \"glance-default-external-api-0\" (UID: \"515375fd-69ab-4f66-9fa3-ac72e0eeb97b\") " pod="openstack/glance-default-external-api-0" Dec 03 13:19:30 crc kubenswrapper[4986]: I1203 13:19:30.348872 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/515375fd-69ab-4f66-9fa3-ac72e0eeb97b-scripts\") pod \"glance-default-external-api-0\" (UID: \"515375fd-69ab-4f66-9fa3-ac72e0eeb97b\") " pod="openstack/glance-default-external-api-0" Dec 03 13:19:30 crc kubenswrapper[4986]: I1203 13:19:30.349136 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4gnr\" (UniqueName: \"kubernetes.io/projected/515375fd-69ab-4f66-9fa3-ac72e0eeb97b-kube-api-access-v4gnr\") pod \"glance-default-external-api-0\" (UID: \"515375fd-69ab-4f66-9fa3-ac72e0eeb97b\") " pod="openstack/glance-default-external-api-0" Dec 03 13:19:30 crc kubenswrapper[4986]: I1203 13:19:30.429030 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"515375fd-69ab-4f66-9fa3-ac72e0eeb97b\") " pod="openstack/glance-default-external-api-0" Dec 03 13:19:30 crc kubenswrapper[4986]: I1203 13:19:30.517509 4986 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="d5c981f3-a559-4641-aaae-2e90c5d0d543" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.161:9292/healthcheck\": dial tcp 10.217.0.161:9292: connect: connection refused" Dec 03 13:19:30 crc kubenswrapper[4986]: I1203 13:19:30.517623 4986 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="d5c981f3-a559-4641-aaae-2e90c5d0d543" containerName="glance-httpd" probeResult="failure" output="Get 
\"https://10.217.0.161:9292/healthcheck\": dial tcp 10.217.0.161:9292: connect: connection refused" Dec 03 13:19:30 crc kubenswrapper[4986]: I1203 13:19:30.541217 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-wv2h7" Dec 03 13:19:30 crc kubenswrapper[4986]: I1203 13:19:30.557509 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 03 13:19:30 crc kubenswrapper[4986]: I1203 13:19:30.750273 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67426b10-306c-4d32-94e7-99267dc8e435-operator-scripts\") pod \"67426b10-306c-4d32-94e7-99267dc8e435\" (UID: \"67426b10-306c-4d32-94e7-99267dc8e435\") " Dec 03 13:19:30 crc kubenswrapper[4986]: I1203 13:19:30.750766 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2vhg\" (UniqueName: \"kubernetes.io/projected/67426b10-306c-4d32-94e7-99267dc8e435-kube-api-access-k2vhg\") pod \"67426b10-306c-4d32-94e7-99267dc8e435\" (UID: \"67426b10-306c-4d32-94e7-99267dc8e435\") " Dec 03 13:19:30 crc kubenswrapper[4986]: I1203 13:19:30.752661 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67426b10-306c-4d32-94e7-99267dc8e435-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "67426b10-306c-4d32-94e7-99267dc8e435" (UID: "67426b10-306c-4d32-94e7-99267dc8e435"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:19:30 crc kubenswrapper[4986]: I1203 13:19:30.830569 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67426b10-306c-4d32-94e7-99267dc8e435-kube-api-access-k2vhg" (OuterVolumeSpecName: "kube-api-access-k2vhg") pod "67426b10-306c-4d32-94e7-99267dc8e435" (UID: "67426b10-306c-4d32-94e7-99267dc8e435"). InnerVolumeSpecName "kube-api-access-k2vhg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:19:30 crc kubenswrapper[4986]: I1203 13:19:30.852697 4986 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67426b10-306c-4d32-94e7-99267dc8e435-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:30 crc kubenswrapper[4986]: I1203 13:19:30.852741 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2vhg\" (UniqueName: \"kubernetes.io/projected/67426b10-306c-4d32-94e7-99267dc8e435-kube-api-access-k2vhg\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:30 crc kubenswrapper[4986]: I1203 13:19:30.922912 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-nqdzr" Dec 03 13:19:30 crc kubenswrapper[4986]: I1203 13:19:30.935828 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-c47e-account-create-update-g6vlv" Dec 03 13:19:30 crc kubenswrapper[4986]: I1203 13:19:30.965853 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-qbwm4" Dec 03 13:19:30 crc kubenswrapper[4986]: I1203 13:19:30.992684 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4dae8bb-1f5e-446c-9820-769e77414c04" path="/var/lib/kubelet/pods/d4dae8bb-1f5e-446c-9820-769e77414c04/volumes" Dec 03 13:19:30 crc kubenswrapper[4986]: I1203 13:19:30.994849 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-22db-account-create-update-ftg9x" Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.011474 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-e9c3-account-create-update-7vqrm" Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.057771 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3245c21-420c-442d-934f-73f6917fcb21-operator-scripts\") pod \"c3245c21-420c-442d-934f-73f6917fcb21\" (UID: \"c3245c21-420c-442d-934f-73f6917fcb21\") " Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.058025 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svzzz\" (UniqueName: \"kubernetes.io/projected/c3245c21-420c-442d-934f-73f6917fcb21-kube-api-access-svzzz\") pod \"c3245c21-420c-442d-934f-73f6917fcb21\" (UID: \"c3245c21-420c-442d-934f-73f6917fcb21\") " Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.058107 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lsr6w\" (UniqueName: \"kubernetes.io/projected/8c8061fb-865f-46dd-846b-87907d3f12f7-kube-api-access-lsr6w\") pod \"8c8061fb-865f-46dd-846b-87907d3f12f7\" (UID: \"8c8061fb-865f-46dd-846b-87907d3f12f7\") " Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.058326 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/8c8061fb-865f-46dd-846b-87907d3f12f7-operator-scripts\") pod \"8c8061fb-865f-46dd-846b-87907d3f12f7\" (UID: \"8c8061fb-865f-46dd-846b-87907d3f12f7\") " Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.063258 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3245c21-420c-442d-934f-73f6917fcb21-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c3245c21-420c-442d-934f-73f6917fcb21" (UID: "c3245c21-420c-442d-934f-73f6917fcb21"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.064720 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c8061fb-865f-46dd-846b-87907d3f12f7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8c8061fb-865f-46dd-846b-87907d3f12f7" (UID: "8c8061fb-865f-46dd-846b-87907d3f12f7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.065363 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c8061fb-865f-46dd-846b-87907d3f12f7-kube-api-access-lsr6w" (OuterVolumeSpecName: "kube-api-access-lsr6w") pod "8c8061fb-865f-46dd-846b-87907d3f12f7" (UID: "8c8061fb-865f-46dd-846b-87907d3f12f7"). InnerVolumeSpecName "kube-api-access-lsr6w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.065565 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-wv2h7" event={"ID":"67426b10-306c-4d32-94e7-99267dc8e435","Type":"ContainerDied","Data":"f58dc88ef6e7feebb1d820f83301d5560ddab95325a4c2fa88536d62fd5a97ac"} Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.069408 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f58dc88ef6e7feebb1d820f83301d5560ddab95325a4c2fa88536d62fd5a97ac" Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.069525 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-e9c3-account-create-update-7vqrm" event={"ID":"f756aad9-8f89-429d-af35-c412a37c78cb","Type":"ContainerDied","Data":"1a84fcb0f2d438e53568de704acf0583a0b6fb2426c940ad8ff10fff24ebbaad"} Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.069551 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a84fcb0f2d438e53568de704acf0583a0b6fb2426c940ad8ff10fff24ebbaad" Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.065549 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-wv2h7" Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.068764 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-e9c3-account-create-update-7vqrm" Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.066894 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3245c21-420c-442d-934f-73f6917fcb21-kube-api-access-svzzz" (OuterVolumeSpecName: "kube-api-access-svzzz") pod "c3245c21-420c-442d-934f-73f6917fcb21" (UID: "c3245c21-420c-442d-934f-73f6917fcb21"). InnerVolumeSpecName "kube-api-access-svzzz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.076501 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-qbwm4" event={"ID":"5f8a0583-c937-4c77-8d43-ea7dcb406886","Type":"ContainerDied","Data":"a3aaab06a2e42872acc3927a63ed614a82501a2010f7a485a0fef58178256ce0"} Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.076635 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3aaab06a2e42872acc3927a63ed614a82501a2010f7a485a0fef58178256ce0" Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.076580 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-qbwm4" Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.077133 4986 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c8061fb-865f-46dd-846b-87907d3f12f7-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.077393 4986 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3245c21-420c-442d-934f-73f6917fcb21-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.077530 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svzzz\" (UniqueName: \"kubernetes.io/projected/c3245c21-420c-442d-934f-73f6917fcb21-kube-api-access-svzzz\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.077610 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lsr6w\" (UniqueName: \"kubernetes.io/projected/8c8061fb-865f-46dd-846b-87907d3f12f7-kube-api-access-lsr6w\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.092069 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-c47e-account-create-update-g6vlv" event={"ID":"8c8061fb-865f-46dd-846b-87907d3f12f7","Type":"ContainerDied","Data":"febff9c25b9a15f6424812c89c24f0d7af85b5b46567f534cd07472b2a688d7d"} Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.092114 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="febff9c25b9a15f6424812c89c24f0d7af85b5b46567f534cd07472b2a688d7d" Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.092177 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-c47e-account-create-update-g6vlv" Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.098248 4986 generic.go:334] "Generic (PLEG): container finished" podID="d5c981f3-a559-4641-aaae-2e90c5d0d543" containerID="152f15955d2a2d608a7a6b95d81e9ec4a2cedda44a9d160e00d09e6b300ba6fd" exitCode=0 Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.098373 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d5c981f3-a559-4641-aaae-2e90c5d0d543","Type":"ContainerDied","Data":"152f15955d2a2d608a7a6b95d81e9ec4a2cedda44a9d160e00d09e6b300ba6fd"} Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.133436 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-nqdzr" Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.133477 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-nqdzr" event={"ID":"c3245c21-420c-442d-934f-73f6917fcb21","Type":"ContainerDied","Data":"7c3f9dbd64d970df522b7a88402e72e66801733dab1955199494547f654be52b"} Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.133526 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c3f9dbd64d970df522b7a88402e72e66801733dab1955199494547f654be52b" Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.136639 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-22db-account-create-update-ftg9x" event={"ID":"c6b2f7e4-b8b0-4440-ac4b-291ea7b92d36","Type":"ContainerDied","Data":"b63e9e5040126674782012d9875cfb8ba92e791f63f9e88d19efd72e3fba3a9a"} Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.136665 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b63e9e5040126674782012d9875cfb8ba92e791f63f9e88d19efd72e3fba3a9a" Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.136706 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-22db-account-create-update-ftg9x" Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.138706 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2eeabafd-9779-4486-bde1-9fdaf83cf88a","Type":"ContainerStarted","Data":"8f45a89ce34a47281bbd82265b9335e1eb74df5eea4b67e8005eea599d1235ac"} Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.178548 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjzql\" (UniqueName: \"kubernetes.io/projected/5f8a0583-c937-4c77-8d43-ea7dcb406886-kube-api-access-hjzql\") pod \"5f8a0583-c937-4c77-8d43-ea7dcb406886\" (UID: \"5f8a0583-c937-4c77-8d43-ea7dcb406886\") " Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.178657 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f756aad9-8f89-429d-af35-c412a37c78cb-operator-scripts\") pod \"f756aad9-8f89-429d-af35-c412a37c78cb\" (UID: \"f756aad9-8f89-429d-af35-c412a37c78cb\") " Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.178693 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44kx9\" (UniqueName: \"kubernetes.io/projected/c6b2f7e4-b8b0-4440-ac4b-291ea7b92d36-kube-api-access-44kx9\") pod \"c6b2f7e4-b8b0-4440-ac4b-291ea7b92d36\" (UID: \"c6b2f7e4-b8b0-4440-ac4b-291ea7b92d36\") " Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.178784 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f8a0583-c937-4c77-8d43-ea7dcb406886-operator-scripts\") pod \"5f8a0583-c937-4c77-8d43-ea7dcb406886\" (UID: \"5f8a0583-c937-4c77-8d43-ea7dcb406886\") " Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.178884 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-nvf29\" (UniqueName: \"kubernetes.io/projected/f756aad9-8f89-429d-af35-c412a37c78cb-kube-api-access-nvf29\") pod \"f756aad9-8f89-429d-af35-c412a37c78cb\" (UID: \"f756aad9-8f89-429d-af35-c412a37c78cb\") " Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.178945 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6b2f7e4-b8b0-4440-ac4b-291ea7b92d36-operator-scripts\") pod \"c6b2f7e4-b8b0-4440-ac4b-291ea7b92d36\" (UID: \"c6b2f7e4-b8b0-4440-ac4b-291ea7b92d36\") " Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.179562 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f756aad9-8f89-429d-af35-c412a37c78cb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f756aad9-8f89-429d-af35-c412a37c78cb" (UID: "f756aad9-8f89-429d-af35-c412a37c78cb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.179981 4986 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f756aad9-8f89-429d-af35-c412a37c78cb-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.180227 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6b2f7e4-b8b0-4440-ac4b-291ea7b92d36-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c6b2f7e4-b8b0-4440-ac4b-291ea7b92d36" (UID: "c6b2f7e4-b8b0-4440-ac4b-291ea7b92d36"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.180537 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f8a0583-c937-4c77-8d43-ea7dcb406886-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5f8a0583-c937-4c77-8d43-ea7dcb406886" (UID: "5f8a0583-c937-4c77-8d43-ea7dcb406886"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.183037 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6b2f7e4-b8b0-4440-ac4b-291ea7b92d36-kube-api-access-44kx9" (OuterVolumeSpecName: "kube-api-access-44kx9") pod "c6b2f7e4-b8b0-4440-ac4b-291ea7b92d36" (UID: "c6b2f7e4-b8b0-4440-ac4b-291ea7b92d36"). InnerVolumeSpecName "kube-api-access-44kx9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.183440 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f8a0583-c937-4c77-8d43-ea7dcb406886-kube-api-access-hjzql" (OuterVolumeSpecName: "kube-api-access-hjzql") pod "5f8a0583-c937-4c77-8d43-ea7dcb406886" (UID: "5f8a0583-c937-4c77-8d43-ea7dcb406886"). InnerVolumeSpecName "kube-api-access-hjzql". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.185216 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f756aad9-8f89-429d-af35-c412a37c78cb-kube-api-access-nvf29" (OuterVolumeSpecName: "kube-api-access-nvf29") pod "f756aad9-8f89-429d-af35-c412a37c78cb" (UID: "f756aad9-8f89-429d-af35-c412a37c78cb"). InnerVolumeSpecName "kube-api-access-nvf29". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.284592 4986 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6b2f7e4-b8b0-4440-ac4b-291ea7b92d36-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.284630 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjzql\" (UniqueName: \"kubernetes.io/projected/5f8a0583-c937-4c77-8d43-ea7dcb406886-kube-api-access-hjzql\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.284646 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44kx9\" (UniqueName: \"kubernetes.io/projected/c6b2f7e4-b8b0-4440-ac4b-291ea7b92d36-kube-api-access-44kx9\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.284658 4986 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f8a0583-c937-4c77-8d43-ea7dcb406886-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.284672 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvf29\" (UniqueName: \"kubernetes.io/projected/f756aad9-8f89-429d-af35-c412a37c78cb-kube-api-access-nvf29\") on node \"crc\" DevicePath \"\"" Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.487613 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 13:19:31 crc kubenswrapper[4986]: W1203 13:19:31.491571 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod515375fd_69ab_4f66_9fa3_ac72e0eeb97b.slice/crio-057caa6a427cd2cdf1c4d578ff258b6324a1f50b21e2e112a27d509166ad9d12 WatchSource:0}: Error finding container 
057caa6a427cd2cdf1c4d578ff258b6324a1f50b21e2e112a27d509166ad9d12: Status 404 returned error can't find the container with id 057caa6a427cd2cdf1c4d578ff258b6324a1f50b21e2e112a27d509166ad9d12 Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.619721 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.694785 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5c981f3-a559-4641-aaae-2e90c5d0d543-logs\") pod \"d5c981f3-a559-4641-aaae-2e90c5d0d543\" (UID: \"d5c981f3-a559-4641-aaae-2e90c5d0d543\") " Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.695145 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"d5c981f3-a559-4641-aaae-2e90c5d0d543\" (UID: \"d5c981f3-a559-4641-aaae-2e90c5d0d543\") " Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.695189 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5c981f3-a559-4641-aaae-2e90c5d0d543-internal-tls-certs\") pod \"d5c981f3-a559-4641-aaae-2e90c5d0d543\" (UID: \"d5c981f3-a559-4641-aaae-2e90c5d0d543\") " Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.695254 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5c981f3-a559-4641-aaae-2e90c5d0d543-config-data\") pod \"d5c981f3-a559-4641-aaae-2e90c5d0d543\" (UID: \"d5c981f3-a559-4641-aaae-2e90c5d0d543\") " Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.695309 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqtsg\" (UniqueName: 
\"kubernetes.io/projected/d5c981f3-a559-4641-aaae-2e90c5d0d543-kube-api-access-fqtsg\") pod \"d5c981f3-a559-4641-aaae-2e90c5d0d543\" (UID: \"d5c981f3-a559-4641-aaae-2e90c5d0d543\") " Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.695342 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d5c981f3-a559-4641-aaae-2e90c5d0d543-httpd-run\") pod \"d5c981f3-a559-4641-aaae-2e90c5d0d543\" (UID: \"d5c981f3-a559-4641-aaae-2e90c5d0d543\") " Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.695368 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5c981f3-a559-4641-aaae-2e90c5d0d543-combined-ca-bundle\") pod \"d5c981f3-a559-4641-aaae-2e90c5d0d543\" (UID: \"d5c981f3-a559-4641-aaae-2e90c5d0d543\") " Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.695484 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5c981f3-a559-4641-aaae-2e90c5d0d543-scripts\") pod \"d5c981f3-a559-4641-aaae-2e90c5d0d543\" (UID: \"d5c981f3-a559-4641-aaae-2e90c5d0d543\") " Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.696881 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5c981f3-a559-4641-aaae-2e90c5d0d543-logs" (OuterVolumeSpecName: "logs") pod "d5c981f3-a559-4641-aaae-2e90c5d0d543" (UID: "d5c981f3-a559-4641-aaae-2e90c5d0d543"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.700031 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5c981f3-a559-4641-aaae-2e90c5d0d543-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d5c981f3-a559-4641-aaae-2e90c5d0d543" (UID: "d5c981f3-a559-4641-aaae-2e90c5d0d543"). 
InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.721996 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5c981f3-a559-4641-aaae-2e90c5d0d543-kube-api-access-fqtsg" (OuterVolumeSpecName: "kube-api-access-fqtsg") pod "d5c981f3-a559-4641-aaae-2e90c5d0d543" (UID: "d5c981f3-a559-4641-aaae-2e90c5d0d543"). InnerVolumeSpecName "kube-api-access-fqtsg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.722368 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5c981f3-a559-4641-aaae-2e90c5d0d543-scripts" (OuterVolumeSpecName: "scripts") pod "d5c981f3-a559-4641-aaae-2e90c5d0d543" (UID: "d5c981f3-a559-4641-aaae-2e90c5d0d543"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.725526 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "d5c981f3-a559-4641-aaae-2e90c5d0d543" (UID: "d5c981f3-a559-4641-aaae-2e90c5d0d543"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.758429 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5c981f3-a559-4641-aaae-2e90c5d0d543-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d5c981f3-a559-4641-aaae-2e90c5d0d543" (UID: "d5c981f3-a559-4641-aaae-2e90c5d0d543"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.797012 4986 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5c981f3-a559-4641-aaae-2e90c5d0d543-scripts\") on node \"crc\" DevicePath \"\""
Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.797036 4986 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5c981f3-a559-4641-aaae-2e90c5d0d543-logs\") on node \"crc\" DevicePath \"\""
Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.797064 4986 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" "
Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.797076 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqtsg\" (UniqueName: \"kubernetes.io/projected/d5c981f3-a559-4641-aaae-2e90c5d0d543-kube-api-access-fqtsg\") on node \"crc\" DevicePath \"\""
Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.797087 4986 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d5c981f3-a559-4641-aaae-2e90c5d0d543-httpd-run\") on node \"crc\" DevicePath \"\""
Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.797096 4986 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5c981f3-a559-4641-aaae-2e90c5d0d543-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.805754 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5c981f3-a559-4641-aaae-2e90c5d0d543-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d5c981f3-a559-4641-aaae-2e90c5d0d543" (UID: "d5c981f3-a559-4641-aaae-2e90c5d0d543"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.805830 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5c981f3-a559-4641-aaae-2e90c5d0d543-config-data" (OuterVolumeSpecName: "config-data") pod "d5c981f3-a559-4641-aaae-2e90c5d0d543" (UID: "d5c981f3-a559-4641-aaae-2e90c5d0d543"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.827454 4986 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc"
Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.898717 4986 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\""
Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.898766 4986 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5c981f3-a559-4641-aaae-2e90c5d0d543-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 03 13:19:31 crc kubenswrapper[4986]: I1203 13:19:31.898783 4986 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5c981f3-a559-4641-aaae-2e90c5d0d543-config-data\") on node \"crc\" DevicePath \"\""
Dec 03 13:19:32 crc kubenswrapper[4986]: I1203 13:19:32.159010 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d5c981f3-a559-4641-aaae-2e90c5d0d543","Type":"ContainerDied","Data":"2561718171dd0fae47bbdbb6c6452d8f695915abff555243477e39879275500d"}
Dec 03 13:19:32 crc kubenswrapper[4986]: I1203 13:19:32.159073 4986 scope.go:117] "RemoveContainer" containerID="152f15955d2a2d608a7a6b95d81e9ec4a2cedda44a9d160e00d09e6b300ba6fd"
Dec 03 13:19:32 crc kubenswrapper[4986]: I1203 13:19:32.159216 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Dec 03 13:19:32 crc kubenswrapper[4986]: I1203 13:19:32.179452 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"515375fd-69ab-4f66-9fa3-ac72e0eeb97b","Type":"ContainerStarted","Data":"52c27bdb85773cbf2bb88cc15bb27d3194077c57fc98cf1ecbe03d2e3d733f06"}
Dec 03 13:19:32 crc kubenswrapper[4986]: I1203 13:19:32.179489 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"515375fd-69ab-4f66-9fa3-ac72e0eeb97b","Type":"ContainerStarted","Data":"057caa6a427cd2cdf1c4d578ff258b6324a1f50b21e2e112a27d509166ad9d12"}
Dec 03 13:19:32 crc kubenswrapper[4986]: I1203 13:19:32.196892 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2eeabafd-9779-4486-bde1-9fdaf83cf88a","Type":"ContainerStarted","Data":"24040e342f3df8f8c3e6a04a7e5f0a3f9f93a9fa727a36c48125d4f11717b174"}
Dec 03 13:19:32 crc kubenswrapper[4986]: I1203 13:19:32.217529 4986 scope.go:117] "RemoveContainer" containerID="9a5fbe41f6ab0b06a7618b9c833dbabf3623c8d0201e3e708bde7bf5c6a5d49f"
Dec 03 13:19:32 crc kubenswrapper[4986]: I1203 13:19:32.235366 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 03 13:19:32 crc kubenswrapper[4986]: I1203 13:19:32.264592 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 03 13:19:32 crc kubenswrapper[4986]: I1203 13:19:32.303325 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 03 13:19:32 crc kubenswrapper[4986]: E1203 13:19:32.305144 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67426b10-306c-4d32-94e7-99267dc8e435" containerName="mariadb-database-create"
Dec 03 13:19:32 crc kubenswrapper[4986]: I1203 13:19:32.305223 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="67426b10-306c-4d32-94e7-99267dc8e435" containerName="mariadb-database-create"
Dec 03 13:19:32 crc kubenswrapper[4986]: E1203 13:19:32.305301 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c8061fb-865f-46dd-846b-87907d3f12f7" containerName="mariadb-account-create-update"
Dec 03 13:19:32 crc kubenswrapper[4986]: I1203 13:19:32.305363 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c8061fb-865f-46dd-846b-87907d3f12f7" containerName="mariadb-account-create-update"
Dec 03 13:19:32 crc kubenswrapper[4986]: E1203 13:19:32.305423 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5c981f3-a559-4641-aaae-2e90c5d0d543" containerName="glance-log"
Dec 03 13:19:32 crc kubenswrapper[4986]: I1203 13:19:32.305471 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5c981f3-a559-4641-aaae-2e90c5d0d543" containerName="glance-log"
Dec 03 13:19:32 crc kubenswrapper[4986]: E1203 13:19:32.305729 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3245c21-420c-442d-934f-73f6917fcb21" containerName="mariadb-database-create"
Dec 03 13:19:32 crc kubenswrapper[4986]: I1203 13:19:32.305787 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3245c21-420c-442d-934f-73f6917fcb21" containerName="mariadb-database-create"
Dec 03 13:19:32 crc kubenswrapper[4986]: E1203 13:19:32.305864 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f8a0583-c937-4c77-8d43-ea7dcb406886" containerName="mariadb-database-create"
Dec 03 13:19:32 crc kubenswrapper[4986]: I1203 13:19:32.305913 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f8a0583-c937-4c77-8d43-ea7dcb406886" containerName="mariadb-database-create"
Dec 03 13:19:32 crc kubenswrapper[4986]: E1203 13:19:32.305997 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f756aad9-8f89-429d-af35-c412a37c78cb" containerName="mariadb-account-create-update"
Dec 03 13:19:32 crc kubenswrapper[4986]: I1203 13:19:32.306062 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="f756aad9-8f89-429d-af35-c412a37c78cb" containerName="mariadb-account-create-update"
Dec 03 13:19:32 crc kubenswrapper[4986]: E1203 13:19:32.306140 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5c981f3-a559-4641-aaae-2e90c5d0d543" containerName="glance-httpd"
Dec 03 13:19:32 crc kubenswrapper[4986]: I1203 13:19:32.306191 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5c981f3-a559-4641-aaae-2e90c5d0d543" containerName="glance-httpd"
Dec 03 13:19:32 crc kubenswrapper[4986]: E1203 13:19:32.306259 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6b2f7e4-b8b0-4440-ac4b-291ea7b92d36" containerName="mariadb-account-create-update"
Dec 03 13:19:32 crc kubenswrapper[4986]: I1203 13:19:32.306334 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6b2f7e4-b8b0-4440-ac4b-291ea7b92d36" containerName="mariadb-account-create-update"
Dec 03 13:19:32 crc kubenswrapper[4986]: I1203 13:19:32.307082 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="67426b10-306c-4d32-94e7-99267dc8e435" containerName="mariadb-database-create"
Dec 03 13:19:32 crc kubenswrapper[4986]: I1203 13:19:32.307164 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f8a0583-c937-4c77-8d43-ea7dcb406886" containerName="mariadb-database-create"
Dec 03 13:19:32 crc kubenswrapper[4986]: I1203 13:19:32.307233 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5c981f3-a559-4641-aaae-2e90c5d0d543" containerName="glance-httpd"
Dec 03 13:19:32 crc kubenswrapper[4986]: I1203 13:19:32.307313 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c8061fb-865f-46dd-846b-87907d3f12f7" containerName="mariadb-account-create-update"
Dec 03 13:19:32 crc kubenswrapper[4986]: I1203 13:19:32.307379 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3245c21-420c-442d-934f-73f6917fcb21" containerName="mariadb-database-create"
Dec 03 13:19:32 crc kubenswrapper[4986]: I1203 13:19:32.307448 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6b2f7e4-b8b0-4440-ac4b-291ea7b92d36" containerName="mariadb-account-create-update"
Dec 03 13:19:32 crc kubenswrapper[4986]: I1203 13:19:32.307504 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5c981f3-a559-4641-aaae-2e90c5d0d543" containerName="glance-log"
Dec 03 13:19:32 crc kubenswrapper[4986]: I1203 13:19:32.307744 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="f756aad9-8f89-429d-af35-c412a37c78cb" containerName="mariadb-account-create-update"
Dec 03 13:19:32 crc kubenswrapper[4986]: I1203 13:19:32.315796 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Dec 03 13:19:32 crc kubenswrapper[4986]: I1203 13:19:32.318014 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Dec 03 13:19:32 crc kubenswrapper[4986]: I1203 13:19:32.318243 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Dec 03 13:19:32 crc kubenswrapper[4986]: I1203 13:19:32.328959 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 03 13:19:32 crc kubenswrapper[4986]: I1203 13:19:32.422468 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/91b841cf-0b35-4065-8268-3d018757b029-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"91b841cf-0b35-4065-8268-3d018757b029\") " pod="openstack/glance-default-internal-api-0"
Dec 03 13:19:32 crc kubenswrapper[4986]: I1203 13:19:32.422730 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4sr8\" (UniqueName: \"kubernetes.io/projected/91b841cf-0b35-4065-8268-3d018757b029-kube-api-access-k4sr8\") pod \"glance-default-internal-api-0\" (UID: \"91b841cf-0b35-4065-8268-3d018757b029\") " pod="openstack/glance-default-internal-api-0"
Dec 03 13:19:32 crc kubenswrapper[4986]: I1203 13:19:32.422806 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"91b841cf-0b35-4065-8268-3d018757b029\") " pod="openstack/glance-default-internal-api-0"
Dec 03 13:19:32 crc kubenswrapper[4986]: I1203 13:19:32.422882 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91b841cf-0b35-4065-8268-3d018757b029-scripts\") pod \"glance-default-internal-api-0\" (UID: \"91b841cf-0b35-4065-8268-3d018757b029\") " pod="openstack/glance-default-internal-api-0"
Dec 03 13:19:32 crc kubenswrapper[4986]: I1203 13:19:32.422973 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91b841cf-0b35-4065-8268-3d018757b029-config-data\") pod \"glance-default-internal-api-0\" (UID: \"91b841cf-0b35-4065-8268-3d018757b029\") " pod="openstack/glance-default-internal-api-0"
Dec 03 13:19:32 crc kubenswrapper[4986]: I1203 13:19:32.423126 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/91b841cf-0b35-4065-8268-3d018757b029-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"91b841cf-0b35-4065-8268-3d018757b029\") " pod="openstack/glance-default-internal-api-0"
Dec 03 13:19:32 crc kubenswrapper[4986]: I1203 13:19:32.423224 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91b841cf-0b35-4065-8268-3d018757b029-logs\") pod \"glance-default-internal-api-0\" (UID: \"91b841cf-0b35-4065-8268-3d018757b029\") " pod="openstack/glance-default-internal-api-0"
Dec 03 13:19:32 crc kubenswrapper[4986]: I1203 13:19:32.423322 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b841cf-0b35-4065-8268-3d018757b029-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"91b841cf-0b35-4065-8268-3d018757b029\") " pod="openstack/glance-default-internal-api-0"
Dec 03 13:19:32 crc kubenswrapper[4986]: I1203 13:19:32.525364 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/91b841cf-0b35-4065-8268-3d018757b029-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"91b841cf-0b35-4065-8268-3d018757b029\") " pod="openstack/glance-default-internal-api-0"
Dec 03 13:19:32 crc kubenswrapper[4986]: I1203 13:19:32.525615 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91b841cf-0b35-4065-8268-3d018757b029-logs\") pod \"glance-default-internal-api-0\" (UID: \"91b841cf-0b35-4065-8268-3d018757b029\") " pod="openstack/glance-default-internal-api-0"
Dec 03 13:19:32 crc kubenswrapper[4986]: I1203 13:19:32.525725 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b841cf-0b35-4065-8268-3d018757b029-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"91b841cf-0b35-4065-8268-3d018757b029\") " pod="openstack/glance-default-internal-api-0"
Dec 03 13:19:32 crc kubenswrapper[4986]: I1203 13:19:32.525796 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/91b841cf-0b35-4065-8268-3d018757b029-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"91b841cf-0b35-4065-8268-3d018757b029\") " pod="openstack/glance-default-internal-api-0"
Dec 03 13:19:32 crc kubenswrapper[4986]: I1203 13:19:32.525891 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4sr8\" (UniqueName: \"kubernetes.io/projected/91b841cf-0b35-4065-8268-3d018757b029-kube-api-access-k4sr8\") pod \"glance-default-internal-api-0\" (UID: \"91b841cf-0b35-4065-8268-3d018757b029\") " pod="openstack/glance-default-internal-api-0"
Dec 03 13:19:32 crc kubenswrapper[4986]: I1203 13:19:32.525965 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"91b841cf-0b35-4065-8268-3d018757b029\") " pod="openstack/glance-default-internal-api-0"
Dec 03 13:19:32 crc kubenswrapper[4986]: I1203 13:19:32.526036 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91b841cf-0b35-4065-8268-3d018757b029-scripts\") pod \"glance-default-internal-api-0\" (UID: \"91b841cf-0b35-4065-8268-3d018757b029\") " pod="openstack/glance-default-internal-api-0"
Dec 03 13:19:32 crc kubenswrapper[4986]: I1203 13:19:32.526174 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91b841cf-0b35-4065-8268-3d018757b029-config-data\") pod \"glance-default-internal-api-0\" (UID: \"91b841cf-0b35-4065-8268-3d018757b029\") " pod="openstack/glance-default-internal-api-0"
Dec 03 13:19:32 crc kubenswrapper[4986]: I1203 13:19:32.526409 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/91b841cf-0b35-4065-8268-3d018757b029-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"91b841cf-0b35-4065-8268-3d018757b029\") " pod="openstack/glance-default-internal-api-0"
Dec 03 13:19:32 crc kubenswrapper[4986]: I1203 13:19:32.527901 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91b841cf-0b35-4065-8268-3d018757b029-logs\") pod \"glance-default-internal-api-0\" (UID: \"91b841cf-0b35-4065-8268-3d018757b029\") " pod="openstack/glance-default-internal-api-0"
Dec 03 13:19:32 crc kubenswrapper[4986]: I1203 13:19:32.529061 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/91b841cf-0b35-4065-8268-3d018757b029-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"91b841cf-0b35-4065-8268-3d018757b029\") " pod="openstack/glance-default-internal-api-0"
Dec 03 13:19:32 crc kubenswrapper[4986]: I1203 13:19:32.529315 4986 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"91b841cf-0b35-4065-8268-3d018757b029\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0"
Dec 03 13:19:32 crc kubenswrapper[4986]: I1203 13:19:32.532117 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91b841cf-0b35-4065-8268-3d018757b029-config-data\") pod \"glance-default-internal-api-0\" (UID: \"91b841cf-0b35-4065-8268-3d018757b029\") " pod="openstack/glance-default-internal-api-0"
Dec 03 13:19:32 crc kubenswrapper[4986]: I1203 13:19:32.534185 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b841cf-0b35-4065-8268-3d018757b029-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"91b841cf-0b35-4065-8268-3d018757b029\") " pod="openstack/glance-default-internal-api-0"
Dec 03 13:19:32 crc kubenswrapper[4986]: I1203 13:19:32.548706 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91b841cf-0b35-4065-8268-3d018757b029-scripts\") pod \"glance-default-internal-api-0\" (UID: \"91b841cf-0b35-4065-8268-3d018757b029\") " pod="openstack/glance-default-internal-api-0"
Dec 03 13:19:32 crc kubenswrapper[4986]: I1203 13:19:32.562857 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4sr8\" (UniqueName: \"kubernetes.io/projected/91b841cf-0b35-4065-8268-3d018757b029-kube-api-access-k4sr8\") pod \"glance-default-internal-api-0\" (UID: \"91b841cf-0b35-4065-8268-3d018757b029\") " pod="openstack/glance-default-internal-api-0"
Dec 03 13:19:32 crc kubenswrapper[4986]: I1203 13:19:32.569132 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"91b841cf-0b35-4065-8268-3d018757b029\") " pod="openstack/glance-default-internal-api-0"
Dec 03 13:19:32 crc kubenswrapper[4986]: I1203 13:19:32.669553 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Dec 03 13:19:32 crc kubenswrapper[4986]: I1203 13:19:32.975374 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5c981f3-a559-4641-aaae-2e90c5d0d543" path="/var/lib/kubelet/pods/d5c981f3-a559-4641-aaae-2e90c5d0d543/volumes"
Dec 03 13:19:33 crc kubenswrapper[4986]: I1203 13:19:33.208245 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"515375fd-69ab-4f66-9fa3-ac72e0eeb97b","Type":"ContainerStarted","Data":"6087f3173d68c8c36e61fbc135428cc0f251010c412665bbe10f7f0dd56015e5"}
Dec 03 13:19:33 crc kubenswrapper[4986]: I1203 13:19:33.212926 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2eeabafd-9779-4486-bde1-9fdaf83cf88a","Type":"ContainerStarted","Data":"68070736734de20d1f8b18cf401705263490156d987c729dda1ddad098d9e1a4"}
Dec 03 13:19:33 crc kubenswrapper[4986]: I1203 13:19:33.212962 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2eeabafd-9779-4486-bde1-9fdaf83cf88a" containerName="ceilometer-central-agent" containerID="cri-o://8840a305734b65731f05fa8c564f0e6a333ea9694df6852bb2fa5f45380b57e4" gracePeriod=30
Dec 03 13:19:33 crc kubenswrapper[4986]: I1203 13:19:33.212990 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Dec 03 13:19:33 crc kubenswrapper[4986]: I1203 13:19:33.213030 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2eeabafd-9779-4486-bde1-9fdaf83cf88a" containerName="sg-core" containerID="cri-o://24040e342f3df8f8c3e6a04a7e5f0a3f9f93a9fa727a36c48125d4f11717b174" gracePeriod=30
Dec 03 13:19:33 crc kubenswrapper[4986]: I1203 13:19:33.213042 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2eeabafd-9779-4486-bde1-9fdaf83cf88a" containerName="ceilometer-notification-agent" containerID="cri-o://8f45a89ce34a47281bbd82265b9335e1eb74df5eea4b67e8005eea599d1235ac" gracePeriod=30
Dec 03 13:19:33 crc kubenswrapper[4986]: I1203 13:19:33.213086 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2eeabafd-9779-4486-bde1-9fdaf83cf88a" containerName="proxy-httpd" containerID="cri-o://68070736734de20d1f8b18cf401705263490156d987c729dda1ddad098d9e1a4" gracePeriod=30
Dec 03 13:19:33 crc kubenswrapper[4986]: I1203 13:19:33.236422 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.236396157 podStartE2EDuration="3.236396157s" podCreationTimestamp="2025-12-03 13:19:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:19:33.229652495 +0000 UTC m=+1432.696083686" watchObservedRunningTime="2025-12-03 13:19:33.236396157 +0000 UTC m=+1432.702827348"
Dec 03 13:19:33 crc kubenswrapper[4986]: I1203 13:19:33.261718 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.665503728 podStartE2EDuration="7.26169922s" podCreationTimestamp="2025-12-03 13:19:26 +0000 UTC" firstStartedPulling="2025-12-03 13:19:28.321834434 +0000 UTC m=+1427.788265625" lastFinishedPulling="2025-12-03 13:19:32.918029926 +0000 UTC m=+1432.384461117" observedRunningTime="2025-12-03 13:19:33.254747212 +0000 UTC m=+1432.721178403" watchObservedRunningTime="2025-12-03 13:19:33.26169922 +0000 UTC m=+1432.728130401"
Dec 03 13:19:33 crc kubenswrapper[4986]: W1203 13:19:33.296116 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91b841cf_0b35_4065_8268_3d018757b029.slice/crio-6909c38f61787843833d6cbc06169eab5408c043f1e2c2c5eb4c5841a8fcc8d8 WatchSource:0}: Error finding container 6909c38f61787843833d6cbc06169eab5408c043f1e2c2c5eb4c5841a8fcc8d8: Status 404 returned error can't find the container with id 6909c38f61787843833d6cbc06169eab5408c043f1e2c2c5eb4c5841a8fcc8d8
Dec 03 13:19:33 crc kubenswrapper[4986]: I1203 13:19:33.312139 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 03 13:19:33 crc kubenswrapper[4986]: E1203 13:19:33.626387 4986 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2eeabafd_9779_4486_bde1_9fdaf83cf88a.slice/crio-conmon-8f45a89ce34a47281bbd82265b9335e1eb74df5eea4b67e8005eea599d1235ac.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2eeabafd_9779_4486_bde1_9fdaf83cf88a.slice/crio-8f45a89ce34a47281bbd82265b9335e1eb74df5eea4b67e8005eea599d1235ac.scope\": RecentStats: unable to find data in memory cache]"
Dec 03 13:19:34 crc kubenswrapper[4986]: I1203 13:19:34.229670 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"91b841cf-0b35-4065-8268-3d018757b029","Type":"ContainerStarted","Data":"ac963c54a6ce45941696cad2df3cec39305c59e2589753faf4d4ef0d0cdffe4c"}
Dec 03 13:19:34 crc kubenswrapper[4986]: I1203 13:19:34.230211 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"91b841cf-0b35-4065-8268-3d018757b029","Type":"ContainerStarted","Data":"6909c38f61787843833d6cbc06169eab5408c043f1e2c2c5eb4c5841a8fcc8d8"}
Dec 03 13:19:34 crc kubenswrapper[4986]: I1203 13:19:34.233501 4986 generic.go:334] "Generic (PLEG): container finished" podID="2eeabafd-9779-4486-bde1-9fdaf83cf88a" containerID="24040e342f3df8f8c3e6a04a7e5f0a3f9f93a9fa727a36c48125d4f11717b174" exitCode=2
Dec 03 13:19:34 crc kubenswrapper[4986]: I1203 13:19:34.233537 4986 generic.go:334] "Generic (PLEG): container finished" podID="2eeabafd-9779-4486-bde1-9fdaf83cf88a" containerID="8f45a89ce34a47281bbd82265b9335e1eb74df5eea4b67e8005eea599d1235ac" exitCode=0
Dec 03 13:19:34 crc kubenswrapper[4986]: I1203 13:19:34.233664 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2eeabafd-9779-4486-bde1-9fdaf83cf88a","Type":"ContainerDied","Data":"24040e342f3df8f8c3e6a04a7e5f0a3f9f93a9fa727a36c48125d4f11717b174"}
Dec 03 13:19:34 crc kubenswrapper[4986]: I1203 13:19:34.233716 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2eeabafd-9779-4486-bde1-9fdaf83cf88a","Type":"ContainerDied","Data":"8f45a89ce34a47281bbd82265b9335e1eb74df5eea4b67e8005eea599d1235ac"}
Dec 03 13:19:35 crc kubenswrapper[4986]: I1203 13:19:35.259755 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"91b841cf-0b35-4065-8268-3d018757b029","Type":"ContainerStarted","Data":"e46aa106a661d856bc6394604a7cc4326420eedc425786e6f355d90f5812e749"}
Dec 03 13:19:35 crc kubenswrapper[4986]: I1203 13:19:35.279849 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.279829486 podStartE2EDuration="3.279829486s" podCreationTimestamp="2025-12-03 13:19:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:19:35.277084012 +0000 UTC m=+1434.743515223" watchObservedRunningTime="2025-12-03 13:19:35.279829486 +0000 UTC m=+1434.746260677"
Dec 03 13:19:35 crc kubenswrapper[4986]: I1203 13:19:35.686417 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-cc774c568-phcpp"
Dec 03 13:19:35 crc kubenswrapper[4986]: I1203 13:19:35.804550 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/cee5d9b6-d11e-4bad-b013-196a4f401404-horizon-tls-certs\") pod \"cee5d9b6-d11e-4bad-b013-196a4f401404\" (UID: \"cee5d9b6-d11e-4bad-b013-196a4f401404\") "
Dec 03 13:19:35 crc kubenswrapper[4986]: I1203 13:19:35.804593 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cee5d9b6-d11e-4bad-b013-196a4f401404-horizon-secret-key\") pod \"cee5d9b6-d11e-4bad-b013-196a4f401404\" (UID: \"cee5d9b6-d11e-4bad-b013-196a4f401404\") "
Dec 03 13:19:35 crc kubenswrapper[4986]: I1203 13:19:35.804616 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cee5d9b6-d11e-4bad-b013-196a4f401404-config-data\") pod \"cee5d9b6-d11e-4bad-b013-196a4f401404\" (UID: \"cee5d9b6-d11e-4bad-b013-196a4f401404\") "
Dec 03 13:19:35 crc kubenswrapper[4986]: I1203 13:19:35.804662 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cee5d9b6-d11e-4bad-b013-196a4f401404-combined-ca-bundle\") pod \"cee5d9b6-d11e-4bad-b013-196a4f401404\" (UID: \"cee5d9b6-d11e-4bad-b013-196a4f401404\") "
Dec 03 13:19:35 crc kubenswrapper[4986]: I1203 13:19:35.804701 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67nzv\" (UniqueName: \"kubernetes.io/projected/cee5d9b6-d11e-4bad-b013-196a4f401404-kube-api-access-67nzv\") pod \"cee5d9b6-d11e-4bad-b013-196a4f401404\" (UID: \"cee5d9b6-d11e-4bad-b013-196a4f401404\") "
Dec 03 13:19:35 crc kubenswrapper[4986]: I1203 13:19:35.804903 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cee5d9b6-d11e-4bad-b013-196a4f401404-scripts\") pod \"cee5d9b6-d11e-4bad-b013-196a4f401404\" (UID: \"cee5d9b6-d11e-4bad-b013-196a4f401404\") "
Dec 03 13:19:35 crc kubenswrapper[4986]: I1203 13:19:35.804925 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cee5d9b6-d11e-4bad-b013-196a4f401404-logs\") pod \"cee5d9b6-d11e-4bad-b013-196a4f401404\" (UID: \"cee5d9b6-d11e-4bad-b013-196a4f401404\") "
Dec 03 13:19:35 crc kubenswrapper[4986]: I1203 13:19:35.805384 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cee5d9b6-d11e-4bad-b013-196a4f401404-logs" (OuterVolumeSpecName: "logs") pod "cee5d9b6-d11e-4bad-b013-196a4f401404" (UID: "cee5d9b6-d11e-4bad-b013-196a4f401404"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 13:19:35 crc kubenswrapper[4986]: I1203 13:19:35.810035 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cee5d9b6-d11e-4bad-b013-196a4f401404-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "cee5d9b6-d11e-4bad-b013-196a4f401404" (UID: "cee5d9b6-d11e-4bad-b013-196a4f401404"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 13:19:35 crc kubenswrapper[4986]: I1203 13:19:35.811272 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cee5d9b6-d11e-4bad-b013-196a4f401404-kube-api-access-67nzv" (OuterVolumeSpecName: "kube-api-access-67nzv") pod "cee5d9b6-d11e-4bad-b013-196a4f401404" (UID: "cee5d9b6-d11e-4bad-b013-196a4f401404"). InnerVolumeSpecName "kube-api-access-67nzv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 13:19:35 crc kubenswrapper[4986]: I1203 13:19:35.831374 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cee5d9b6-d11e-4bad-b013-196a4f401404-config-data" (OuterVolumeSpecName: "config-data") pod "cee5d9b6-d11e-4bad-b013-196a4f401404" (UID: "cee5d9b6-d11e-4bad-b013-196a4f401404"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 13:19:35 crc kubenswrapper[4986]: I1203 13:19:35.832093 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cee5d9b6-d11e-4bad-b013-196a4f401404-scripts" (OuterVolumeSpecName: "scripts") pod "cee5d9b6-d11e-4bad-b013-196a4f401404" (UID: "cee5d9b6-d11e-4bad-b013-196a4f401404"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 13:19:35 crc kubenswrapper[4986]: I1203 13:19:35.833317 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cee5d9b6-d11e-4bad-b013-196a4f401404-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cee5d9b6-d11e-4bad-b013-196a4f401404" (UID: "cee5d9b6-d11e-4bad-b013-196a4f401404"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 13:19:35 crc kubenswrapper[4986]: I1203 13:19:35.856163 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cee5d9b6-d11e-4bad-b013-196a4f401404-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "cee5d9b6-d11e-4bad-b013-196a4f401404" (UID: "cee5d9b6-d11e-4bad-b013-196a4f401404"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 13:19:35 crc kubenswrapper[4986]: I1203 13:19:35.906694 4986 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cee5d9b6-d11e-4bad-b013-196a4f401404-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 03 13:19:35 crc kubenswrapper[4986]: I1203 13:19:35.906727 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67nzv\" (UniqueName: \"kubernetes.io/projected/cee5d9b6-d11e-4bad-b013-196a4f401404-kube-api-access-67nzv\") on node \"crc\" DevicePath \"\""
Dec 03 13:19:35 crc kubenswrapper[4986]: I1203 13:19:35.906742 4986 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cee5d9b6-d11e-4bad-b013-196a4f401404-scripts\") on node \"crc\" DevicePath \"\""
Dec 03 13:19:35 crc kubenswrapper[4986]: I1203 13:19:35.906753 4986 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cee5d9b6-d11e-4bad-b013-196a4f401404-logs\") on node \"crc\" DevicePath \"\""
Dec 03 13:19:35 crc kubenswrapper[4986]: I1203 13:19:35.906764 4986 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/cee5d9b6-d11e-4bad-b013-196a4f401404-horizon-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 03 13:19:35 crc kubenswrapper[4986]: I1203 13:19:35.906774 4986 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cee5d9b6-d11e-4bad-b013-196a4f401404-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Dec 03 13:19:35 crc kubenswrapper[4986]: I1203 13:19:35.906785 4986 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cee5d9b6-d11e-4bad-b013-196a4f401404-config-data\") on node \"crc\" DevicePath \"\""
Dec 03 13:19:36 crc kubenswrapper[4986]: I1203 13:19:36.273417 4986 generic.go:334] "Generic (PLEG): container finished" podID="2eeabafd-9779-4486-bde1-9fdaf83cf88a" containerID="8840a305734b65731f05fa8c564f0e6a333ea9694df6852bb2fa5f45380b57e4" exitCode=0
Dec 03 13:19:36 crc kubenswrapper[4986]: I1203 13:19:36.273878 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2eeabafd-9779-4486-bde1-9fdaf83cf88a","Type":"ContainerDied","Data":"8840a305734b65731f05fa8c564f0e6a333ea9694df6852bb2fa5f45380b57e4"}
Dec 03 13:19:36 crc kubenswrapper[4986]: I1203 13:19:36.277211 4986 generic.go:334] "Generic (PLEG): container finished" podID="cee5d9b6-d11e-4bad-b013-196a4f401404" containerID="d7d30e426c52ae0b45fc8f0203633e1d5f3166da333a8d50729b7513d223357e" exitCode=137
Dec 03 13:19:36 crc kubenswrapper[4986]: I1203 13:19:36.277310 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cc774c568-phcpp" event={"ID":"cee5d9b6-d11e-4bad-b013-196a4f401404","Type":"ContainerDied","Data":"d7d30e426c52ae0b45fc8f0203633e1d5f3166da333a8d50729b7513d223357e"}
Dec 03 13:19:36 crc kubenswrapper[4986]: I1203 13:19:36.277344 4986 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/horizon-cc774c568-phcpp" Dec 03 13:19:36 crc kubenswrapper[4986]: I1203 13:19:36.277372 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cc774c568-phcpp" event={"ID":"cee5d9b6-d11e-4bad-b013-196a4f401404","Type":"ContainerDied","Data":"2e58cb3d381ed3dce5a24d1d0fcdeacc052821d334ca39b2b325f2950db053f4"} Dec 03 13:19:36 crc kubenswrapper[4986]: I1203 13:19:36.277396 4986 scope.go:117] "RemoveContainer" containerID="b3a510936fd6eb78b8694de1aa0785b123dd535c543cc43a92b46a0c5eb19968" Dec 03 13:19:36 crc kubenswrapper[4986]: I1203 13:19:36.318635 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-cc774c568-phcpp"] Dec 03 13:19:36 crc kubenswrapper[4986]: I1203 13:19:36.330050 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-cc774c568-phcpp"] Dec 03 13:19:36 crc kubenswrapper[4986]: I1203 13:19:36.339652 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xg69r"] Dec 03 13:19:36 crc kubenswrapper[4986]: E1203 13:19:36.340053 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cee5d9b6-d11e-4bad-b013-196a4f401404" containerName="horizon-log" Dec 03 13:19:36 crc kubenswrapper[4986]: I1203 13:19:36.340071 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="cee5d9b6-d11e-4bad-b013-196a4f401404" containerName="horizon-log" Dec 03 13:19:36 crc kubenswrapper[4986]: E1203 13:19:36.340118 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cee5d9b6-d11e-4bad-b013-196a4f401404" containerName="horizon" Dec 03 13:19:36 crc kubenswrapper[4986]: I1203 13:19:36.340126 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="cee5d9b6-d11e-4bad-b013-196a4f401404" containerName="horizon" Dec 03 13:19:36 crc kubenswrapper[4986]: I1203 13:19:36.340328 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="cee5d9b6-d11e-4bad-b013-196a4f401404" 
containerName="horizon" Dec 03 13:19:36 crc kubenswrapper[4986]: I1203 13:19:36.340359 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="cee5d9b6-d11e-4bad-b013-196a4f401404" containerName="horizon-log" Dec 03 13:19:36 crc kubenswrapper[4986]: I1203 13:19:36.340962 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xg69r" Dec 03 13:19:36 crc kubenswrapper[4986]: I1203 13:19:36.343545 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Dec 03 13:19:36 crc kubenswrapper[4986]: I1203 13:19:36.343715 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-clvtb" Dec 03 13:19:36 crc kubenswrapper[4986]: I1203 13:19:36.343792 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 03 13:19:36 crc kubenswrapper[4986]: I1203 13:19:36.402612 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xg69r"] Dec 03 13:19:36 crc kubenswrapper[4986]: I1203 13:19:36.419005 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25ce0dda-434c-4a77-a144-a6956f2e0407-config-data\") pod \"nova-cell0-conductor-db-sync-xg69r\" (UID: \"25ce0dda-434c-4a77-a144-a6956f2e0407\") " pod="openstack/nova-cell0-conductor-db-sync-xg69r" Dec 03 13:19:36 crc kubenswrapper[4986]: I1203 13:19:36.419259 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25ce0dda-434c-4a77-a144-a6956f2e0407-scripts\") pod \"nova-cell0-conductor-db-sync-xg69r\" (UID: \"25ce0dda-434c-4a77-a144-a6956f2e0407\") " pod="openstack/nova-cell0-conductor-db-sync-xg69r" Dec 03 13:19:36 crc kubenswrapper[4986]: I1203 13:19:36.419419 4986 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zgrn\" (UniqueName: \"kubernetes.io/projected/25ce0dda-434c-4a77-a144-a6956f2e0407-kube-api-access-7zgrn\") pod \"nova-cell0-conductor-db-sync-xg69r\" (UID: \"25ce0dda-434c-4a77-a144-a6956f2e0407\") " pod="openstack/nova-cell0-conductor-db-sync-xg69r" Dec 03 13:19:36 crc kubenswrapper[4986]: I1203 13:19:36.419642 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25ce0dda-434c-4a77-a144-a6956f2e0407-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-xg69r\" (UID: \"25ce0dda-434c-4a77-a144-a6956f2e0407\") " pod="openstack/nova-cell0-conductor-db-sync-xg69r" Dec 03 13:19:36 crc kubenswrapper[4986]: I1203 13:19:36.500910 4986 scope.go:117] "RemoveContainer" containerID="d7d30e426c52ae0b45fc8f0203633e1d5f3166da333a8d50729b7513d223357e" Dec 03 13:19:36 crc kubenswrapper[4986]: I1203 13:19:36.521310 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zgrn\" (UniqueName: \"kubernetes.io/projected/25ce0dda-434c-4a77-a144-a6956f2e0407-kube-api-access-7zgrn\") pod \"nova-cell0-conductor-db-sync-xg69r\" (UID: \"25ce0dda-434c-4a77-a144-a6956f2e0407\") " pod="openstack/nova-cell0-conductor-db-sync-xg69r" Dec 03 13:19:36 crc kubenswrapper[4986]: I1203 13:19:36.521434 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25ce0dda-434c-4a77-a144-a6956f2e0407-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-xg69r\" (UID: \"25ce0dda-434c-4a77-a144-a6956f2e0407\") " pod="openstack/nova-cell0-conductor-db-sync-xg69r" Dec 03 13:19:36 crc kubenswrapper[4986]: I1203 13:19:36.521540 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/25ce0dda-434c-4a77-a144-a6956f2e0407-config-data\") pod \"nova-cell0-conductor-db-sync-xg69r\" (UID: \"25ce0dda-434c-4a77-a144-a6956f2e0407\") " pod="openstack/nova-cell0-conductor-db-sync-xg69r" Dec 03 13:19:36 crc kubenswrapper[4986]: I1203 13:19:36.521569 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25ce0dda-434c-4a77-a144-a6956f2e0407-scripts\") pod \"nova-cell0-conductor-db-sync-xg69r\" (UID: \"25ce0dda-434c-4a77-a144-a6956f2e0407\") " pod="openstack/nova-cell0-conductor-db-sync-xg69r" Dec 03 13:19:36 crc kubenswrapper[4986]: I1203 13:19:36.527820 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25ce0dda-434c-4a77-a144-a6956f2e0407-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-xg69r\" (UID: \"25ce0dda-434c-4a77-a144-a6956f2e0407\") " pod="openstack/nova-cell0-conductor-db-sync-xg69r" Dec 03 13:19:36 crc kubenswrapper[4986]: I1203 13:19:36.528169 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25ce0dda-434c-4a77-a144-a6956f2e0407-scripts\") pod \"nova-cell0-conductor-db-sync-xg69r\" (UID: \"25ce0dda-434c-4a77-a144-a6956f2e0407\") " pod="openstack/nova-cell0-conductor-db-sync-xg69r" Dec 03 13:19:36 crc kubenswrapper[4986]: I1203 13:19:36.528998 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25ce0dda-434c-4a77-a144-a6956f2e0407-config-data\") pod \"nova-cell0-conductor-db-sync-xg69r\" (UID: \"25ce0dda-434c-4a77-a144-a6956f2e0407\") " pod="openstack/nova-cell0-conductor-db-sync-xg69r" Dec 03 13:19:36 crc kubenswrapper[4986]: I1203 13:19:36.540250 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zgrn\" (UniqueName: 
\"kubernetes.io/projected/25ce0dda-434c-4a77-a144-a6956f2e0407-kube-api-access-7zgrn\") pod \"nova-cell0-conductor-db-sync-xg69r\" (UID: \"25ce0dda-434c-4a77-a144-a6956f2e0407\") " pod="openstack/nova-cell0-conductor-db-sync-xg69r" Dec 03 13:19:36 crc kubenswrapper[4986]: I1203 13:19:36.541192 4986 scope.go:117] "RemoveContainer" containerID="b3a510936fd6eb78b8694de1aa0785b123dd535c543cc43a92b46a0c5eb19968" Dec 03 13:19:36 crc kubenswrapper[4986]: E1203 13:19:36.542895 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3a510936fd6eb78b8694de1aa0785b123dd535c543cc43a92b46a0c5eb19968\": container with ID starting with b3a510936fd6eb78b8694de1aa0785b123dd535c543cc43a92b46a0c5eb19968 not found: ID does not exist" containerID="b3a510936fd6eb78b8694de1aa0785b123dd535c543cc43a92b46a0c5eb19968" Dec 03 13:19:36 crc kubenswrapper[4986]: I1203 13:19:36.542925 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3a510936fd6eb78b8694de1aa0785b123dd535c543cc43a92b46a0c5eb19968"} err="failed to get container status \"b3a510936fd6eb78b8694de1aa0785b123dd535c543cc43a92b46a0c5eb19968\": rpc error: code = NotFound desc = could not find container \"b3a510936fd6eb78b8694de1aa0785b123dd535c543cc43a92b46a0c5eb19968\": container with ID starting with b3a510936fd6eb78b8694de1aa0785b123dd535c543cc43a92b46a0c5eb19968 not found: ID does not exist" Dec 03 13:19:36 crc kubenswrapper[4986]: I1203 13:19:36.542953 4986 scope.go:117] "RemoveContainer" containerID="d7d30e426c52ae0b45fc8f0203633e1d5f3166da333a8d50729b7513d223357e" Dec 03 13:19:36 crc kubenswrapper[4986]: E1203 13:19:36.544050 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7d30e426c52ae0b45fc8f0203633e1d5f3166da333a8d50729b7513d223357e\": container with ID starting with 
d7d30e426c52ae0b45fc8f0203633e1d5f3166da333a8d50729b7513d223357e not found: ID does not exist" containerID="d7d30e426c52ae0b45fc8f0203633e1d5f3166da333a8d50729b7513d223357e" Dec 03 13:19:36 crc kubenswrapper[4986]: I1203 13:19:36.544098 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7d30e426c52ae0b45fc8f0203633e1d5f3166da333a8d50729b7513d223357e"} err="failed to get container status \"d7d30e426c52ae0b45fc8f0203633e1d5f3166da333a8d50729b7513d223357e\": rpc error: code = NotFound desc = could not find container \"d7d30e426c52ae0b45fc8f0203633e1d5f3166da333a8d50729b7513d223357e\": container with ID starting with d7d30e426c52ae0b45fc8f0203633e1d5f3166da333a8d50729b7513d223357e not found: ID does not exist" Dec 03 13:19:36 crc kubenswrapper[4986]: I1203 13:19:36.661759 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xg69r" Dec 03 13:19:36 crc kubenswrapper[4986]: I1203 13:19:36.958799 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cee5d9b6-d11e-4bad-b013-196a4f401404" path="/var/lib/kubelet/pods/cee5d9b6-d11e-4bad-b013-196a4f401404/volumes" Dec 03 13:19:36 crc kubenswrapper[4986]: I1203 13:19:36.959360 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xg69r"] Dec 03 13:19:37 crc kubenswrapper[4986]: I1203 13:19:37.288015 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xg69r" event={"ID":"25ce0dda-434c-4a77-a144-a6956f2e0407","Type":"ContainerStarted","Data":"92214bc63f75dcba1928c515b49031c47ad71fa4375ee8b2c30f60aa4466aede"} Dec 03 13:19:37 crc kubenswrapper[4986]: I1203 13:19:37.473113 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 03 13:19:40 crc kubenswrapper[4986]: I1203 13:19:40.558269 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 03 13:19:40 crc kubenswrapper[4986]: I1203 13:19:40.558864 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 03 13:19:40 crc kubenswrapper[4986]: I1203 13:19:40.593851 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 03 13:19:40 crc kubenswrapper[4986]: I1203 13:19:40.603458 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 03 13:19:41 crc kubenswrapper[4986]: I1203 13:19:41.331690 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 03 13:19:41 crc kubenswrapper[4986]: I1203 13:19:41.331758 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 03 13:19:41 crc kubenswrapper[4986]: I1203 13:19:41.745212 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hm7d5"] Dec 03 13:19:41 crc kubenswrapper[4986]: I1203 13:19:41.747833 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hm7d5" Dec 03 13:19:41 crc kubenswrapper[4986]: I1203 13:19:41.779669 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hm7d5"] Dec 03 13:19:41 crc kubenswrapper[4986]: I1203 13:19:41.865688 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4fce6b6-3b41-4cb2-99cf-910b09724dd5-catalog-content\") pod \"certified-operators-hm7d5\" (UID: \"b4fce6b6-3b41-4cb2-99cf-910b09724dd5\") " pod="openshift-marketplace/certified-operators-hm7d5" Dec 03 13:19:41 crc kubenswrapper[4986]: I1203 13:19:41.865776 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7msz4\" (UniqueName: \"kubernetes.io/projected/b4fce6b6-3b41-4cb2-99cf-910b09724dd5-kube-api-access-7msz4\") pod \"certified-operators-hm7d5\" (UID: \"b4fce6b6-3b41-4cb2-99cf-910b09724dd5\") " pod="openshift-marketplace/certified-operators-hm7d5" Dec 03 13:19:41 crc kubenswrapper[4986]: I1203 13:19:41.865821 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4fce6b6-3b41-4cb2-99cf-910b09724dd5-utilities\") pod \"certified-operators-hm7d5\" (UID: \"b4fce6b6-3b41-4cb2-99cf-910b09724dd5\") " pod="openshift-marketplace/certified-operators-hm7d5" Dec 03 13:19:41 crc kubenswrapper[4986]: I1203 13:19:41.966975 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4fce6b6-3b41-4cb2-99cf-910b09724dd5-catalog-content\") pod \"certified-operators-hm7d5\" (UID: \"b4fce6b6-3b41-4cb2-99cf-910b09724dd5\") " pod="openshift-marketplace/certified-operators-hm7d5" Dec 03 13:19:41 crc kubenswrapper[4986]: I1203 13:19:41.967040 4986 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-7msz4\" (UniqueName: \"kubernetes.io/projected/b4fce6b6-3b41-4cb2-99cf-910b09724dd5-kube-api-access-7msz4\") pod \"certified-operators-hm7d5\" (UID: \"b4fce6b6-3b41-4cb2-99cf-910b09724dd5\") " pod="openshift-marketplace/certified-operators-hm7d5" Dec 03 13:19:41 crc kubenswrapper[4986]: I1203 13:19:41.967091 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4fce6b6-3b41-4cb2-99cf-910b09724dd5-utilities\") pod \"certified-operators-hm7d5\" (UID: \"b4fce6b6-3b41-4cb2-99cf-910b09724dd5\") " pod="openshift-marketplace/certified-operators-hm7d5" Dec 03 13:19:41 crc kubenswrapper[4986]: I1203 13:19:41.969761 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4fce6b6-3b41-4cb2-99cf-910b09724dd5-catalog-content\") pod \"certified-operators-hm7d5\" (UID: \"b4fce6b6-3b41-4cb2-99cf-910b09724dd5\") " pod="openshift-marketplace/certified-operators-hm7d5" Dec 03 13:19:41 crc kubenswrapper[4986]: I1203 13:19:41.969774 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4fce6b6-3b41-4cb2-99cf-910b09724dd5-utilities\") pod \"certified-operators-hm7d5\" (UID: \"b4fce6b6-3b41-4cb2-99cf-910b09724dd5\") " pod="openshift-marketplace/certified-operators-hm7d5" Dec 03 13:19:41 crc kubenswrapper[4986]: I1203 13:19:41.990971 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7msz4\" (UniqueName: \"kubernetes.io/projected/b4fce6b6-3b41-4cb2-99cf-910b09724dd5-kube-api-access-7msz4\") pod \"certified-operators-hm7d5\" (UID: \"b4fce6b6-3b41-4cb2-99cf-910b09724dd5\") " pod="openshift-marketplace/certified-operators-hm7d5" Dec 03 13:19:42 crc kubenswrapper[4986]: I1203 13:19:42.089293 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hm7d5" Dec 03 13:19:42 crc kubenswrapper[4986]: I1203 13:19:42.670011 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 03 13:19:42 crc kubenswrapper[4986]: I1203 13:19:42.670381 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 03 13:19:42 crc kubenswrapper[4986]: I1203 13:19:42.712007 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 03 13:19:42 crc kubenswrapper[4986]: I1203 13:19:42.724166 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 03 13:19:43 crc kubenswrapper[4986]: I1203 13:19:43.348393 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 03 13:19:43 crc kubenswrapper[4986]: I1203 13:19:43.348437 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 03 13:19:44 crc kubenswrapper[4986]: I1203 13:19:44.058362 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 03 13:19:44 crc kubenswrapper[4986]: I1203 13:19:44.058729 4986 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 13:19:44 crc kubenswrapper[4986]: I1203 13:19:44.060610 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 03 13:19:45 crc kubenswrapper[4986]: I1203 13:19:45.874522 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 03 13:19:45 crc kubenswrapper[4986]: I1203 13:19:45.874884 4986 prober_manager.go:312] "Failed to trigger a manual run" 
probe="Readiness" Dec 03 13:19:45 crc kubenswrapper[4986]: I1203 13:19:45.961022 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 03 13:19:48 crc kubenswrapper[4986]: E1203 13:19:48.815971 4986 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified" Dec 03 13:19:48 crc kubenswrapper[4986]: E1203 13:19:48.816942 4986 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:nova-cell0-conductor-db-sync,Image:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CELL_NAME,Value:cell0,ValueFrom:nil,},EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:false,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/kolla/config_files/config.json,SubPath:nova-conductor-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7zgrn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},Liv
enessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42436,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-cell0-conductor-db-sync-xg69r_openstack(25ce0dda-434c-4a77-a144-a6956f2e0407): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 13:19:48 crc kubenswrapper[4986]: E1203 13:19:48.818446 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/nova-cell0-conductor-db-sync-xg69r" podUID="25ce0dda-434c-4a77-a144-a6956f2e0407" Dec 03 13:19:49 crc kubenswrapper[4986]: I1203 13:19:49.267270 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hm7d5"] Dec 03 13:19:49 crc kubenswrapper[4986]: I1203 13:19:49.413297 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hm7d5" event={"ID":"b4fce6b6-3b41-4cb2-99cf-910b09724dd5","Type":"ContainerStarted","Data":"8bd1e0773383fcf0f9aca22762c67e307160b903ccd58a00f28c7cf39001d28b"} Dec 03 13:19:49 crc kubenswrapper[4986]: E1203 13:19:49.414746 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified\\\"\"" pod="openstack/nova-cell0-conductor-db-sync-xg69r" podUID="25ce0dda-434c-4a77-a144-a6956f2e0407" Dec 03 13:19:50 crc kubenswrapper[4986]: I1203 13:19:50.426461 4986 generic.go:334] "Generic (PLEG): container finished" podID="b4fce6b6-3b41-4cb2-99cf-910b09724dd5" containerID="21358fb275dc97b084cab4c48b9992407c2ed0977ff076078931f51735447e31" exitCode=0 Dec 03 13:19:50 crc kubenswrapper[4986]: I1203 13:19:50.426580 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hm7d5" event={"ID":"b4fce6b6-3b41-4cb2-99cf-910b09724dd5","Type":"ContainerDied","Data":"21358fb275dc97b084cab4c48b9992407c2ed0977ff076078931f51735447e31"} Dec 03 13:19:50 crc kubenswrapper[4986]: I1203 13:19:50.432625 4986 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 13:19:52 crc kubenswrapper[4986]: I1203 13:19:52.445921 4986 generic.go:334] "Generic (PLEG): container finished" podID="b4fce6b6-3b41-4cb2-99cf-910b09724dd5" containerID="1436bba8d5bde05954d864a671a3bd68d384aef7985a2a247c900737f1f3018f" exitCode=0 Dec 03 13:19:52 crc kubenswrapper[4986]: I1203 13:19:52.445990 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hm7d5" event={"ID":"b4fce6b6-3b41-4cb2-99cf-910b09724dd5","Type":"ContainerDied","Data":"1436bba8d5bde05954d864a671a3bd68d384aef7985a2a247c900737f1f3018f"} Dec 03 13:19:53 crc kubenswrapper[4986]: I1203 13:19:53.458170 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hm7d5" event={"ID":"b4fce6b6-3b41-4cb2-99cf-910b09724dd5","Type":"ContainerStarted","Data":"6b017156d7b165e59c7f38c32951ba9ce033aac63f4a712eef3221340d3073ab"} Dec 03 13:19:53 crc kubenswrapper[4986]: I1203 13:19:53.479498 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-hm7d5" podStartSLOduration=10.106712743 podStartE2EDuration="12.479474929s" podCreationTimestamp="2025-12-03 13:19:41 +0000 UTC" firstStartedPulling="2025-12-03 13:19:50.432330576 +0000 UTC m=+1449.898761777" lastFinishedPulling="2025-12-03 13:19:52.805092772 +0000 UTC m=+1452.271523963" observedRunningTime="2025-12-03 13:19:53.473056676 +0000 UTC m=+1452.939487907" watchObservedRunningTime="2025-12-03 13:19:53.479474929 +0000 UTC m=+1452.945906120" Dec 03 13:19:57 crc kubenswrapper[4986]: I1203 13:19:57.339296 4986 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="2eeabafd-9779-4486-bde1-9fdaf83cf88a" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 03 13:20:02 crc kubenswrapper[4986]: I1203 13:20:02.089859 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hm7d5" Dec 03 13:20:02 crc kubenswrapper[4986]: I1203 13:20:02.090378 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hm7d5" Dec 03 13:20:02 crc kubenswrapper[4986]: I1203 13:20:02.152567 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hm7d5" Dec 03 13:20:02 crc kubenswrapper[4986]: I1203 13:20:02.532672 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xg69r" event={"ID":"25ce0dda-434c-4a77-a144-a6956f2e0407","Type":"ContainerStarted","Data":"068ac7536234da0a7a4f4b655aea2c95e576ab15ae3a6933a9e16f518c503a4b"} Dec 03 13:20:02 crc kubenswrapper[4986]: I1203 13:20:02.561432 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-xg69r" podStartSLOduration=2.134929437 podStartE2EDuration="26.561415662s" podCreationTimestamp="2025-12-03 13:19:36 +0000 UTC" 
firstStartedPulling="2025-12-03 13:19:36.962487321 +0000 UTC m=+1436.428918512" lastFinishedPulling="2025-12-03 13:20:01.388973536 +0000 UTC m=+1460.855404737" observedRunningTime="2025-12-03 13:20:02.551422492 +0000 UTC m=+1462.017853703" watchObservedRunningTime="2025-12-03 13:20:02.561415662 +0000 UTC m=+1462.027846853" Dec 03 13:20:02 crc kubenswrapper[4986]: I1203 13:20:02.584322 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hm7d5" Dec 03 13:20:02 crc kubenswrapper[4986]: I1203 13:20:02.641527 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hm7d5"] Dec 03 13:20:03 crc kubenswrapper[4986]: I1203 13:20:03.547854 4986 generic.go:334] "Generic (PLEG): container finished" podID="2eeabafd-9779-4486-bde1-9fdaf83cf88a" containerID="68070736734de20d1f8b18cf401705263490156d987c729dda1ddad098d9e1a4" exitCode=137 Dec 03 13:20:03 crc kubenswrapper[4986]: I1203 13:20:03.548013 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2eeabafd-9779-4486-bde1-9fdaf83cf88a","Type":"ContainerDied","Data":"68070736734de20d1f8b18cf401705263490156d987c729dda1ddad098d9e1a4"} Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.138228 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.183424 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bx6nd\" (UniqueName: \"kubernetes.io/projected/2eeabafd-9779-4486-bde1-9fdaf83cf88a-kube-api-access-bx6nd\") pod \"2eeabafd-9779-4486-bde1-9fdaf83cf88a\" (UID: \"2eeabafd-9779-4486-bde1-9fdaf83cf88a\") " Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.183491 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2eeabafd-9779-4486-bde1-9fdaf83cf88a-sg-core-conf-yaml\") pod \"2eeabafd-9779-4486-bde1-9fdaf83cf88a\" (UID: \"2eeabafd-9779-4486-bde1-9fdaf83cf88a\") " Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.183619 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2eeabafd-9779-4486-bde1-9fdaf83cf88a-config-data\") pod \"2eeabafd-9779-4486-bde1-9fdaf83cf88a\" (UID: \"2eeabafd-9779-4486-bde1-9fdaf83cf88a\") " Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.183640 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eeabafd-9779-4486-bde1-9fdaf83cf88a-combined-ca-bundle\") pod \"2eeabafd-9779-4486-bde1-9fdaf83cf88a\" (UID: \"2eeabafd-9779-4486-bde1-9fdaf83cf88a\") " Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.183665 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2eeabafd-9779-4486-bde1-9fdaf83cf88a-run-httpd\") pod \"2eeabafd-9779-4486-bde1-9fdaf83cf88a\" (UID: \"2eeabafd-9779-4486-bde1-9fdaf83cf88a\") " Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.183703 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/2eeabafd-9779-4486-bde1-9fdaf83cf88a-scripts\") pod \"2eeabafd-9779-4486-bde1-9fdaf83cf88a\" (UID: \"2eeabafd-9779-4486-bde1-9fdaf83cf88a\") " Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.183755 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2eeabafd-9779-4486-bde1-9fdaf83cf88a-log-httpd\") pod \"2eeabafd-9779-4486-bde1-9fdaf83cf88a\" (UID: \"2eeabafd-9779-4486-bde1-9fdaf83cf88a\") " Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.183775 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2eeabafd-9779-4486-bde1-9fdaf83cf88a-ceilometer-tls-certs\") pod \"2eeabafd-9779-4486-bde1-9fdaf83cf88a\" (UID: \"2eeabafd-9779-4486-bde1-9fdaf83cf88a\") " Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.184052 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2eeabafd-9779-4486-bde1-9fdaf83cf88a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2eeabafd-9779-4486-bde1-9fdaf83cf88a" (UID: "2eeabafd-9779-4486-bde1-9fdaf83cf88a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.184246 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2eeabafd-9779-4486-bde1-9fdaf83cf88a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2eeabafd-9779-4486-bde1-9fdaf83cf88a" (UID: "2eeabafd-9779-4486-bde1-9fdaf83cf88a"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.184343 4986 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2eeabafd-9779-4486-bde1-9fdaf83cf88a-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.189408 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2eeabafd-9779-4486-bde1-9fdaf83cf88a-scripts" (OuterVolumeSpecName: "scripts") pod "2eeabafd-9779-4486-bde1-9fdaf83cf88a" (UID: "2eeabafd-9779-4486-bde1-9fdaf83cf88a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.192504 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2eeabafd-9779-4486-bde1-9fdaf83cf88a-kube-api-access-bx6nd" (OuterVolumeSpecName: "kube-api-access-bx6nd") pod "2eeabafd-9779-4486-bde1-9fdaf83cf88a" (UID: "2eeabafd-9779-4486-bde1-9fdaf83cf88a"). InnerVolumeSpecName "kube-api-access-bx6nd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.236617 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2eeabafd-9779-4486-bde1-9fdaf83cf88a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2eeabafd-9779-4486-bde1-9fdaf83cf88a" (UID: "2eeabafd-9779-4486-bde1-9fdaf83cf88a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.237968 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2eeabafd-9779-4486-bde1-9fdaf83cf88a-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "2eeabafd-9779-4486-bde1-9fdaf83cf88a" (UID: "2eeabafd-9779-4486-bde1-9fdaf83cf88a"). 
InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.258047 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2eeabafd-9779-4486-bde1-9fdaf83cf88a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2eeabafd-9779-4486-bde1-9fdaf83cf88a" (UID: "2eeabafd-9779-4486-bde1-9fdaf83cf88a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.285506 4986 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eeabafd-9779-4486-bde1-9fdaf83cf88a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.285541 4986 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2eeabafd-9779-4486-bde1-9fdaf83cf88a-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.285554 4986 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2eeabafd-9779-4486-bde1-9fdaf83cf88a-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.285566 4986 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2eeabafd-9779-4486-bde1-9fdaf83cf88a-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.285578 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bx6nd\" (UniqueName: \"kubernetes.io/projected/2eeabafd-9779-4486-bde1-9fdaf83cf88a-kube-api-access-bx6nd\") on node \"crc\" DevicePath \"\"" Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.285592 4986 reconciler_common.go:293] "Volume detached 
for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2eeabafd-9779-4486-bde1-9fdaf83cf88a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.327425 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2eeabafd-9779-4486-bde1-9fdaf83cf88a-config-data" (OuterVolumeSpecName: "config-data") pod "2eeabafd-9779-4486-bde1-9fdaf83cf88a" (UID: "2eeabafd-9779-4486-bde1-9fdaf83cf88a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.387201 4986 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2eeabafd-9779-4486-bde1-9fdaf83cf88a-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.562480 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2eeabafd-9779-4486-bde1-9fdaf83cf88a","Type":"ContainerDied","Data":"3cbf3c0aa6bfb4e1950fcec528423824dc0767804e4d3ac7d8bd11d52f5ca79e"} Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.562539 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.562556 4986 scope.go:117] "RemoveContainer" containerID="68070736734de20d1f8b18cf401705263490156d987c729dda1ddad098d9e1a4" Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.562643 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hm7d5" podUID="b4fce6b6-3b41-4cb2-99cf-910b09724dd5" containerName="registry-server" containerID="cri-o://6b017156d7b165e59c7f38c32951ba9ce033aac63f4a712eef3221340d3073ab" gracePeriod=2 Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.591816 4986 scope.go:117] "RemoveContainer" containerID="24040e342f3df8f8c3e6a04a7e5f0a3f9f93a9fa727a36c48125d4f11717b174" Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.616058 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.627527 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.634632 4986 scope.go:117] "RemoveContainer" containerID="8f45a89ce34a47281bbd82265b9335e1eb74df5eea4b67e8005eea599d1235ac" Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.655028 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 03 13:20:04 crc kubenswrapper[4986]: E1203 13:20:04.655948 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2eeabafd-9779-4486-bde1-9fdaf83cf88a" containerName="sg-core" Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.656052 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eeabafd-9779-4486-bde1-9fdaf83cf88a" containerName="sg-core" Dec 03 13:20:04 crc kubenswrapper[4986]: E1203 13:20:04.656130 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2eeabafd-9779-4486-bde1-9fdaf83cf88a" containerName="ceilometer-central-agent" 
Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.656141 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eeabafd-9779-4486-bde1-9fdaf83cf88a" containerName="ceilometer-central-agent" Dec 03 13:20:04 crc kubenswrapper[4986]: E1203 13:20:04.656155 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2eeabafd-9779-4486-bde1-9fdaf83cf88a" containerName="proxy-httpd" Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.656216 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eeabafd-9779-4486-bde1-9fdaf83cf88a" containerName="proxy-httpd" Dec 03 13:20:04 crc kubenswrapper[4986]: E1203 13:20:04.656273 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2eeabafd-9779-4486-bde1-9fdaf83cf88a" containerName="ceilometer-notification-agent" Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.656324 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eeabafd-9779-4486-bde1-9fdaf83cf88a" containerName="ceilometer-notification-agent" Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.657060 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eeabafd-9779-4486-bde1-9fdaf83cf88a" containerName="ceilometer-central-agent" Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.657113 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eeabafd-9779-4486-bde1-9fdaf83cf88a" containerName="ceilometer-notification-agent" Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.657152 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eeabafd-9779-4486-bde1-9fdaf83cf88a" containerName="proxy-httpd" Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.657161 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eeabafd-9779-4486-bde1-9fdaf83cf88a" containerName="sg-core" Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.664893 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.669163 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.669392 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.669532 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.682240 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.694096 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec549fb8-01cd-4c44-840f-430a74bc5cee-log-httpd\") pod \"ceilometer-0\" (UID: \"ec549fb8-01cd-4c44-840f-430a74bc5cee\") " pod="openstack/ceilometer-0" Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.694187 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec549fb8-01cd-4c44-840f-430a74bc5cee-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ec549fb8-01cd-4c44-840f-430a74bc5cee\") " pod="openstack/ceilometer-0" Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.694227 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec549fb8-01cd-4c44-840f-430a74bc5cee-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ec549fb8-01cd-4c44-840f-430a74bc5cee\") " pod="openstack/ceilometer-0" Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.695209 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec549fb8-01cd-4c44-840f-430a74bc5cee-run-httpd\") pod \"ceilometer-0\" (UID: \"ec549fb8-01cd-4c44-840f-430a74bc5cee\") " pod="openstack/ceilometer-0" Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.695232 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec549fb8-01cd-4c44-840f-430a74bc5cee-config-data\") pod \"ceilometer-0\" (UID: \"ec549fb8-01cd-4c44-840f-430a74bc5cee\") " pod="openstack/ceilometer-0" Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.695467 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ec549fb8-01cd-4c44-840f-430a74bc5cee-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ec549fb8-01cd-4c44-840f-430a74bc5cee\") " pod="openstack/ceilometer-0" Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.695536 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec549fb8-01cd-4c44-840f-430a74bc5cee-scripts\") pod \"ceilometer-0\" (UID: \"ec549fb8-01cd-4c44-840f-430a74bc5cee\") " pod="openstack/ceilometer-0" Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.695625 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrzst\" (UniqueName: \"kubernetes.io/projected/ec549fb8-01cd-4c44-840f-430a74bc5cee-kube-api-access-hrzst\") pod \"ceilometer-0\" (UID: \"ec549fb8-01cd-4c44-840f-430a74bc5cee\") " pod="openstack/ceilometer-0" Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.743162 4986 scope.go:117] "RemoveContainer" containerID="8840a305734b65731f05fa8c564f0e6a333ea9694df6852bb2fa5f45380b57e4" Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.797417 4986 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ec549fb8-01cd-4c44-840f-430a74bc5cee-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ec549fb8-01cd-4c44-840f-430a74bc5cee\") " pod="openstack/ceilometer-0" Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.797460 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec549fb8-01cd-4c44-840f-430a74bc5cee-scripts\") pod \"ceilometer-0\" (UID: \"ec549fb8-01cd-4c44-840f-430a74bc5cee\") " pod="openstack/ceilometer-0" Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.797497 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrzst\" (UniqueName: \"kubernetes.io/projected/ec549fb8-01cd-4c44-840f-430a74bc5cee-kube-api-access-hrzst\") pod \"ceilometer-0\" (UID: \"ec549fb8-01cd-4c44-840f-430a74bc5cee\") " pod="openstack/ceilometer-0" Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.797560 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec549fb8-01cd-4c44-840f-430a74bc5cee-log-httpd\") pod \"ceilometer-0\" (UID: \"ec549fb8-01cd-4c44-840f-430a74bc5cee\") " pod="openstack/ceilometer-0" Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.797623 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec549fb8-01cd-4c44-840f-430a74bc5cee-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ec549fb8-01cd-4c44-840f-430a74bc5cee\") " pod="openstack/ceilometer-0" Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.797648 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec549fb8-01cd-4c44-840f-430a74bc5cee-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"ec549fb8-01cd-4c44-840f-430a74bc5cee\") " pod="openstack/ceilometer-0" Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.797672 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec549fb8-01cd-4c44-840f-430a74bc5cee-run-httpd\") pod \"ceilometer-0\" (UID: \"ec549fb8-01cd-4c44-840f-430a74bc5cee\") " pod="openstack/ceilometer-0" Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.797695 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec549fb8-01cd-4c44-840f-430a74bc5cee-config-data\") pod \"ceilometer-0\" (UID: \"ec549fb8-01cd-4c44-840f-430a74bc5cee\") " pod="openstack/ceilometer-0" Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.798265 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec549fb8-01cd-4c44-840f-430a74bc5cee-log-httpd\") pod \"ceilometer-0\" (UID: \"ec549fb8-01cd-4c44-840f-430a74bc5cee\") " pod="openstack/ceilometer-0" Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.798356 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec549fb8-01cd-4c44-840f-430a74bc5cee-run-httpd\") pod \"ceilometer-0\" (UID: \"ec549fb8-01cd-4c44-840f-430a74bc5cee\") " pod="openstack/ceilometer-0" Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.802823 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec549fb8-01cd-4c44-840f-430a74bc5cee-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ec549fb8-01cd-4c44-840f-430a74bc5cee\") " pod="openstack/ceilometer-0" Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.802855 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/ec549fb8-01cd-4c44-840f-430a74bc5cee-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ec549fb8-01cd-4c44-840f-430a74bc5cee\") " pod="openstack/ceilometer-0" Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.803522 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec549fb8-01cd-4c44-840f-430a74bc5cee-config-data\") pod \"ceilometer-0\" (UID: \"ec549fb8-01cd-4c44-840f-430a74bc5cee\") " pod="openstack/ceilometer-0" Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.804202 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec549fb8-01cd-4c44-840f-430a74bc5cee-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ec549fb8-01cd-4c44-840f-430a74bc5cee\") " pod="openstack/ceilometer-0" Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.815590 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrzst\" (UniqueName: \"kubernetes.io/projected/ec549fb8-01cd-4c44-840f-430a74bc5cee-kube-api-access-hrzst\") pod \"ceilometer-0\" (UID: \"ec549fb8-01cd-4c44-840f-430a74bc5cee\") " pod="openstack/ceilometer-0" Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.815599 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec549fb8-01cd-4c44-840f-430a74bc5cee-scripts\") pod \"ceilometer-0\" (UID: \"ec549fb8-01cd-4c44-840f-430a74bc5cee\") " pod="openstack/ceilometer-0" Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.948923 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hm7d5" Dec 03 13:20:04 crc kubenswrapper[4986]: I1203 13:20:04.954650 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2eeabafd-9779-4486-bde1-9fdaf83cf88a" path="/var/lib/kubelet/pods/2eeabafd-9779-4486-bde1-9fdaf83cf88a/volumes" Dec 03 13:20:05 crc kubenswrapper[4986]: I1203 13:20:05.000085 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4fce6b6-3b41-4cb2-99cf-910b09724dd5-catalog-content\") pod \"b4fce6b6-3b41-4cb2-99cf-910b09724dd5\" (UID: \"b4fce6b6-3b41-4cb2-99cf-910b09724dd5\") " Dec 03 13:20:05 crc kubenswrapper[4986]: I1203 13:20:05.000258 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7msz4\" (UniqueName: \"kubernetes.io/projected/b4fce6b6-3b41-4cb2-99cf-910b09724dd5-kube-api-access-7msz4\") pod \"b4fce6b6-3b41-4cb2-99cf-910b09724dd5\" (UID: \"b4fce6b6-3b41-4cb2-99cf-910b09724dd5\") " Dec 03 13:20:05 crc kubenswrapper[4986]: I1203 13:20:05.000375 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4fce6b6-3b41-4cb2-99cf-910b09724dd5-utilities\") pod \"b4fce6b6-3b41-4cb2-99cf-910b09724dd5\" (UID: \"b4fce6b6-3b41-4cb2-99cf-910b09724dd5\") " Dec 03 13:20:05 crc kubenswrapper[4986]: I1203 13:20:05.001256 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4fce6b6-3b41-4cb2-99cf-910b09724dd5-utilities" (OuterVolumeSpecName: "utilities") pod "b4fce6b6-3b41-4cb2-99cf-910b09724dd5" (UID: "b4fce6b6-3b41-4cb2-99cf-910b09724dd5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:20:05 crc kubenswrapper[4986]: I1203 13:20:05.022648 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4fce6b6-3b41-4cb2-99cf-910b09724dd5-kube-api-access-7msz4" (OuterVolumeSpecName: "kube-api-access-7msz4") pod "b4fce6b6-3b41-4cb2-99cf-910b09724dd5" (UID: "b4fce6b6-3b41-4cb2-99cf-910b09724dd5"). InnerVolumeSpecName "kube-api-access-7msz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:20:05 crc kubenswrapper[4986]: I1203 13:20:05.044691 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 13:20:05 crc kubenswrapper[4986]: I1203 13:20:05.046606 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4fce6b6-3b41-4cb2-99cf-910b09724dd5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b4fce6b6-3b41-4cb2-99cf-910b09724dd5" (UID: "b4fce6b6-3b41-4cb2-99cf-910b09724dd5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:20:05 crc kubenswrapper[4986]: I1203 13:20:05.103366 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7msz4\" (UniqueName: \"kubernetes.io/projected/b4fce6b6-3b41-4cb2-99cf-910b09724dd5-kube-api-access-7msz4\") on node \"crc\" DevicePath \"\"" Dec 03 13:20:05 crc kubenswrapper[4986]: I1203 13:20:05.103407 4986 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4fce6b6-3b41-4cb2-99cf-910b09724dd5-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 13:20:05 crc kubenswrapper[4986]: I1203 13:20:05.103416 4986 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4fce6b6-3b41-4cb2-99cf-910b09724dd5-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 13:20:05 crc kubenswrapper[4986]: I1203 13:20:05.575635 4986 generic.go:334] "Generic (PLEG): container finished" podID="b4fce6b6-3b41-4cb2-99cf-910b09724dd5" containerID="6b017156d7b165e59c7f38c32951ba9ce033aac63f4a712eef3221340d3073ab" exitCode=0 Dec 03 13:20:05 crc kubenswrapper[4986]: I1203 13:20:05.575721 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hm7d5" Dec 03 13:20:05 crc kubenswrapper[4986]: I1203 13:20:05.575743 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hm7d5" event={"ID":"b4fce6b6-3b41-4cb2-99cf-910b09724dd5","Type":"ContainerDied","Data":"6b017156d7b165e59c7f38c32951ba9ce033aac63f4a712eef3221340d3073ab"} Dec 03 13:20:05 crc kubenswrapper[4986]: I1203 13:20:05.577110 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hm7d5" event={"ID":"b4fce6b6-3b41-4cb2-99cf-910b09724dd5","Type":"ContainerDied","Data":"8bd1e0773383fcf0f9aca22762c67e307160b903ccd58a00f28c7cf39001d28b"} Dec 03 13:20:05 crc kubenswrapper[4986]: I1203 13:20:05.577142 4986 scope.go:117] "RemoveContainer" containerID="6b017156d7b165e59c7f38c32951ba9ce033aac63f4a712eef3221340d3073ab" Dec 03 13:20:05 crc kubenswrapper[4986]: I1203 13:20:05.587561 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 13:20:05 crc kubenswrapper[4986]: I1203 13:20:05.606578 4986 scope.go:117] "RemoveContainer" containerID="1436bba8d5bde05954d864a671a3bd68d384aef7985a2a247c900737f1f3018f" Dec 03 13:20:05 crc kubenswrapper[4986]: I1203 13:20:05.637836 4986 scope.go:117] "RemoveContainer" containerID="21358fb275dc97b084cab4c48b9992407c2ed0977ff076078931f51735447e31" Dec 03 13:20:05 crc kubenswrapper[4986]: I1203 13:20:05.689510 4986 scope.go:117] "RemoveContainer" containerID="6b017156d7b165e59c7f38c32951ba9ce033aac63f4a712eef3221340d3073ab" Dec 03 13:20:05 crc kubenswrapper[4986]: E1203 13:20:05.690189 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b017156d7b165e59c7f38c32951ba9ce033aac63f4a712eef3221340d3073ab\": container with ID starting with 6b017156d7b165e59c7f38c32951ba9ce033aac63f4a712eef3221340d3073ab not found: ID does not exist" 
containerID="6b017156d7b165e59c7f38c32951ba9ce033aac63f4a712eef3221340d3073ab" Dec 03 13:20:05 crc kubenswrapper[4986]: I1203 13:20:05.690233 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b017156d7b165e59c7f38c32951ba9ce033aac63f4a712eef3221340d3073ab"} err="failed to get container status \"6b017156d7b165e59c7f38c32951ba9ce033aac63f4a712eef3221340d3073ab\": rpc error: code = NotFound desc = could not find container \"6b017156d7b165e59c7f38c32951ba9ce033aac63f4a712eef3221340d3073ab\": container with ID starting with 6b017156d7b165e59c7f38c32951ba9ce033aac63f4a712eef3221340d3073ab not found: ID does not exist" Dec 03 13:20:05 crc kubenswrapper[4986]: I1203 13:20:05.690257 4986 scope.go:117] "RemoveContainer" containerID="1436bba8d5bde05954d864a671a3bd68d384aef7985a2a247c900737f1f3018f" Dec 03 13:20:05 crc kubenswrapper[4986]: E1203 13:20:05.691295 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1436bba8d5bde05954d864a671a3bd68d384aef7985a2a247c900737f1f3018f\": container with ID starting with 1436bba8d5bde05954d864a671a3bd68d384aef7985a2a247c900737f1f3018f not found: ID does not exist" containerID="1436bba8d5bde05954d864a671a3bd68d384aef7985a2a247c900737f1f3018f" Dec 03 13:20:05 crc kubenswrapper[4986]: I1203 13:20:05.691324 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1436bba8d5bde05954d864a671a3bd68d384aef7985a2a247c900737f1f3018f"} err="failed to get container status \"1436bba8d5bde05954d864a671a3bd68d384aef7985a2a247c900737f1f3018f\": rpc error: code = NotFound desc = could not find container \"1436bba8d5bde05954d864a671a3bd68d384aef7985a2a247c900737f1f3018f\": container with ID starting with 1436bba8d5bde05954d864a671a3bd68d384aef7985a2a247c900737f1f3018f not found: ID does not exist" Dec 03 13:20:05 crc kubenswrapper[4986]: I1203 13:20:05.691343 4986 scope.go:117] 
"RemoveContainer" containerID="21358fb275dc97b084cab4c48b9992407c2ed0977ff076078931f51735447e31" Dec 03 13:20:05 crc kubenswrapper[4986]: E1203 13:20:05.691700 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21358fb275dc97b084cab4c48b9992407c2ed0977ff076078931f51735447e31\": container with ID starting with 21358fb275dc97b084cab4c48b9992407c2ed0977ff076078931f51735447e31 not found: ID does not exist" containerID="21358fb275dc97b084cab4c48b9992407c2ed0977ff076078931f51735447e31" Dec 03 13:20:05 crc kubenswrapper[4986]: I1203 13:20:05.691748 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21358fb275dc97b084cab4c48b9992407c2ed0977ff076078931f51735447e31"} err="failed to get container status \"21358fb275dc97b084cab4c48b9992407c2ed0977ff076078931f51735447e31\": rpc error: code = NotFound desc = could not find container \"21358fb275dc97b084cab4c48b9992407c2ed0977ff076078931f51735447e31\": container with ID starting with 21358fb275dc97b084cab4c48b9992407c2ed0977ff076078931f51735447e31 not found: ID does not exist" Dec 03 13:20:05 crc kubenswrapper[4986]: I1203 13:20:05.701957 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hm7d5"] Dec 03 13:20:05 crc kubenswrapper[4986]: I1203 13:20:05.709041 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hm7d5"] Dec 03 13:20:06 crc kubenswrapper[4986]: I1203 13:20:06.591680 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec549fb8-01cd-4c44-840f-430a74bc5cee","Type":"ContainerStarted","Data":"d938a786309eb75905d43a2c7033163dfb0c0c469bd83e4e5c73c98cb3417d06"} Dec 03 13:20:06 crc kubenswrapper[4986]: I1203 13:20:06.591723 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"ec549fb8-01cd-4c44-840f-430a74bc5cee","Type":"ContainerStarted","Data":"18bc5eb239f1b709b87403a3a4a3b0dfcb807010fb30664f782e11ca7a45b9ee"} Dec 03 13:20:06 crc kubenswrapper[4986]: I1203 13:20:06.969982 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4fce6b6-3b41-4cb2-99cf-910b09724dd5" path="/var/lib/kubelet/pods/b4fce6b6-3b41-4cb2-99cf-910b09724dd5/volumes" Dec 03 13:20:07 crc kubenswrapper[4986]: I1203 13:20:07.601812 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec549fb8-01cd-4c44-840f-430a74bc5cee","Type":"ContainerStarted","Data":"8afc1bb268a4547f05459ece12933ad84c1a10e737f5ff373a520415c52bbc0a"} Dec 03 13:20:07 crc kubenswrapper[4986]: I1203 13:20:07.602071 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec549fb8-01cd-4c44-840f-430a74bc5cee","Type":"ContainerStarted","Data":"e7f08f7f4dc7edc655c61c14e217b81b71975b656410d27db7566303111db5db"} Dec 03 13:20:09 crc kubenswrapper[4986]: I1203 13:20:09.645310 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec549fb8-01cd-4c44-840f-430a74bc5cee","Type":"ContainerStarted","Data":"39025e399296b76588d708d6413f8d1abeb9768390837139ebb49d5e47e38f51"} Dec 03 13:20:09 crc kubenswrapper[4986]: I1203 13:20:09.645619 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 03 13:20:09 crc kubenswrapper[4986]: I1203 13:20:09.670657 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.486012152 podStartE2EDuration="5.670637294s" podCreationTimestamp="2025-12-03 13:20:04 +0000 UTC" firstStartedPulling="2025-12-03 13:20:05.606746616 +0000 UTC m=+1465.073177807" lastFinishedPulling="2025-12-03 13:20:08.791371758 +0000 UTC m=+1468.257802949" observedRunningTime="2025-12-03 13:20:09.663100591 +0000 UTC m=+1469.129531782" 
watchObservedRunningTime="2025-12-03 13:20:09.670637294 +0000 UTC m=+1469.137068485" Dec 03 13:20:12 crc kubenswrapper[4986]: I1203 13:20:12.675183 4986 generic.go:334] "Generic (PLEG): container finished" podID="25ce0dda-434c-4a77-a144-a6956f2e0407" containerID="068ac7536234da0a7a4f4b655aea2c95e576ab15ae3a6933a9e16f518c503a4b" exitCode=0 Dec 03 13:20:12 crc kubenswrapper[4986]: I1203 13:20:12.675310 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xg69r" event={"ID":"25ce0dda-434c-4a77-a144-a6956f2e0407","Type":"ContainerDied","Data":"068ac7536234da0a7a4f4b655aea2c95e576ab15ae3a6933a9e16f518c503a4b"} Dec 03 13:20:14 crc kubenswrapper[4986]: I1203 13:20:14.041361 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xg69r" Dec 03 13:20:14 crc kubenswrapper[4986]: I1203 13:20:14.162624 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25ce0dda-434c-4a77-a144-a6956f2e0407-config-data\") pod \"25ce0dda-434c-4a77-a144-a6956f2e0407\" (UID: \"25ce0dda-434c-4a77-a144-a6956f2e0407\") " Dec 03 13:20:14 crc kubenswrapper[4986]: I1203 13:20:14.162710 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25ce0dda-434c-4a77-a144-a6956f2e0407-combined-ca-bundle\") pod \"25ce0dda-434c-4a77-a144-a6956f2e0407\" (UID: \"25ce0dda-434c-4a77-a144-a6956f2e0407\") " Dec 03 13:20:14 crc kubenswrapper[4986]: I1203 13:20:14.163402 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25ce0dda-434c-4a77-a144-a6956f2e0407-scripts\") pod \"25ce0dda-434c-4a77-a144-a6956f2e0407\" (UID: \"25ce0dda-434c-4a77-a144-a6956f2e0407\") " Dec 03 13:20:14 crc kubenswrapper[4986]: I1203 13:20:14.163432 4986 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-7zgrn\" (UniqueName: \"kubernetes.io/projected/25ce0dda-434c-4a77-a144-a6956f2e0407-kube-api-access-7zgrn\") pod \"25ce0dda-434c-4a77-a144-a6956f2e0407\" (UID: \"25ce0dda-434c-4a77-a144-a6956f2e0407\") " Dec 03 13:20:14 crc kubenswrapper[4986]: I1203 13:20:14.168627 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25ce0dda-434c-4a77-a144-a6956f2e0407-scripts" (OuterVolumeSpecName: "scripts") pod "25ce0dda-434c-4a77-a144-a6956f2e0407" (UID: "25ce0dda-434c-4a77-a144-a6956f2e0407"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:20:14 crc kubenswrapper[4986]: I1203 13:20:14.171524 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25ce0dda-434c-4a77-a144-a6956f2e0407-kube-api-access-7zgrn" (OuterVolumeSpecName: "kube-api-access-7zgrn") pod "25ce0dda-434c-4a77-a144-a6956f2e0407" (UID: "25ce0dda-434c-4a77-a144-a6956f2e0407"). InnerVolumeSpecName "kube-api-access-7zgrn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:20:14 crc kubenswrapper[4986]: I1203 13:20:14.190318 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25ce0dda-434c-4a77-a144-a6956f2e0407-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "25ce0dda-434c-4a77-a144-a6956f2e0407" (UID: "25ce0dda-434c-4a77-a144-a6956f2e0407"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:20:14 crc kubenswrapper[4986]: I1203 13:20:14.194651 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25ce0dda-434c-4a77-a144-a6956f2e0407-config-data" (OuterVolumeSpecName: "config-data") pod "25ce0dda-434c-4a77-a144-a6956f2e0407" (UID: "25ce0dda-434c-4a77-a144-a6956f2e0407"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:20:14 crc kubenswrapper[4986]: I1203 13:20:14.265785 4986 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25ce0dda-434c-4a77-a144-a6956f2e0407-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 13:20:14 crc kubenswrapper[4986]: I1203 13:20:14.265815 4986 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25ce0dda-434c-4a77-a144-a6956f2e0407-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 13:20:14 crc kubenswrapper[4986]: I1203 13:20:14.265824 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zgrn\" (UniqueName: \"kubernetes.io/projected/25ce0dda-434c-4a77-a144-a6956f2e0407-kube-api-access-7zgrn\") on node \"crc\" DevicePath \"\"" Dec 03 13:20:14 crc kubenswrapper[4986]: I1203 13:20:14.265834 4986 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25ce0dda-434c-4a77-a144-a6956f2e0407-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 13:20:14 crc kubenswrapper[4986]: I1203 13:20:14.693535 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xg69r" event={"ID":"25ce0dda-434c-4a77-a144-a6956f2e0407","Type":"ContainerDied","Data":"92214bc63f75dcba1928c515b49031c47ad71fa4375ee8b2c30f60aa4466aede"} Dec 03 13:20:14 crc kubenswrapper[4986]: I1203 13:20:14.693603 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92214bc63f75dcba1928c515b49031c47ad71fa4375ee8b2c30f60aa4466aede" Dec 03 13:20:14 crc kubenswrapper[4986]: I1203 13:20:14.693608 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xg69r" Dec 03 13:20:14 crc kubenswrapper[4986]: I1203 13:20:14.806077 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 03 13:20:14 crc kubenswrapper[4986]: E1203 13:20:14.806435 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25ce0dda-434c-4a77-a144-a6956f2e0407" containerName="nova-cell0-conductor-db-sync" Dec 03 13:20:14 crc kubenswrapper[4986]: I1203 13:20:14.806453 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="25ce0dda-434c-4a77-a144-a6956f2e0407" containerName="nova-cell0-conductor-db-sync" Dec 03 13:20:14 crc kubenswrapper[4986]: E1203 13:20:14.806466 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4fce6b6-3b41-4cb2-99cf-910b09724dd5" containerName="extract-content" Dec 03 13:20:14 crc kubenswrapper[4986]: I1203 13:20:14.806473 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4fce6b6-3b41-4cb2-99cf-910b09724dd5" containerName="extract-content" Dec 03 13:20:14 crc kubenswrapper[4986]: E1203 13:20:14.806486 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4fce6b6-3b41-4cb2-99cf-910b09724dd5" containerName="registry-server" Dec 03 13:20:14 crc kubenswrapper[4986]: I1203 13:20:14.806495 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4fce6b6-3b41-4cb2-99cf-910b09724dd5" containerName="registry-server" Dec 03 13:20:14 crc kubenswrapper[4986]: E1203 13:20:14.806514 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4fce6b6-3b41-4cb2-99cf-910b09724dd5" containerName="extract-utilities" Dec 03 13:20:14 crc kubenswrapper[4986]: I1203 13:20:14.806524 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4fce6b6-3b41-4cb2-99cf-910b09724dd5" containerName="extract-utilities" Dec 03 13:20:14 crc kubenswrapper[4986]: I1203 13:20:14.806716 4986 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="25ce0dda-434c-4a77-a144-a6956f2e0407" containerName="nova-cell0-conductor-db-sync" Dec 03 13:20:14 crc kubenswrapper[4986]: I1203 13:20:14.806743 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4fce6b6-3b41-4cb2-99cf-910b09724dd5" containerName="registry-server" Dec 03 13:20:14 crc kubenswrapper[4986]: I1203 13:20:14.807336 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 03 13:20:14 crc kubenswrapper[4986]: I1203 13:20:14.810496 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 03 13:20:14 crc kubenswrapper[4986]: I1203 13:20:14.810684 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-clvtb" Dec 03 13:20:14 crc kubenswrapper[4986]: I1203 13:20:14.839988 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 03 13:20:14 crc kubenswrapper[4986]: I1203 13:20:14.977864 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0e9f871-dd3e-4b2b-813a-01ef0428cb44-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"d0e9f871-dd3e-4b2b-813a-01ef0428cb44\") " pod="openstack/nova-cell0-conductor-0" Dec 03 13:20:14 crc kubenswrapper[4986]: I1203 13:20:14.977906 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0e9f871-dd3e-4b2b-813a-01ef0428cb44-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"d0e9f871-dd3e-4b2b-813a-01ef0428cb44\") " pod="openstack/nova-cell0-conductor-0" Dec 03 13:20:14 crc kubenswrapper[4986]: I1203 13:20:14.978031 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnz8k\" (UniqueName: 
\"kubernetes.io/projected/d0e9f871-dd3e-4b2b-813a-01ef0428cb44-kube-api-access-cnz8k\") pod \"nova-cell0-conductor-0\" (UID: \"d0e9f871-dd3e-4b2b-813a-01ef0428cb44\") " pod="openstack/nova-cell0-conductor-0" Dec 03 13:20:15 crc kubenswrapper[4986]: I1203 13:20:15.079215 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnz8k\" (UniqueName: \"kubernetes.io/projected/d0e9f871-dd3e-4b2b-813a-01ef0428cb44-kube-api-access-cnz8k\") pod \"nova-cell0-conductor-0\" (UID: \"d0e9f871-dd3e-4b2b-813a-01ef0428cb44\") " pod="openstack/nova-cell0-conductor-0" Dec 03 13:20:15 crc kubenswrapper[4986]: I1203 13:20:15.079317 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0e9f871-dd3e-4b2b-813a-01ef0428cb44-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"d0e9f871-dd3e-4b2b-813a-01ef0428cb44\") " pod="openstack/nova-cell0-conductor-0" Dec 03 13:20:15 crc kubenswrapper[4986]: I1203 13:20:15.079349 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0e9f871-dd3e-4b2b-813a-01ef0428cb44-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"d0e9f871-dd3e-4b2b-813a-01ef0428cb44\") " pod="openstack/nova-cell0-conductor-0" Dec 03 13:20:15 crc kubenswrapper[4986]: I1203 13:20:15.083562 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0e9f871-dd3e-4b2b-813a-01ef0428cb44-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"d0e9f871-dd3e-4b2b-813a-01ef0428cb44\") " pod="openstack/nova-cell0-conductor-0" Dec 03 13:20:15 crc kubenswrapper[4986]: I1203 13:20:15.090880 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0e9f871-dd3e-4b2b-813a-01ef0428cb44-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: 
\"d0e9f871-dd3e-4b2b-813a-01ef0428cb44\") " pod="openstack/nova-cell0-conductor-0" Dec 03 13:20:15 crc kubenswrapper[4986]: I1203 13:20:15.095665 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnz8k\" (UniqueName: \"kubernetes.io/projected/d0e9f871-dd3e-4b2b-813a-01ef0428cb44-kube-api-access-cnz8k\") pod \"nova-cell0-conductor-0\" (UID: \"d0e9f871-dd3e-4b2b-813a-01ef0428cb44\") " pod="openstack/nova-cell0-conductor-0" Dec 03 13:20:15 crc kubenswrapper[4986]: I1203 13:20:15.127002 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 03 13:20:15 crc kubenswrapper[4986]: I1203 13:20:15.606228 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 03 13:20:15 crc kubenswrapper[4986]: I1203 13:20:15.705803 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"d0e9f871-dd3e-4b2b-813a-01ef0428cb44","Type":"ContainerStarted","Data":"ede4537f3101cce744e5d377cee8bc969d31c310060dd3dc706507da158d31a3"} Dec 03 13:20:16 crc kubenswrapper[4986]: I1203 13:20:16.717502 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"d0e9f871-dd3e-4b2b-813a-01ef0428cb44","Type":"ContainerStarted","Data":"3e789e37641278e580ecf26478809b9c20cc1d0541f69eecff0b69ebf793d43b"} Dec 03 13:20:16 crc kubenswrapper[4986]: I1203 13:20:16.718113 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Dec 03 13:20:20 crc kubenswrapper[4986]: I1203 13:20:20.152500 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Dec 03 13:20:20 crc kubenswrapper[4986]: I1203 13:20:20.171280 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=6.17125779 
podStartE2EDuration="6.17125779s" podCreationTimestamp="2025-12-03 13:20:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:20:16.731399409 +0000 UTC m=+1476.197830600" watchObservedRunningTime="2025-12-03 13:20:20.17125779 +0000 UTC m=+1479.637688981" Dec 03 13:20:20 crc kubenswrapper[4986]: I1203 13:20:20.586056 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-bxnmh"] Dec 03 13:20:20 crc kubenswrapper[4986]: I1203 13:20:20.587924 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-bxnmh" Dec 03 13:20:20 crc kubenswrapper[4986]: I1203 13:20:20.590213 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Dec 03 13:20:20 crc kubenswrapper[4986]: I1203 13:20:20.593521 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Dec 03 13:20:20 crc kubenswrapper[4986]: I1203 13:20:20.599533 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-bxnmh"] Dec 03 13:20:20 crc kubenswrapper[4986]: I1203 13:20:20.785221 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fd749d1-c0e5-4462-a7d0-62586902c0b7-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-bxnmh\" (UID: \"7fd749d1-c0e5-4462-a7d0-62586902c0b7\") " pod="openstack/nova-cell0-cell-mapping-bxnmh" Dec 03 13:20:20 crc kubenswrapper[4986]: I1203 13:20:20.785321 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fd749d1-c0e5-4462-a7d0-62586902c0b7-config-data\") pod \"nova-cell0-cell-mapping-bxnmh\" (UID: \"7fd749d1-c0e5-4462-a7d0-62586902c0b7\") " 
pod="openstack/nova-cell0-cell-mapping-bxnmh" Dec 03 13:20:20 crc kubenswrapper[4986]: I1203 13:20:20.785439 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv5dh\" (UniqueName: \"kubernetes.io/projected/7fd749d1-c0e5-4462-a7d0-62586902c0b7-kube-api-access-cv5dh\") pod \"nova-cell0-cell-mapping-bxnmh\" (UID: \"7fd749d1-c0e5-4462-a7d0-62586902c0b7\") " pod="openstack/nova-cell0-cell-mapping-bxnmh" Dec 03 13:20:20 crc kubenswrapper[4986]: I1203 13:20:20.785513 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fd749d1-c0e5-4462-a7d0-62586902c0b7-scripts\") pod \"nova-cell0-cell-mapping-bxnmh\" (UID: \"7fd749d1-c0e5-4462-a7d0-62586902c0b7\") " pod="openstack/nova-cell0-cell-mapping-bxnmh" Dec 03 13:20:20 crc kubenswrapper[4986]: I1203 13:20:20.792645 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 13:20:20 crc kubenswrapper[4986]: I1203 13:20:20.793708 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 03 13:20:20 crc kubenswrapper[4986]: I1203 13:20:20.800258 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 03 13:20:20 crc kubenswrapper[4986]: I1203 13:20:20.805856 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 03 13:20:20 crc kubenswrapper[4986]: I1203 13:20:20.807233 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 03 13:20:20 crc kubenswrapper[4986]: I1203 13:20:20.810104 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 03 13:20:20 crc kubenswrapper[4986]: I1203 13:20:20.816939 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 13:20:20 crc kubenswrapper[4986]: I1203 13:20:20.826422 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 03 13:20:20 crc kubenswrapper[4986]: I1203 13:20:20.828323 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 03 13:20:20 crc kubenswrapper[4986]: I1203 13:20:20.832413 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 03 13:20:20 crc kubenswrapper[4986]: I1203 13:20:20.860383 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 03 13:20:20 crc kubenswrapper[4986]: I1203 13:20:20.879393 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 03 13:20:20 crc kubenswrapper[4986]: I1203 13:20:20.890189 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv5dh\" (UniqueName: \"kubernetes.io/projected/7fd749d1-c0e5-4462-a7d0-62586902c0b7-kube-api-access-cv5dh\") pod \"nova-cell0-cell-mapping-bxnmh\" (UID: \"7fd749d1-c0e5-4462-a7d0-62586902c0b7\") " pod="openstack/nova-cell0-cell-mapping-bxnmh" Dec 03 13:20:20 crc kubenswrapper[4986]: I1203 13:20:20.890242 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fd749d1-c0e5-4462-a7d0-62586902c0b7-scripts\") pod \"nova-cell0-cell-mapping-bxnmh\" (UID: \"7fd749d1-c0e5-4462-a7d0-62586902c0b7\") " pod="openstack/nova-cell0-cell-mapping-bxnmh" Dec 03 13:20:20 crc kubenswrapper[4986]: I1203 
13:20:20.890345 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fd749d1-c0e5-4462-a7d0-62586902c0b7-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-bxnmh\" (UID: \"7fd749d1-c0e5-4462-a7d0-62586902c0b7\") " pod="openstack/nova-cell0-cell-mapping-bxnmh" Dec 03 13:20:20 crc kubenswrapper[4986]: I1203 13:20:20.890392 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fd749d1-c0e5-4462-a7d0-62586902c0b7-config-data\") pod \"nova-cell0-cell-mapping-bxnmh\" (UID: \"7fd749d1-c0e5-4462-a7d0-62586902c0b7\") " pod="openstack/nova-cell0-cell-mapping-bxnmh" Dec 03 13:20:20 crc kubenswrapper[4986]: I1203 13:20:20.896255 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fd749d1-c0e5-4462-a7d0-62586902c0b7-scripts\") pod \"nova-cell0-cell-mapping-bxnmh\" (UID: \"7fd749d1-c0e5-4462-a7d0-62586902c0b7\") " pod="openstack/nova-cell0-cell-mapping-bxnmh" Dec 03 13:20:20 crc kubenswrapper[4986]: I1203 13:20:20.898800 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fd749d1-c0e5-4462-a7d0-62586902c0b7-config-data\") pod \"nova-cell0-cell-mapping-bxnmh\" (UID: \"7fd749d1-c0e5-4462-a7d0-62586902c0b7\") " pod="openstack/nova-cell0-cell-mapping-bxnmh" Dec 03 13:20:20 crc kubenswrapper[4986]: I1203 13:20:20.901273 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fd749d1-c0e5-4462-a7d0-62586902c0b7-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-bxnmh\" (UID: \"7fd749d1-c0e5-4462-a7d0-62586902c0b7\") " pod="openstack/nova-cell0-cell-mapping-bxnmh" Dec 03 13:20:20 crc kubenswrapper[4986]: I1203 13:20:20.919426 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-cv5dh\" (UniqueName: \"kubernetes.io/projected/7fd749d1-c0e5-4462-a7d0-62586902c0b7-kube-api-access-cv5dh\") pod \"nova-cell0-cell-mapping-bxnmh\" (UID: \"7fd749d1-c0e5-4462-a7d0-62586902c0b7\") " pod="openstack/nova-cell0-cell-mapping-bxnmh" Dec 03 13:20:20 crc kubenswrapper[4986]: I1203 13:20:20.919885 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-bxnmh" Dec 03 13:20:20 crc kubenswrapper[4986]: I1203 13:20:20.989037 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 03 13:20:20 crc kubenswrapper[4986]: I1203 13:20:20.995870 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 13:20:20 crc kubenswrapper[4986]: I1203 13:20:20.996339 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/217928c1-298a-447c-b601-a153b73b7596-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"217928c1-298a-447c-b601-a153b73b7596\") " pod="openstack/nova-api-0" Dec 03 13:20:20 crc kubenswrapper[4986]: I1203 13:20:20.996388 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42af691e-5613-44b4-b671-def2aca3dfa4-config-data\") pod \"nova-scheduler-0\" (UID: \"42af691e-5613-44b4-b671-def2aca3dfa4\") " pod="openstack/nova-scheduler-0" Dec 03 13:20:20 crc kubenswrapper[4986]: I1203 13:20:20.996446 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/217928c1-298a-447c-b601-a153b73b7596-logs\") pod \"nova-api-0\" (UID: \"217928c1-298a-447c-b601-a153b73b7596\") " pod="openstack/nova-api-0" Dec 03 13:20:20 crc kubenswrapper[4986]: I1203 13:20:20.996484 4986 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42af691e-5613-44b4-b671-def2aca3dfa4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"42af691e-5613-44b4-b671-def2aca3dfa4\") " pod="openstack/nova-scheduler-0" Dec 03 13:20:20 crc kubenswrapper[4986]: I1203 13:20:20.996509 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv9tx\" (UniqueName: \"kubernetes.io/projected/42af691e-5613-44b4-b671-def2aca3dfa4-kube-api-access-jv9tx\") pod \"nova-scheduler-0\" (UID: \"42af691e-5613-44b4-b671-def2aca3dfa4\") " pod="openstack/nova-scheduler-0" Dec 03 13:20:20 crc kubenswrapper[4986]: I1203 13:20:20.996538 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggpv7\" (UniqueName: \"kubernetes.io/projected/1a5ad857-b212-4ad2-8ddd-89ec8dce1846-kube-api-access-ggpv7\") pod \"nova-cell1-novncproxy-0\" (UID: \"1a5ad857-b212-4ad2-8ddd-89ec8dce1846\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 13:20:20 crc kubenswrapper[4986]: I1203 13:20:20.996554 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a5ad857-b212-4ad2-8ddd-89ec8dce1846-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1a5ad857-b212-4ad2-8ddd-89ec8dce1846\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 13:20:20 crc kubenswrapper[4986]: I1203 13:20:20.996600 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k2f2\" (UniqueName: \"kubernetes.io/projected/217928c1-298a-447c-b601-a153b73b7596-kube-api-access-5k2f2\") pod \"nova-api-0\" (UID: \"217928c1-298a-447c-b601-a153b73b7596\") " pod="openstack/nova-api-0" Dec 03 13:20:20 crc kubenswrapper[4986]: I1203 13:20:20.996653 4986 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/217928c1-298a-447c-b601-a153b73b7596-config-data\") pod \"nova-api-0\" (UID: \"217928c1-298a-447c-b601-a153b73b7596\") " pod="openstack/nova-api-0" Dec 03 13:20:20 crc kubenswrapper[4986]: I1203 13:20:20.996673 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a5ad857-b212-4ad2-8ddd-89ec8dce1846-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1a5ad857-b212-4ad2-8ddd-89ec8dce1846\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.004945 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.019065 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.058128 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-447fh"] Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.060004 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-447fh" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.071890 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-447fh"] Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.097965 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6bba352c-934e-4070-9722-dc0567a2e0be-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-447fh\" (UID: \"6bba352c-934e-4070-9722-dc0567a2e0be\") " pod="openstack/dnsmasq-dns-845d6d6f59-447fh" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.098017 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/217928c1-298a-447c-b601-a153b73b7596-config-data\") pod \"nova-api-0\" (UID: \"217928c1-298a-447c-b601-a153b73b7596\") " pod="openstack/nova-api-0" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.098038 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6bba352c-934e-4070-9722-dc0567a2e0be-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-447fh\" (UID: \"6bba352c-934e-4070-9722-dc0567a2e0be\") " pod="openstack/dnsmasq-dns-845d6d6f59-447fh" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.098058 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a5ad857-b212-4ad2-8ddd-89ec8dce1846-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1a5ad857-b212-4ad2-8ddd-89ec8dce1846\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.098090 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/217928c1-298a-447c-b601-a153b73b7596-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"217928c1-298a-447c-b601-a153b73b7596\") " pod="openstack/nova-api-0" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.098106 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/560efe6d-f412-43d3-9273-525879183c06-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"560efe6d-f412-43d3-9273-525879183c06\") " pod="openstack/nova-metadata-0" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.098127 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6bba352c-934e-4070-9722-dc0567a2e0be-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-447fh\" (UID: \"6bba352c-934e-4070-9722-dc0567a2e0be\") " pod="openstack/dnsmasq-dns-845d6d6f59-447fh" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.098146 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42af691e-5613-44b4-b671-def2aca3dfa4-config-data\") pod \"nova-scheduler-0\" (UID: \"42af691e-5613-44b4-b671-def2aca3dfa4\") " pod="openstack/nova-scheduler-0" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.098170 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bba352c-934e-4070-9722-dc0567a2e0be-config\") pod \"dnsmasq-dns-845d6d6f59-447fh\" (UID: \"6bba352c-934e-4070-9722-dc0567a2e0be\") " pod="openstack/dnsmasq-dns-845d6d6f59-447fh" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.098189 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/560efe6d-f412-43d3-9273-525879183c06-logs\") pod 
\"nova-metadata-0\" (UID: \"560efe6d-f412-43d3-9273-525879183c06\") " pod="openstack/nova-metadata-0" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.098209 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsxhd\" (UniqueName: \"kubernetes.io/projected/6bba352c-934e-4070-9722-dc0567a2e0be-kube-api-access-dsxhd\") pod \"dnsmasq-dns-845d6d6f59-447fh\" (UID: \"6bba352c-934e-4070-9722-dc0567a2e0be\") " pod="openstack/dnsmasq-dns-845d6d6f59-447fh" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.098230 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/217928c1-298a-447c-b601-a153b73b7596-logs\") pod \"nova-api-0\" (UID: \"217928c1-298a-447c-b601-a153b73b7596\") " pod="openstack/nova-api-0" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.098259 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42af691e-5613-44b4-b671-def2aca3dfa4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"42af691e-5613-44b4-b671-def2aca3dfa4\") " pod="openstack/nova-scheduler-0" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.098294 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwpt4\" (UniqueName: \"kubernetes.io/projected/560efe6d-f412-43d3-9273-525879183c06-kube-api-access-gwpt4\") pod \"nova-metadata-0\" (UID: \"560efe6d-f412-43d3-9273-525879183c06\") " pod="openstack/nova-metadata-0" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.098314 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jv9tx\" (UniqueName: \"kubernetes.io/projected/42af691e-5613-44b4-b671-def2aca3dfa4-kube-api-access-jv9tx\") pod \"nova-scheduler-0\" (UID: \"42af691e-5613-44b4-b671-def2aca3dfa4\") " 
pod="openstack/nova-scheduler-0" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.098332 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/560efe6d-f412-43d3-9273-525879183c06-config-data\") pod \"nova-metadata-0\" (UID: \"560efe6d-f412-43d3-9273-525879183c06\") " pod="openstack/nova-metadata-0" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.098355 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggpv7\" (UniqueName: \"kubernetes.io/projected/1a5ad857-b212-4ad2-8ddd-89ec8dce1846-kube-api-access-ggpv7\") pod \"nova-cell1-novncproxy-0\" (UID: \"1a5ad857-b212-4ad2-8ddd-89ec8dce1846\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.098371 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a5ad857-b212-4ad2-8ddd-89ec8dce1846-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1a5ad857-b212-4ad2-8ddd-89ec8dce1846\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.098396 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bba352c-934e-4070-9722-dc0567a2e0be-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-447fh\" (UID: \"6bba352c-934e-4070-9722-dc0567a2e0be\") " pod="openstack/dnsmasq-dns-845d6d6f59-447fh" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.098423 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5k2f2\" (UniqueName: \"kubernetes.io/projected/217928c1-298a-447c-b601-a153b73b7596-kube-api-access-5k2f2\") pod \"nova-api-0\" (UID: \"217928c1-298a-447c-b601-a153b73b7596\") " pod="openstack/nova-api-0" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 
13:20:21.101164 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/217928c1-298a-447c-b601-a153b73b7596-logs\") pod \"nova-api-0\" (UID: \"217928c1-298a-447c-b601-a153b73b7596\") " pod="openstack/nova-api-0" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.103755 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42af691e-5613-44b4-b671-def2aca3dfa4-config-data\") pod \"nova-scheduler-0\" (UID: \"42af691e-5613-44b4-b671-def2aca3dfa4\") " pod="openstack/nova-scheduler-0" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.117222 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a5ad857-b212-4ad2-8ddd-89ec8dce1846-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1a5ad857-b212-4ad2-8ddd-89ec8dce1846\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.117667 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42af691e-5613-44b4-b671-def2aca3dfa4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"42af691e-5613-44b4-b671-def2aca3dfa4\") " pod="openstack/nova-scheduler-0" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.117701 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a5ad857-b212-4ad2-8ddd-89ec8dce1846-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1a5ad857-b212-4ad2-8ddd-89ec8dce1846\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.121934 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/217928c1-298a-447c-b601-a153b73b7596-combined-ca-bundle\") pod \"nova-api-0\" 
(UID: \"217928c1-298a-447c-b601-a153b73b7596\") " pod="openstack/nova-api-0" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.122696 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/217928c1-298a-447c-b601-a153b73b7596-config-data\") pod \"nova-api-0\" (UID: \"217928c1-298a-447c-b601-a153b73b7596\") " pod="openstack/nova-api-0" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.130580 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5k2f2\" (UniqueName: \"kubernetes.io/projected/217928c1-298a-447c-b601-a153b73b7596-kube-api-access-5k2f2\") pod \"nova-api-0\" (UID: \"217928c1-298a-447c-b601-a153b73b7596\") " pod="openstack/nova-api-0" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.131878 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jv9tx\" (UniqueName: \"kubernetes.io/projected/42af691e-5613-44b4-b671-def2aca3dfa4-kube-api-access-jv9tx\") pod \"nova-scheduler-0\" (UID: \"42af691e-5613-44b4-b671-def2aca3dfa4\") " pod="openstack/nova-scheduler-0" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.134999 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggpv7\" (UniqueName: \"kubernetes.io/projected/1a5ad857-b212-4ad2-8ddd-89ec8dce1846-kube-api-access-ggpv7\") pod \"nova-cell1-novncproxy-0\" (UID: \"1a5ad857-b212-4ad2-8ddd-89ec8dce1846\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.143560 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.201590 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6bba352c-934e-4070-9722-dc0567a2e0be-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-447fh\" (UID: \"6bba352c-934e-4070-9722-dc0567a2e0be\") " pod="openstack/dnsmasq-dns-845d6d6f59-447fh" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.201627 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6bba352c-934e-4070-9722-dc0567a2e0be-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-447fh\" (UID: \"6bba352c-934e-4070-9722-dc0567a2e0be\") " pod="openstack/dnsmasq-dns-845d6d6f59-447fh" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.201664 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/560efe6d-f412-43d3-9273-525879183c06-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"560efe6d-f412-43d3-9273-525879183c06\") " pod="openstack/nova-metadata-0" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.201684 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6bba352c-934e-4070-9722-dc0567a2e0be-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-447fh\" (UID: \"6bba352c-934e-4070-9722-dc0567a2e0be\") " pod="openstack/dnsmasq-dns-845d6d6f59-447fh" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.201709 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bba352c-934e-4070-9722-dc0567a2e0be-config\") pod \"dnsmasq-dns-845d6d6f59-447fh\" (UID: \"6bba352c-934e-4070-9722-dc0567a2e0be\") " pod="openstack/dnsmasq-dns-845d6d6f59-447fh" Dec 03 13:20:21 crc 
kubenswrapper[4986]: I1203 13:20:21.201729 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/560efe6d-f412-43d3-9273-525879183c06-logs\") pod \"nova-metadata-0\" (UID: \"560efe6d-f412-43d3-9273-525879183c06\") " pod="openstack/nova-metadata-0" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.201751 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsxhd\" (UniqueName: \"kubernetes.io/projected/6bba352c-934e-4070-9722-dc0567a2e0be-kube-api-access-dsxhd\") pod \"dnsmasq-dns-845d6d6f59-447fh\" (UID: \"6bba352c-934e-4070-9722-dc0567a2e0be\") " pod="openstack/dnsmasq-dns-845d6d6f59-447fh" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.201786 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwpt4\" (UniqueName: \"kubernetes.io/projected/560efe6d-f412-43d3-9273-525879183c06-kube-api-access-gwpt4\") pod \"nova-metadata-0\" (UID: \"560efe6d-f412-43d3-9273-525879183c06\") " pod="openstack/nova-metadata-0" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.201806 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/560efe6d-f412-43d3-9273-525879183c06-config-data\") pod \"nova-metadata-0\" (UID: \"560efe6d-f412-43d3-9273-525879183c06\") " pod="openstack/nova-metadata-0" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.201839 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bba352c-934e-4070-9722-dc0567a2e0be-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-447fh\" (UID: \"6bba352c-934e-4070-9722-dc0567a2e0be\") " pod="openstack/dnsmasq-dns-845d6d6f59-447fh" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.202712 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/6bba352c-934e-4070-9722-dc0567a2e0be-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-447fh\" (UID: \"6bba352c-934e-4070-9722-dc0567a2e0be\") " pod="openstack/dnsmasq-dns-845d6d6f59-447fh" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.203661 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bba352c-934e-4070-9722-dc0567a2e0be-config\") pod \"dnsmasq-dns-845d6d6f59-447fh\" (UID: \"6bba352c-934e-4070-9722-dc0567a2e0be\") " pod="openstack/dnsmasq-dns-845d6d6f59-447fh" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.203936 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6bba352c-934e-4070-9722-dc0567a2e0be-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-447fh\" (UID: \"6bba352c-934e-4070-9722-dc0567a2e0be\") " pod="openstack/dnsmasq-dns-845d6d6f59-447fh" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.204466 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/560efe6d-f412-43d3-9273-525879183c06-logs\") pod \"nova-metadata-0\" (UID: \"560efe6d-f412-43d3-9273-525879183c06\") " pod="openstack/nova-metadata-0" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.204614 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6bba352c-934e-4070-9722-dc0567a2e0be-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-447fh\" (UID: \"6bba352c-934e-4070-9722-dc0567a2e0be\") " pod="openstack/dnsmasq-dns-845d6d6f59-447fh" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.205111 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6bba352c-934e-4070-9722-dc0567a2e0be-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-447fh\" (UID: 
\"6bba352c-934e-4070-9722-dc0567a2e0be\") " pod="openstack/dnsmasq-dns-845d6d6f59-447fh" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.207975 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/560efe6d-f412-43d3-9273-525879183c06-config-data\") pod \"nova-metadata-0\" (UID: \"560efe6d-f412-43d3-9273-525879183c06\") " pod="openstack/nova-metadata-0" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.208452 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/560efe6d-f412-43d3-9273-525879183c06-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"560efe6d-f412-43d3-9273-525879183c06\") " pod="openstack/nova-metadata-0" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.224141 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsxhd\" (UniqueName: \"kubernetes.io/projected/6bba352c-934e-4070-9722-dc0567a2e0be-kube-api-access-dsxhd\") pod \"dnsmasq-dns-845d6d6f59-447fh\" (UID: \"6bba352c-934e-4070-9722-dc0567a2e0be\") " pod="openstack/dnsmasq-dns-845d6d6f59-447fh" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.225665 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwpt4\" (UniqueName: \"kubernetes.io/projected/560efe6d-f412-43d3-9273-525879183c06-kube-api-access-gwpt4\") pod \"nova-metadata-0\" (UID: \"560efe6d-f412-43d3-9273-525879183c06\") " pod="openstack/nova-metadata-0" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.409950 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.421840 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.431880 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.449951 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-447fh" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.537720 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-bxnmh"] Dec 03 13:20:21 crc kubenswrapper[4986]: W1203 13:20:21.547508 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7fd749d1_c0e5_4462_a7d0_62586902c0b7.slice/crio-a6a2d462eea4c1003a0ea30e7bd402891dbb12d10e541a868e2a5b0e726928b4 WatchSource:0}: Error finding container a6a2d462eea4c1003a0ea30e7bd402891dbb12d10e541a868e2a5b0e726928b4: Status 404 returned error can't find the container with id a6a2d462eea4c1003a0ea30e7bd402891dbb12d10e541a868e2a5b0e726928b4 Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.616552 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5z7tp"] Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.617648 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-5z7tp" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.622473 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.622803 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.638622 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3-config-data\") pod \"nova-cell1-conductor-db-sync-5z7tp\" (UID: \"5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3\") " pod="openstack/nova-cell1-conductor-db-sync-5z7tp" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.638763 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3-scripts\") pod \"nova-cell1-conductor-db-sync-5z7tp\" (UID: \"5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3\") " pod="openstack/nova-cell1-conductor-db-sync-5z7tp" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.638822 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-5z7tp\" (UID: \"5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3\") " pod="openstack/nova-cell1-conductor-db-sync-5z7tp" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.639883 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpk6l\" (UniqueName: \"kubernetes.io/projected/5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3-kube-api-access-gpk6l\") pod \"nova-cell1-conductor-db-sync-5z7tp\" 
(UID: \"5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3\") " pod="openstack/nova-cell1-conductor-db-sync-5z7tp" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.694792 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5z7tp"] Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.744017 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.746364 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3-scripts\") pod \"nova-cell1-conductor-db-sync-5z7tp\" (UID: \"5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3\") " pod="openstack/nova-cell1-conductor-db-sync-5z7tp" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.747304 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-5z7tp\" (UID: \"5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3\") " pod="openstack/nova-cell1-conductor-db-sync-5z7tp" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.747601 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpk6l\" (UniqueName: \"kubernetes.io/projected/5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3-kube-api-access-gpk6l\") pod \"nova-cell1-conductor-db-sync-5z7tp\" (UID: \"5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3\") " pod="openstack/nova-cell1-conductor-db-sync-5z7tp" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.747861 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3-config-data\") pod \"nova-cell1-conductor-db-sync-5z7tp\" (UID: \"5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3\") " 
pod="openstack/nova-cell1-conductor-db-sync-5z7tp" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.754153 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3-scripts\") pod \"nova-cell1-conductor-db-sync-5z7tp\" (UID: \"5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3\") " pod="openstack/nova-cell1-conductor-db-sync-5z7tp" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.754559 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3-config-data\") pod \"nova-cell1-conductor-db-sync-5z7tp\" (UID: \"5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3\") " pod="openstack/nova-cell1-conductor-db-sync-5z7tp" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.754649 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-5z7tp\" (UID: \"5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3\") " pod="openstack/nova-cell1-conductor-db-sync-5z7tp" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.767575 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"217928c1-298a-447c-b601-a153b73b7596","Type":"ContainerStarted","Data":"75254b60503df0953b0d42b916c6ef25a616808bdf5c50667b50854dd9f7abf6"} Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.767955 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpk6l\" (UniqueName: \"kubernetes.io/projected/5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3-kube-api-access-gpk6l\") pod \"nova-cell1-conductor-db-sync-5z7tp\" (UID: \"5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3\") " pod="openstack/nova-cell1-conductor-db-sync-5z7tp" Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.769253 4986 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-bxnmh" event={"ID":"7fd749d1-c0e5-4462-a7d0-62586902c0b7","Type":"ContainerStarted","Data":"a6a2d462eea4c1003a0ea30e7bd402891dbb12d10e541a868e2a5b0e726928b4"} Dec 03 13:20:21 crc kubenswrapper[4986]: W1203 13:20:21.853210 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42af691e_5613_44b4_b671_def2aca3dfa4.slice/crio-406c093524aff064b52dc7477cbeba42d9004bdf715ff6b259e6150469a3088d WatchSource:0}: Error finding container 406c093524aff064b52dc7477cbeba42d9004bdf715ff6b259e6150469a3088d: Status 404 returned error can't find the container with id 406c093524aff064b52dc7477cbeba42d9004bdf715ff6b259e6150469a3088d Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.859978 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 13:20:21 crc kubenswrapper[4986]: I1203 13:20:21.994987 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-5z7tp" Dec 03 13:20:22 crc kubenswrapper[4986]: I1203 13:20:22.034682 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 03 13:20:22 crc kubenswrapper[4986]: W1203 13:20:22.063043 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a5ad857_b212_4ad2_8ddd_89ec8dce1846.slice/crio-16259bd08a3c4235d9f314c5b9263e0fc83e039ed7502bc9617aa8fa49e2490e WatchSource:0}: Error finding container 16259bd08a3c4235d9f314c5b9263e0fc83e039ed7502bc9617aa8fa49e2490e: Status 404 returned error can't find the container with id 16259bd08a3c4235d9f314c5b9263e0fc83e039ed7502bc9617aa8fa49e2490e Dec 03 13:20:22 crc kubenswrapper[4986]: I1203 13:20:22.113364 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 13:20:22 crc kubenswrapper[4986]: I1203 13:20:22.132349 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-447fh"] Dec 03 13:20:22 crc kubenswrapper[4986]: I1203 13:20:22.554187 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5z7tp"] Dec 03 13:20:22 crc kubenswrapper[4986]: W1203 13:20:22.562608 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e6b4f09_32ab_4e2d_95e1_69b0abd29fe3.slice/crio-474660fb131db434128c4853b99df7fa33dc0e1729a11e84ed0ea4b62481945d WatchSource:0}: Error finding container 474660fb131db434128c4853b99df7fa33dc0e1729a11e84ed0ea4b62481945d: Status 404 returned error can't find the container with id 474660fb131db434128c4853b99df7fa33dc0e1729a11e84ed0ea4b62481945d Dec 03 13:20:22 crc kubenswrapper[4986]: I1203 13:20:22.791159 4986 generic.go:334] "Generic (PLEG): container finished" podID="6bba352c-934e-4070-9722-dc0567a2e0be" 
containerID="411e5b5f5ae9f5af159c1022ee5b1e51476f17b63791e65ee342abcc8d776ff5" exitCode=0 Dec 03 13:20:22 crc kubenswrapper[4986]: I1203 13:20:22.792940 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-447fh" event={"ID":"6bba352c-934e-4070-9722-dc0567a2e0be","Type":"ContainerDied","Data":"411e5b5f5ae9f5af159c1022ee5b1e51476f17b63791e65ee342abcc8d776ff5"} Dec 03 13:20:22 crc kubenswrapper[4986]: I1203 13:20:22.792978 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-447fh" event={"ID":"6bba352c-934e-4070-9722-dc0567a2e0be","Type":"ContainerStarted","Data":"cec6aa46fa92d85ed782f356d349bed71b20b27fd19c53d4faf4b212497fb404"} Dec 03 13:20:22 crc kubenswrapper[4986]: I1203 13:20:22.799472 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"560efe6d-f412-43d3-9273-525879183c06","Type":"ContainerStarted","Data":"5c5648b75c8e3fb0004fcd74855f025bdeb4ce6faa2238c253b654cdf1068e79"} Dec 03 13:20:22 crc kubenswrapper[4986]: I1203 13:20:22.801605 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1a5ad857-b212-4ad2-8ddd-89ec8dce1846","Type":"ContainerStarted","Data":"16259bd08a3c4235d9f314c5b9263e0fc83e039ed7502bc9617aa8fa49e2490e"} Dec 03 13:20:22 crc kubenswrapper[4986]: I1203 13:20:22.802817 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"42af691e-5613-44b4-b671-def2aca3dfa4","Type":"ContainerStarted","Data":"406c093524aff064b52dc7477cbeba42d9004bdf715ff6b259e6150469a3088d"} Dec 03 13:20:22 crc kubenswrapper[4986]: I1203 13:20:22.804952 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-bxnmh" event={"ID":"7fd749d1-c0e5-4462-a7d0-62586902c0b7","Type":"ContainerStarted","Data":"39f52a6451ce714e952e4e69311ae115a7b27864d53b175c0ff574328f8ae3fb"} Dec 03 13:20:22 crc kubenswrapper[4986]: I1203 
13:20:22.807124 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-5z7tp" event={"ID":"5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3","Type":"ContainerStarted","Data":"474660fb131db434128c4853b99df7fa33dc0e1729a11e84ed0ea4b62481945d"} Dec 03 13:20:22 crc kubenswrapper[4986]: I1203 13:20:22.851202 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-5z7tp" podStartSLOduration=1.8511805940000001 podStartE2EDuration="1.851180594s" podCreationTimestamp="2025-12-03 13:20:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:20:22.839423737 +0000 UTC m=+1482.305854928" watchObservedRunningTime="2025-12-03 13:20:22.851180594 +0000 UTC m=+1482.317611785" Dec 03 13:20:22 crc kubenswrapper[4986]: I1203 13:20:22.862054 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-bxnmh" podStartSLOduration=2.862036847 podStartE2EDuration="2.862036847s" podCreationTimestamp="2025-12-03 13:20:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:20:22.859095568 +0000 UTC m=+1482.325526759" watchObservedRunningTime="2025-12-03 13:20:22.862036847 +0000 UTC m=+1482.328468038" Dec 03 13:20:23 crc kubenswrapper[4986]: I1203 13:20:23.817198 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-5z7tp" event={"ID":"5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3","Type":"ContainerStarted","Data":"e9e5afb51ce3239b75fca2b2016d9fd12ca14cb8e23518ccf5f77e5f63265b67"} Dec 03 13:20:23 crc kubenswrapper[4986]: I1203 13:20:23.820703 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-447fh" 
event={"ID":"6bba352c-934e-4070-9722-dc0567a2e0be","Type":"ContainerStarted","Data":"372be5d3e87cafb688f501de584b3f1a3d2af7b2678db6d98a88e494cfb5de0b"} Dec 03 13:20:23 crc kubenswrapper[4986]: I1203 13:20:23.820760 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-845d6d6f59-447fh" Dec 03 13:20:23 crc kubenswrapper[4986]: I1203 13:20:23.845125 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-845d6d6f59-447fh" podStartSLOduration=2.845061902 podStartE2EDuration="2.845061902s" podCreationTimestamp="2025-12-03 13:20:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:20:23.838814103 +0000 UTC m=+1483.305245294" watchObservedRunningTime="2025-12-03 13:20:23.845061902 +0000 UTC m=+1483.311493093" Dec 03 13:20:24 crc kubenswrapper[4986]: I1203 13:20:24.390383 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 13:20:24 crc kubenswrapper[4986]: I1203 13:20:24.415530 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 03 13:20:25 crc kubenswrapper[4986]: I1203 13:20:25.842623 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"560efe6d-f412-43d3-9273-525879183c06","Type":"ContainerStarted","Data":"9991da5e68de8fbe28a7f4d78c36863d25d032d8a3b32c8ad47863a4991e2714"} Dec 03 13:20:25 crc kubenswrapper[4986]: I1203 13:20:25.845192 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="1a5ad857-b212-4ad2-8ddd-89ec8dce1846" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://7cc9f81c021b2810e432d03ca3ef05cd4b9c0f72bbb4476e445fca6e97098b27" gracePeriod=30 Dec 03 13:20:25 crc kubenswrapper[4986]: I1203 13:20:25.845304 4986 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1a5ad857-b212-4ad2-8ddd-89ec8dce1846","Type":"ContainerStarted","Data":"7cc9f81c021b2810e432d03ca3ef05cd4b9c0f72bbb4476e445fca6e97098b27"} Dec 03 13:20:25 crc kubenswrapper[4986]: I1203 13:20:25.857828 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"42af691e-5613-44b4-b671-def2aca3dfa4","Type":"ContainerStarted","Data":"e7e127cd7a8ce13540b1b6ee30de93233d9de7af7353c9c437cc58c951d5ca2e"} Dec 03 13:20:25 crc kubenswrapper[4986]: I1203 13:20:25.859623 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"217928c1-298a-447c-b601-a153b73b7596","Type":"ContainerStarted","Data":"3029112c7e97cac2b4783557a4faf7526d6cdec96480dfda248afbb8c48999a5"} Dec 03 13:20:25 crc kubenswrapper[4986]: I1203 13:20:25.871295 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.72798659 podStartE2EDuration="5.871263407s" podCreationTimestamp="2025-12-03 13:20:20 +0000 UTC" firstStartedPulling="2025-12-03 13:20:22.065602516 +0000 UTC m=+1481.532033707" lastFinishedPulling="2025-12-03 13:20:25.208879333 +0000 UTC m=+1484.675310524" observedRunningTime="2025-12-03 13:20:25.867609468 +0000 UTC m=+1485.334040659" watchObservedRunningTime="2025-12-03 13:20:25.871263407 +0000 UTC m=+1485.337694598" Dec 03 13:20:25 crc kubenswrapper[4986]: I1203 13:20:25.887546 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.574740825 podStartE2EDuration="5.887527816s" podCreationTimestamp="2025-12-03 13:20:20 +0000 UTC" firstStartedPulling="2025-12-03 13:20:21.856939266 +0000 UTC m=+1481.323370457" lastFinishedPulling="2025-12-03 13:20:25.169726237 +0000 UTC m=+1484.636157448" observedRunningTime="2025-12-03 13:20:25.885920363 +0000 UTC m=+1485.352351584" watchObservedRunningTime="2025-12-03 
13:20:25.887527816 +0000 UTC m=+1485.353959007" Dec 03 13:20:26 crc kubenswrapper[4986]: I1203 13:20:26.411073 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 03 13:20:26 crc kubenswrapper[4986]: I1203 13:20:26.422320 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 03 13:20:26 crc kubenswrapper[4986]: I1203 13:20:26.872073 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"217928c1-298a-447c-b601-a153b73b7596","Type":"ContainerStarted","Data":"5f2078454d1c30138dc752a530fa12392c4d7036699f5fe710f69c968423b2f2"} Dec 03 13:20:26 crc kubenswrapper[4986]: I1203 13:20:26.875818 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"560efe6d-f412-43d3-9273-525879183c06","Type":"ContainerStarted","Data":"453299fe3ae3c31b71ff1c5017d294393844ee068a2e9931b609667c9153e05b"} Dec 03 13:20:26 crc kubenswrapper[4986]: I1203 13:20:26.876232 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="560efe6d-f412-43d3-9273-525879183c06" containerName="nova-metadata-log" containerID="cri-o://9991da5e68de8fbe28a7f4d78c36863d25d032d8a3b32c8ad47863a4991e2714" gracePeriod=30 Dec 03 13:20:26 crc kubenswrapper[4986]: I1203 13:20:26.876294 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="560efe6d-f412-43d3-9273-525879183c06" containerName="nova-metadata-metadata" containerID="cri-o://453299fe3ae3c31b71ff1c5017d294393844ee068a2e9931b609667c9153e05b" gracePeriod=30 Dec 03 13:20:26 crc kubenswrapper[4986]: I1203 13:20:26.904275 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.397991229 podStartE2EDuration="6.904254111s" podCreationTimestamp="2025-12-03 13:20:20 +0000 UTC" 
firstStartedPulling="2025-12-03 13:20:21.698152681 +0000 UTC m=+1481.164583872" lastFinishedPulling="2025-12-03 13:20:25.204415563 +0000 UTC m=+1484.670846754" observedRunningTime="2025-12-03 13:20:26.896856681 +0000 UTC m=+1486.363287872" watchObservedRunningTime="2025-12-03 13:20:26.904254111 +0000 UTC m=+1486.370685302" Dec 03 13:20:26 crc kubenswrapper[4986]: I1203 13:20:26.923234 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.840640452 podStartE2EDuration="6.923215772s" podCreationTimestamp="2025-12-03 13:20:20 +0000 UTC" firstStartedPulling="2025-12-03 13:20:22.123170399 +0000 UTC m=+1481.589601590" lastFinishedPulling="2025-12-03 13:20:25.205745719 +0000 UTC m=+1484.672176910" observedRunningTime="2025-12-03 13:20:26.922265317 +0000 UTC m=+1486.388696508" watchObservedRunningTime="2025-12-03 13:20:26.923215772 +0000 UTC m=+1486.389646973" Dec 03 13:20:27 crc kubenswrapper[4986]: I1203 13:20:27.446117 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 13:20:27 crc kubenswrapper[4986]: I1203 13:20:27.477169 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/560efe6d-f412-43d3-9273-525879183c06-logs\") pod \"560efe6d-f412-43d3-9273-525879183c06\" (UID: \"560efe6d-f412-43d3-9273-525879183c06\") " Dec 03 13:20:27 crc kubenswrapper[4986]: I1203 13:20:27.477245 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/560efe6d-f412-43d3-9273-525879183c06-config-data\") pod \"560efe6d-f412-43d3-9273-525879183c06\" (UID: \"560efe6d-f412-43d3-9273-525879183c06\") " Dec 03 13:20:27 crc kubenswrapper[4986]: I1203 13:20:27.477305 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwpt4\" (UniqueName: \"kubernetes.io/projected/560efe6d-f412-43d3-9273-525879183c06-kube-api-access-gwpt4\") pod \"560efe6d-f412-43d3-9273-525879183c06\" (UID: \"560efe6d-f412-43d3-9273-525879183c06\") " Dec 03 13:20:27 crc kubenswrapper[4986]: I1203 13:20:27.477384 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/560efe6d-f412-43d3-9273-525879183c06-combined-ca-bundle\") pod \"560efe6d-f412-43d3-9273-525879183c06\" (UID: \"560efe6d-f412-43d3-9273-525879183c06\") " Dec 03 13:20:27 crc kubenswrapper[4986]: I1203 13:20:27.477643 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/560efe6d-f412-43d3-9273-525879183c06-logs" (OuterVolumeSpecName: "logs") pod "560efe6d-f412-43d3-9273-525879183c06" (UID: "560efe6d-f412-43d3-9273-525879183c06"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:20:27 crc kubenswrapper[4986]: I1203 13:20:27.477840 4986 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/560efe6d-f412-43d3-9273-525879183c06-logs\") on node \"crc\" DevicePath \"\"" Dec 03 13:20:27 crc kubenswrapper[4986]: I1203 13:20:27.490047 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/560efe6d-f412-43d3-9273-525879183c06-kube-api-access-gwpt4" (OuterVolumeSpecName: "kube-api-access-gwpt4") pod "560efe6d-f412-43d3-9273-525879183c06" (UID: "560efe6d-f412-43d3-9273-525879183c06"). InnerVolumeSpecName "kube-api-access-gwpt4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:20:27 crc kubenswrapper[4986]: I1203 13:20:27.528474 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/560efe6d-f412-43d3-9273-525879183c06-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "560efe6d-f412-43d3-9273-525879183c06" (UID: "560efe6d-f412-43d3-9273-525879183c06"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:20:27 crc kubenswrapper[4986]: I1203 13:20:27.545539 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/560efe6d-f412-43d3-9273-525879183c06-config-data" (OuterVolumeSpecName: "config-data") pod "560efe6d-f412-43d3-9273-525879183c06" (UID: "560efe6d-f412-43d3-9273-525879183c06"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:20:27 crc kubenswrapper[4986]: I1203 13:20:27.579827 4986 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/560efe6d-f412-43d3-9273-525879183c06-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 13:20:27 crc kubenswrapper[4986]: I1203 13:20:27.579859 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwpt4\" (UniqueName: \"kubernetes.io/projected/560efe6d-f412-43d3-9273-525879183c06-kube-api-access-gwpt4\") on node \"crc\" DevicePath \"\"" Dec 03 13:20:27 crc kubenswrapper[4986]: I1203 13:20:27.579874 4986 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/560efe6d-f412-43d3-9273-525879183c06-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 13:20:27 crc kubenswrapper[4986]: I1203 13:20:27.886334 4986 generic.go:334] "Generic (PLEG): container finished" podID="560efe6d-f412-43d3-9273-525879183c06" containerID="453299fe3ae3c31b71ff1c5017d294393844ee068a2e9931b609667c9153e05b" exitCode=0 Dec 03 13:20:27 crc kubenswrapper[4986]: I1203 13:20:27.886608 4986 generic.go:334] "Generic (PLEG): container finished" podID="560efe6d-f412-43d3-9273-525879183c06" containerID="9991da5e68de8fbe28a7f4d78c36863d25d032d8a3b32c8ad47863a4991e2714" exitCode=143 Dec 03 13:20:27 crc kubenswrapper[4986]: I1203 13:20:27.887523 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 13:20:27 crc kubenswrapper[4986]: I1203 13:20:27.888912 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"560efe6d-f412-43d3-9273-525879183c06","Type":"ContainerDied","Data":"453299fe3ae3c31b71ff1c5017d294393844ee068a2e9931b609667c9153e05b"} Dec 03 13:20:27 crc kubenswrapper[4986]: I1203 13:20:27.888961 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"560efe6d-f412-43d3-9273-525879183c06","Type":"ContainerDied","Data":"9991da5e68de8fbe28a7f4d78c36863d25d032d8a3b32c8ad47863a4991e2714"} Dec 03 13:20:27 crc kubenswrapper[4986]: I1203 13:20:27.888972 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"560efe6d-f412-43d3-9273-525879183c06","Type":"ContainerDied","Data":"5c5648b75c8e3fb0004fcd74855f025bdeb4ce6faa2238c253b654cdf1068e79"} Dec 03 13:20:27 crc kubenswrapper[4986]: I1203 13:20:27.888987 4986 scope.go:117] "RemoveContainer" containerID="453299fe3ae3c31b71ff1c5017d294393844ee068a2e9931b609667c9153e05b" Dec 03 13:20:27 crc kubenswrapper[4986]: I1203 13:20:27.919417 4986 scope.go:117] "RemoveContainer" containerID="9991da5e68de8fbe28a7f4d78c36863d25d032d8a3b32c8ad47863a4991e2714" Dec 03 13:20:27 crc kubenswrapper[4986]: I1203 13:20:27.924451 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 13:20:27 crc kubenswrapper[4986]: I1203 13:20:27.933418 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 13:20:27 crc kubenswrapper[4986]: I1203 13:20:27.942051 4986 scope.go:117] "RemoveContainer" containerID="453299fe3ae3c31b71ff1c5017d294393844ee068a2e9931b609667c9153e05b" Dec 03 13:20:27 crc kubenswrapper[4986]: E1203 13:20:27.943984 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"453299fe3ae3c31b71ff1c5017d294393844ee068a2e9931b609667c9153e05b\": container with ID starting with 453299fe3ae3c31b71ff1c5017d294393844ee068a2e9931b609667c9153e05b not found: ID does not exist" containerID="453299fe3ae3c31b71ff1c5017d294393844ee068a2e9931b609667c9153e05b" Dec 03 13:20:27 crc kubenswrapper[4986]: I1203 13:20:27.944026 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"453299fe3ae3c31b71ff1c5017d294393844ee068a2e9931b609667c9153e05b"} err="failed to get container status \"453299fe3ae3c31b71ff1c5017d294393844ee068a2e9931b609667c9153e05b\": rpc error: code = NotFound desc = could not find container \"453299fe3ae3c31b71ff1c5017d294393844ee068a2e9931b609667c9153e05b\": container with ID starting with 453299fe3ae3c31b71ff1c5017d294393844ee068a2e9931b609667c9153e05b not found: ID does not exist" Dec 03 13:20:27 crc kubenswrapper[4986]: I1203 13:20:27.944049 4986 scope.go:117] "RemoveContainer" containerID="9991da5e68de8fbe28a7f4d78c36863d25d032d8a3b32c8ad47863a4991e2714" Dec 03 13:20:27 crc kubenswrapper[4986]: E1203 13:20:27.945193 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9991da5e68de8fbe28a7f4d78c36863d25d032d8a3b32c8ad47863a4991e2714\": container with ID starting with 9991da5e68de8fbe28a7f4d78c36863d25d032d8a3b32c8ad47863a4991e2714 not found: ID does not exist" containerID="9991da5e68de8fbe28a7f4d78c36863d25d032d8a3b32c8ad47863a4991e2714" Dec 03 13:20:27 crc kubenswrapper[4986]: I1203 13:20:27.945220 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9991da5e68de8fbe28a7f4d78c36863d25d032d8a3b32c8ad47863a4991e2714"} err="failed to get container status \"9991da5e68de8fbe28a7f4d78c36863d25d032d8a3b32c8ad47863a4991e2714\": rpc error: code = NotFound desc = could not find container \"9991da5e68de8fbe28a7f4d78c36863d25d032d8a3b32c8ad47863a4991e2714\": container with ID 
starting with 9991da5e68de8fbe28a7f4d78c36863d25d032d8a3b32c8ad47863a4991e2714 not found: ID does not exist" Dec 03 13:20:27 crc kubenswrapper[4986]: I1203 13:20:27.945237 4986 scope.go:117] "RemoveContainer" containerID="453299fe3ae3c31b71ff1c5017d294393844ee068a2e9931b609667c9153e05b" Dec 03 13:20:27 crc kubenswrapper[4986]: I1203 13:20:27.945495 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"453299fe3ae3c31b71ff1c5017d294393844ee068a2e9931b609667c9153e05b"} err="failed to get container status \"453299fe3ae3c31b71ff1c5017d294393844ee068a2e9931b609667c9153e05b\": rpc error: code = NotFound desc = could not find container \"453299fe3ae3c31b71ff1c5017d294393844ee068a2e9931b609667c9153e05b\": container with ID starting with 453299fe3ae3c31b71ff1c5017d294393844ee068a2e9931b609667c9153e05b not found: ID does not exist" Dec 03 13:20:27 crc kubenswrapper[4986]: I1203 13:20:27.945535 4986 scope.go:117] "RemoveContainer" containerID="9991da5e68de8fbe28a7f4d78c36863d25d032d8a3b32c8ad47863a4991e2714" Dec 03 13:20:27 crc kubenswrapper[4986]: I1203 13:20:27.946013 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9991da5e68de8fbe28a7f4d78c36863d25d032d8a3b32c8ad47863a4991e2714"} err="failed to get container status \"9991da5e68de8fbe28a7f4d78c36863d25d032d8a3b32c8ad47863a4991e2714\": rpc error: code = NotFound desc = could not find container \"9991da5e68de8fbe28a7f4d78c36863d25d032d8a3b32c8ad47863a4991e2714\": container with ID starting with 9991da5e68de8fbe28a7f4d78c36863d25d032d8a3b32c8ad47863a4991e2714 not found: ID does not exist" Dec 03 13:20:27 crc kubenswrapper[4986]: I1203 13:20:27.961589 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 03 13:20:27 crc kubenswrapper[4986]: E1203 13:20:27.962157 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="560efe6d-f412-43d3-9273-525879183c06" 
containerName="nova-metadata-log" Dec 03 13:20:27 crc kubenswrapper[4986]: I1203 13:20:27.962220 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="560efe6d-f412-43d3-9273-525879183c06" containerName="nova-metadata-log" Dec 03 13:20:27 crc kubenswrapper[4986]: E1203 13:20:27.962329 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="560efe6d-f412-43d3-9273-525879183c06" containerName="nova-metadata-metadata" Dec 03 13:20:27 crc kubenswrapper[4986]: I1203 13:20:27.962383 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="560efe6d-f412-43d3-9273-525879183c06" containerName="nova-metadata-metadata" Dec 03 13:20:27 crc kubenswrapper[4986]: I1203 13:20:27.962592 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="560efe6d-f412-43d3-9273-525879183c06" containerName="nova-metadata-metadata" Dec 03 13:20:27 crc kubenswrapper[4986]: I1203 13:20:27.962677 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="560efe6d-f412-43d3-9273-525879183c06" containerName="nova-metadata-log" Dec 03 13:20:27 crc kubenswrapper[4986]: I1203 13:20:27.963694 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 13:20:27 crc kubenswrapper[4986]: I1203 13:20:27.967632 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 03 13:20:27 crc kubenswrapper[4986]: I1203 13:20:27.970247 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 13:20:27 crc kubenswrapper[4986]: I1203 13:20:27.973523 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 03 13:20:28 crc kubenswrapper[4986]: I1203 13:20:28.089262 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2fa86dd-b33a-48fe-a2bf-646498f0db5a-logs\") pod \"nova-metadata-0\" (UID: \"a2fa86dd-b33a-48fe-a2bf-646498f0db5a\") " pod="openstack/nova-metadata-0" Dec 03 13:20:28 crc kubenswrapper[4986]: I1203 13:20:28.089355 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqh7h\" (UniqueName: \"kubernetes.io/projected/a2fa86dd-b33a-48fe-a2bf-646498f0db5a-kube-api-access-bqh7h\") pod \"nova-metadata-0\" (UID: \"a2fa86dd-b33a-48fe-a2bf-646498f0db5a\") " pod="openstack/nova-metadata-0" Dec 03 13:20:28 crc kubenswrapper[4986]: I1203 13:20:28.089378 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2fa86dd-b33a-48fe-a2bf-646498f0db5a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a2fa86dd-b33a-48fe-a2bf-646498f0db5a\") " pod="openstack/nova-metadata-0" Dec 03 13:20:28 crc kubenswrapper[4986]: I1203 13:20:28.089396 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2fa86dd-b33a-48fe-a2bf-646498f0db5a-combined-ca-bundle\") pod 
\"nova-metadata-0\" (UID: \"a2fa86dd-b33a-48fe-a2bf-646498f0db5a\") " pod="openstack/nova-metadata-0" Dec 03 13:20:28 crc kubenswrapper[4986]: I1203 13:20:28.089455 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2fa86dd-b33a-48fe-a2bf-646498f0db5a-config-data\") pod \"nova-metadata-0\" (UID: \"a2fa86dd-b33a-48fe-a2bf-646498f0db5a\") " pod="openstack/nova-metadata-0" Dec 03 13:20:28 crc kubenswrapper[4986]: I1203 13:20:28.191175 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2fa86dd-b33a-48fe-a2bf-646498f0db5a-config-data\") pod \"nova-metadata-0\" (UID: \"a2fa86dd-b33a-48fe-a2bf-646498f0db5a\") " pod="openstack/nova-metadata-0" Dec 03 13:20:28 crc kubenswrapper[4986]: I1203 13:20:28.191350 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2fa86dd-b33a-48fe-a2bf-646498f0db5a-logs\") pod \"nova-metadata-0\" (UID: \"a2fa86dd-b33a-48fe-a2bf-646498f0db5a\") " pod="openstack/nova-metadata-0" Dec 03 13:20:28 crc kubenswrapper[4986]: I1203 13:20:28.191440 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqh7h\" (UniqueName: \"kubernetes.io/projected/a2fa86dd-b33a-48fe-a2bf-646498f0db5a-kube-api-access-bqh7h\") pod \"nova-metadata-0\" (UID: \"a2fa86dd-b33a-48fe-a2bf-646498f0db5a\") " pod="openstack/nova-metadata-0" Dec 03 13:20:28 crc kubenswrapper[4986]: I1203 13:20:28.191466 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2fa86dd-b33a-48fe-a2bf-646498f0db5a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a2fa86dd-b33a-48fe-a2bf-646498f0db5a\") " pod="openstack/nova-metadata-0" Dec 03 13:20:28 crc kubenswrapper[4986]: I1203 13:20:28.191489 4986 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2fa86dd-b33a-48fe-a2bf-646498f0db5a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a2fa86dd-b33a-48fe-a2bf-646498f0db5a\") " pod="openstack/nova-metadata-0" Dec 03 13:20:28 crc kubenswrapper[4986]: I1203 13:20:28.193089 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2fa86dd-b33a-48fe-a2bf-646498f0db5a-logs\") pod \"nova-metadata-0\" (UID: \"a2fa86dd-b33a-48fe-a2bf-646498f0db5a\") " pod="openstack/nova-metadata-0" Dec 03 13:20:28 crc kubenswrapper[4986]: I1203 13:20:28.196254 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2fa86dd-b33a-48fe-a2bf-646498f0db5a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a2fa86dd-b33a-48fe-a2bf-646498f0db5a\") " pod="openstack/nova-metadata-0" Dec 03 13:20:28 crc kubenswrapper[4986]: I1203 13:20:28.196617 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2fa86dd-b33a-48fe-a2bf-646498f0db5a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a2fa86dd-b33a-48fe-a2bf-646498f0db5a\") " pod="openstack/nova-metadata-0" Dec 03 13:20:28 crc kubenswrapper[4986]: I1203 13:20:28.198208 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2fa86dd-b33a-48fe-a2bf-646498f0db5a-config-data\") pod \"nova-metadata-0\" (UID: \"a2fa86dd-b33a-48fe-a2bf-646498f0db5a\") " pod="openstack/nova-metadata-0" Dec 03 13:20:28 crc kubenswrapper[4986]: I1203 13:20:28.220373 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqh7h\" (UniqueName: \"kubernetes.io/projected/a2fa86dd-b33a-48fe-a2bf-646498f0db5a-kube-api-access-bqh7h\") pod \"nova-metadata-0\" 
(UID: \"a2fa86dd-b33a-48fe-a2bf-646498f0db5a\") " pod="openstack/nova-metadata-0" Dec 03 13:20:28 crc kubenswrapper[4986]: I1203 13:20:28.285373 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 13:20:28 crc kubenswrapper[4986]: I1203 13:20:28.866327 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 13:20:28 crc kubenswrapper[4986]: W1203 13:20:28.869796 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2fa86dd_b33a_48fe_a2bf_646498f0db5a.slice/crio-0fde23e63ee5b42c284b7cfed00282432bffa59ebd0fb0bb2781b2ae5fa73527 WatchSource:0}: Error finding container 0fde23e63ee5b42c284b7cfed00282432bffa59ebd0fb0bb2781b2ae5fa73527: Status 404 returned error can't find the container with id 0fde23e63ee5b42c284b7cfed00282432bffa59ebd0fb0bb2781b2ae5fa73527 Dec 03 13:20:28 crc kubenswrapper[4986]: I1203 13:20:28.908954 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a2fa86dd-b33a-48fe-a2bf-646498f0db5a","Type":"ContainerStarted","Data":"0fde23e63ee5b42c284b7cfed00282432bffa59ebd0fb0bb2781b2ae5fa73527"} Dec 03 13:20:28 crc kubenswrapper[4986]: I1203 13:20:28.954524 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="560efe6d-f412-43d3-9273-525879183c06" path="/var/lib/kubelet/pods/560efe6d-f412-43d3-9273-525879183c06/volumes" Dec 03 13:20:29 crc kubenswrapper[4986]: I1203 13:20:29.924809 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a2fa86dd-b33a-48fe-a2bf-646498f0db5a","Type":"ContainerStarted","Data":"92951fe17102b40465e5e3c92bc65571bc1963d40d01fe5ff1d081aaaabd9fd2"} Dec 03 13:20:29 crc kubenswrapper[4986]: I1203 13:20:29.925135 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"a2fa86dd-b33a-48fe-a2bf-646498f0db5a","Type":"ContainerStarted","Data":"aab43c6f006d6ab9fd0fe6e0f6173cbc31962ee097e48bd533174ea7cc821829"} Dec 03 13:20:29 crc kubenswrapper[4986]: I1203 13:20:29.926901 4986 generic.go:334] "Generic (PLEG): container finished" podID="7fd749d1-c0e5-4462-a7d0-62586902c0b7" containerID="39f52a6451ce714e952e4e69311ae115a7b27864d53b175c0ff574328f8ae3fb" exitCode=0 Dec 03 13:20:29 crc kubenswrapper[4986]: I1203 13:20:29.926930 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-bxnmh" event={"ID":"7fd749d1-c0e5-4462-a7d0-62586902c0b7","Type":"ContainerDied","Data":"39f52a6451ce714e952e4e69311ae115a7b27864d53b175c0ff574328f8ae3fb"} Dec 03 13:20:29 crc kubenswrapper[4986]: I1203 13:20:29.967741 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.967722114 podStartE2EDuration="2.967722114s" podCreationTimestamp="2025-12-03 13:20:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:20:29.950476558 +0000 UTC m=+1489.416907759" watchObservedRunningTime="2025-12-03 13:20:29.967722114 +0000 UTC m=+1489.434153305" Dec 03 13:20:30 crc kubenswrapper[4986]: I1203 13:20:30.940206 4986 generic.go:334] "Generic (PLEG): container finished" podID="5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3" containerID="e9e5afb51ce3239b75fca2b2016d9fd12ca14cb8e23518ccf5f77e5f63265b67" exitCode=0 Dec 03 13:20:30 crc kubenswrapper[4986]: I1203 13:20:30.940311 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-5z7tp" event={"ID":"5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3","Type":"ContainerDied","Data":"e9e5afb51ce3239b75fca2b2016d9fd12ca14cb8e23518ccf5f77e5f63265b67"} Dec 03 13:20:31 crc kubenswrapper[4986]: I1203 13:20:31.144881 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-api-0" Dec 03 13:20:31 crc kubenswrapper[4986]: I1203 13:20:31.145333 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 03 13:20:31 crc kubenswrapper[4986]: I1203 13:20:31.401151 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-bxnmh" Dec 03 13:20:31 crc kubenswrapper[4986]: I1203 13:20:31.411870 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 03 13:20:31 crc kubenswrapper[4986]: I1203 13:20:31.447976 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 03 13:20:31 crc kubenswrapper[4986]: I1203 13:20:31.453466 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-845d6d6f59-447fh" Dec 03 13:20:31 crc kubenswrapper[4986]: I1203 13:20:31.461797 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cv5dh\" (UniqueName: \"kubernetes.io/projected/7fd749d1-c0e5-4462-a7d0-62586902c0b7-kube-api-access-cv5dh\") pod \"7fd749d1-c0e5-4462-a7d0-62586902c0b7\" (UID: \"7fd749d1-c0e5-4462-a7d0-62586902c0b7\") " Dec 03 13:20:31 crc kubenswrapper[4986]: I1203 13:20:31.461966 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fd749d1-c0e5-4462-a7d0-62586902c0b7-config-data\") pod \"7fd749d1-c0e5-4462-a7d0-62586902c0b7\" (UID: \"7fd749d1-c0e5-4462-a7d0-62586902c0b7\") " Dec 03 13:20:31 crc kubenswrapper[4986]: I1203 13:20:31.462017 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fd749d1-c0e5-4462-a7d0-62586902c0b7-combined-ca-bundle\") pod \"7fd749d1-c0e5-4462-a7d0-62586902c0b7\" (UID: \"7fd749d1-c0e5-4462-a7d0-62586902c0b7\") " Dec 03 13:20:31 crc 
kubenswrapper[4986]: I1203 13:20:31.462076 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fd749d1-c0e5-4462-a7d0-62586902c0b7-scripts\") pod \"7fd749d1-c0e5-4462-a7d0-62586902c0b7\" (UID: \"7fd749d1-c0e5-4462-a7d0-62586902c0b7\") " Dec 03 13:20:31 crc kubenswrapper[4986]: I1203 13:20:31.483407 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fd749d1-c0e5-4462-a7d0-62586902c0b7-scripts" (OuterVolumeSpecName: "scripts") pod "7fd749d1-c0e5-4462-a7d0-62586902c0b7" (UID: "7fd749d1-c0e5-4462-a7d0-62586902c0b7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:20:31 crc kubenswrapper[4986]: I1203 13:20:31.483546 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fd749d1-c0e5-4462-a7d0-62586902c0b7-kube-api-access-cv5dh" (OuterVolumeSpecName: "kube-api-access-cv5dh") pod "7fd749d1-c0e5-4462-a7d0-62586902c0b7" (UID: "7fd749d1-c0e5-4462-a7d0-62586902c0b7"). InnerVolumeSpecName "kube-api-access-cv5dh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:20:31 crc kubenswrapper[4986]: I1203 13:20:31.511534 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fd749d1-c0e5-4462-a7d0-62586902c0b7-config-data" (OuterVolumeSpecName: "config-data") pod "7fd749d1-c0e5-4462-a7d0-62586902c0b7" (UID: "7fd749d1-c0e5-4462-a7d0-62586902c0b7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:20:31 crc kubenswrapper[4986]: I1203 13:20:31.520738 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fd749d1-c0e5-4462-a7d0-62586902c0b7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7fd749d1-c0e5-4462-a7d0-62586902c0b7" (UID: "7fd749d1-c0e5-4462-a7d0-62586902c0b7"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:20:31 crc kubenswrapper[4986]: I1203 13:20:31.543703 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-xrr75"] Dec 03 13:20:31 crc kubenswrapper[4986]: I1203 13:20:31.543951 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5784cf869f-xrr75" podUID="e622eeb7-7b89-4328-a775-bc363cb7ebd5" containerName="dnsmasq-dns" containerID="cri-o://ff502cffb9a9a0fe3917269c6f052b0c2801688cba77eb00d830b4a8272f5ba7" gracePeriod=10 Dec 03 13:20:31 crc kubenswrapper[4986]: I1203 13:20:31.568517 4986 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fd749d1-c0e5-4462-a7d0-62586902c0b7-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 13:20:31 crc kubenswrapper[4986]: I1203 13:20:31.568555 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cv5dh\" (UniqueName: \"kubernetes.io/projected/7fd749d1-c0e5-4462-a7d0-62586902c0b7-kube-api-access-cv5dh\") on node \"crc\" DevicePath \"\"" Dec 03 13:20:31 crc kubenswrapper[4986]: I1203 13:20:31.568567 4986 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fd749d1-c0e5-4462-a7d0-62586902c0b7-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 13:20:31 crc kubenswrapper[4986]: I1203 13:20:31.568575 4986 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fd749d1-c0e5-4462-a7d0-62586902c0b7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 13:20:31 crc kubenswrapper[4986]: I1203 13:20:31.952707 4986 generic.go:334] "Generic (PLEG): container finished" podID="e622eeb7-7b89-4328-a775-bc363cb7ebd5" containerID="ff502cffb9a9a0fe3917269c6f052b0c2801688cba77eb00d830b4a8272f5ba7" exitCode=0 Dec 03 13:20:31 crc kubenswrapper[4986]: I1203 
13:20:31.952779 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-xrr75" event={"ID":"e622eeb7-7b89-4328-a775-bc363cb7ebd5","Type":"ContainerDied","Data":"ff502cffb9a9a0fe3917269c6f052b0c2801688cba77eb00d830b4a8272f5ba7"} Dec 03 13:20:31 crc kubenswrapper[4986]: I1203 13:20:31.954691 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-bxnmh" Dec 03 13:20:31 crc kubenswrapper[4986]: I1203 13:20:31.957551 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-bxnmh" event={"ID":"7fd749d1-c0e5-4462-a7d0-62586902c0b7","Type":"ContainerDied","Data":"a6a2d462eea4c1003a0ea30e7bd402891dbb12d10e541a868e2a5b0e726928b4"} Dec 03 13:20:31 crc kubenswrapper[4986]: I1203 13:20:31.957622 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6a2d462eea4c1003a0ea30e7bd402891dbb12d10e541a868e2a5b0e726928b4" Dec 03 13:20:32 crc kubenswrapper[4986]: I1203 13:20:32.025746 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 03 13:20:32 crc kubenswrapper[4986]: I1203 13:20:32.030510 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-xrr75" Dec 03 13:20:32 crc kubenswrapper[4986]: I1203 13:20:32.078721 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrffk\" (UniqueName: \"kubernetes.io/projected/e622eeb7-7b89-4328-a775-bc363cb7ebd5-kube-api-access-jrffk\") pod \"e622eeb7-7b89-4328-a775-bc363cb7ebd5\" (UID: \"e622eeb7-7b89-4328-a775-bc363cb7ebd5\") " Dec 03 13:20:32 crc kubenswrapper[4986]: I1203 13:20:32.078845 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e622eeb7-7b89-4328-a775-bc363cb7ebd5-dns-svc\") pod \"e622eeb7-7b89-4328-a775-bc363cb7ebd5\" (UID: \"e622eeb7-7b89-4328-a775-bc363cb7ebd5\") " Dec 03 13:20:32 crc kubenswrapper[4986]: I1203 13:20:32.078866 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e622eeb7-7b89-4328-a775-bc363cb7ebd5-dns-swift-storage-0\") pod \"e622eeb7-7b89-4328-a775-bc363cb7ebd5\" (UID: \"e622eeb7-7b89-4328-a775-bc363cb7ebd5\") " Dec 03 13:20:32 crc kubenswrapper[4986]: I1203 13:20:32.078882 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e622eeb7-7b89-4328-a775-bc363cb7ebd5-config\") pod \"e622eeb7-7b89-4328-a775-bc363cb7ebd5\" (UID: \"e622eeb7-7b89-4328-a775-bc363cb7ebd5\") " Dec 03 13:20:32 crc kubenswrapper[4986]: I1203 13:20:32.078953 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e622eeb7-7b89-4328-a775-bc363cb7ebd5-ovsdbserver-nb\") pod \"e622eeb7-7b89-4328-a775-bc363cb7ebd5\" (UID: \"e622eeb7-7b89-4328-a775-bc363cb7ebd5\") " Dec 03 13:20:32 crc kubenswrapper[4986]: I1203 13:20:32.079008 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/e622eeb7-7b89-4328-a775-bc363cb7ebd5-ovsdbserver-sb\") pod \"e622eeb7-7b89-4328-a775-bc363cb7ebd5\" (UID: \"e622eeb7-7b89-4328-a775-bc363cb7ebd5\") " Dec 03 13:20:32 crc kubenswrapper[4986]: I1203 13:20:32.105363 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e622eeb7-7b89-4328-a775-bc363cb7ebd5-kube-api-access-jrffk" (OuterVolumeSpecName: "kube-api-access-jrffk") pod "e622eeb7-7b89-4328-a775-bc363cb7ebd5" (UID: "e622eeb7-7b89-4328-a775-bc363cb7ebd5"). InnerVolumeSpecName "kube-api-access-jrffk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:20:32 crc kubenswrapper[4986]: I1203 13:20:32.157604 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e622eeb7-7b89-4328-a775-bc363cb7ebd5-config" (OuterVolumeSpecName: "config") pod "e622eeb7-7b89-4328-a775-bc363cb7ebd5" (UID: "e622eeb7-7b89-4328-a775-bc363cb7ebd5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:20:32 crc kubenswrapper[4986]: I1203 13:20:32.157902 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e622eeb7-7b89-4328-a775-bc363cb7ebd5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e622eeb7-7b89-4328-a775-bc363cb7ebd5" (UID: "e622eeb7-7b89-4328-a775-bc363cb7ebd5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:20:32 crc kubenswrapper[4986]: I1203 13:20:32.165996 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e622eeb7-7b89-4328-a775-bc363cb7ebd5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e622eeb7-7b89-4328-a775-bc363cb7ebd5" (UID: "e622eeb7-7b89-4328-a775-bc363cb7ebd5"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:20:32 crc kubenswrapper[4986]: I1203 13:20:32.185049 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrffk\" (UniqueName: \"kubernetes.io/projected/e622eeb7-7b89-4328-a775-bc363cb7ebd5-kube-api-access-jrffk\") on node \"crc\" DevicePath \"\"" Dec 03 13:20:32 crc kubenswrapper[4986]: I1203 13:20:32.185090 4986 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e622eeb7-7b89-4328-a775-bc363cb7ebd5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 03 13:20:32 crc kubenswrapper[4986]: I1203 13:20:32.185099 4986 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e622eeb7-7b89-4328-a775-bc363cb7ebd5-config\") on node \"crc\" DevicePath \"\"" Dec 03 13:20:32 crc kubenswrapper[4986]: I1203 13:20:32.185109 4986 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e622eeb7-7b89-4328-a775-bc363cb7ebd5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 03 13:20:32 crc kubenswrapper[4986]: I1203 13:20:32.185974 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e622eeb7-7b89-4328-a775-bc363cb7ebd5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e622eeb7-7b89-4328-a775-bc363cb7ebd5" (UID: "e622eeb7-7b89-4328-a775-bc363cb7ebd5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:20:32 crc kubenswrapper[4986]: I1203 13:20:32.202382 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e622eeb7-7b89-4328-a775-bc363cb7ebd5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e622eeb7-7b89-4328-a775-bc363cb7ebd5" (UID: "e622eeb7-7b89-4328-a775-bc363cb7ebd5"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:20:32 crc kubenswrapper[4986]: I1203 13:20:32.213170 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 03 13:20:32 crc kubenswrapper[4986]: I1203 13:20:32.213400 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="217928c1-298a-447c-b601-a153b73b7596" containerName="nova-api-log" containerID="cri-o://3029112c7e97cac2b4783557a4faf7526d6cdec96480dfda248afbb8c48999a5" gracePeriod=30 Dec 03 13:20:32 crc kubenswrapper[4986]: I1203 13:20:32.213541 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="217928c1-298a-447c-b601-a153b73b7596" containerName="nova-api-api" containerID="cri-o://5f2078454d1c30138dc752a530fa12392c4d7036699f5fe710f69c968423b2f2" gracePeriod=30 Dec 03 13:20:32 crc kubenswrapper[4986]: I1203 13:20:32.229471 4986 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="217928c1-298a-447c-b601-a153b73b7596" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.185:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 03 13:20:32 crc kubenswrapper[4986]: I1203 13:20:32.229589 4986 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="217928c1-298a-447c-b601-a153b73b7596" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.185:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 03 13:20:32 crc kubenswrapper[4986]: I1203 13:20:32.242854 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 13:20:32 crc kubenswrapper[4986]: I1203 13:20:32.243062 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a2fa86dd-b33a-48fe-a2bf-646498f0db5a" 
containerName="nova-metadata-log" containerID="cri-o://aab43c6f006d6ab9fd0fe6e0f6173cbc31962ee097e48bd533174ea7cc821829" gracePeriod=30 Dec 03 13:20:32 crc kubenswrapper[4986]: I1203 13:20:32.243480 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a2fa86dd-b33a-48fe-a2bf-646498f0db5a" containerName="nova-metadata-metadata" containerID="cri-o://92951fe17102b40465e5e3c92bc65571bc1963d40d01fe5ff1d081aaaabd9fd2" gracePeriod=30 Dec 03 13:20:32 crc kubenswrapper[4986]: I1203 13:20:32.286815 4986 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e622eeb7-7b89-4328-a775-bc363cb7ebd5-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 13:20:32 crc kubenswrapper[4986]: I1203 13:20:32.286856 4986 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e622eeb7-7b89-4328-a775-bc363cb7ebd5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 03 13:20:32 crc kubenswrapper[4986]: I1203 13:20:32.411471 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-5z7tp" Dec 03 13:20:32 crc kubenswrapper[4986]: I1203 13:20:32.489812 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3-scripts\") pod \"5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3\" (UID: \"5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3\") " Dec 03 13:20:32 crc kubenswrapper[4986]: I1203 13:20:32.490058 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpk6l\" (UniqueName: \"kubernetes.io/projected/5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3-kube-api-access-gpk6l\") pod \"5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3\" (UID: \"5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3\") " Dec 03 13:20:32 crc kubenswrapper[4986]: I1203 13:20:32.490118 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3-config-data\") pod \"5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3\" (UID: \"5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3\") " Dec 03 13:20:32 crc kubenswrapper[4986]: I1203 13:20:32.490190 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3-combined-ca-bundle\") pod \"5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3\" (UID: \"5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3\") " Dec 03 13:20:32 crc kubenswrapper[4986]: I1203 13:20:32.499850 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3-scripts" (OuterVolumeSpecName: "scripts") pod "5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3" (UID: "5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:20:32 crc kubenswrapper[4986]: I1203 13:20:32.500511 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3-kube-api-access-gpk6l" (OuterVolumeSpecName: "kube-api-access-gpk6l") pod "5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3" (UID: "5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3"). InnerVolumeSpecName "kube-api-access-gpk6l". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:20:32 crc kubenswrapper[4986]: I1203 13:20:32.580448 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3" (UID: "5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:20:32 crc kubenswrapper[4986]: I1203 13:20:32.603138 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpk6l\" (UniqueName: \"kubernetes.io/projected/5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3-kube-api-access-gpk6l\") on node \"crc\" DevicePath \"\"" Dec 03 13:20:32 crc kubenswrapper[4986]: I1203 13:20:32.603184 4986 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 13:20:32 crc kubenswrapper[4986]: I1203 13:20:32.603195 4986 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 13:20:32 crc kubenswrapper[4986]: I1203 13:20:32.610045 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3-config-data" 
(OuterVolumeSpecName: "config-data") pod "5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3" (UID: "5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:20:32 crc kubenswrapper[4986]: I1203 13:20:32.705171 4986 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 13:20:32 crc kubenswrapper[4986]: I1203 13:20:32.770742 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 13:20:32 crc kubenswrapper[4986]: I1203 13:20:32.997343 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-5z7tp" Dec 03 13:20:32 crc kubenswrapper[4986]: I1203 13:20:32.998442 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-5z7tp" event={"ID":"5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3","Type":"ContainerDied","Data":"474660fb131db434128c4853b99df7fa33dc0e1729a11e84ed0ea4b62481945d"} Dec 03 13:20:32 crc kubenswrapper[4986]: I1203 13:20:32.998803 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="474660fb131db434128c4853b99df7fa33dc0e1729a11e84ed0ea4b62481945d" Dec 03 13:20:33 crc kubenswrapper[4986]: I1203 13:20:33.041649 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-xrr75" event={"ID":"e622eeb7-7b89-4328-a775-bc363cb7ebd5","Type":"ContainerDied","Data":"b17c17ef05f26f6096d4c38db5284c6cef39cdd9a0bfb49e9c98dea2070f33b1"} Dec 03 13:20:33 crc kubenswrapper[4986]: I1203 13:20:33.041694 4986 scope.go:117] "RemoveContainer" containerID="ff502cffb9a9a0fe3917269c6f052b0c2801688cba77eb00d830b4a8272f5ba7" Dec 03 13:20:33 crc kubenswrapper[4986]: I1203 13:20:33.041801 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-xrr75" Dec 03 13:20:33 crc kubenswrapper[4986]: I1203 13:20:33.104568 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 03 13:20:33 crc kubenswrapper[4986]: E1203 13:20:33.105005 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e622eeb7-7b89-4328-a775-bc363cb7ebd5" containerName="dnsmasq-dns" Dec 03 13:20:33 crc kubenswrapper[4986]: I1203 13:20:33.105017 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="e622eeb7-7b89-4328-a775-bc363cb7ebd5" containerName="dnsmasq-dns" Dec 03 13:20:33 crc kubenswrapper[4986]: E1203 13:20:33.105036 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3" containerName="nova-cell1-conductor-db-sync" Dec 03 13:20:33 crc kubenswrapper[4986]: I1203 13:20:33.105042 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3" containerName="nova-cell1-conductor-db-sync" Dec 03 13:20:33 crc kubenswrapper[4986]: E1203 13:20:33.105066 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fd749d1-c0e5-4462-a7d0-62586902c0b7" containerName="nova-manage" Dec 03 13:20:33 crc kubenswrapper[4986]: I1203 13:20:33.105072 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fd749d1-c0e5-4462-a7d0-62586902c0b7" containerName="nova-manage" Dec 03 13:20:33 crc kubenswrapper[4986]: E1203 13:20:33.105084 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e622eeb7-7b89-4328-a775-bc363cb7ebd5" containerName="init" Dec 03 13:20:33 crc kubenswrapper[4986]: I1203 13:20:33.105090 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="e622eeb7-7b89-4328-a775-bc363cb7ebd5" containerName="init" Dec 03 13:20:33 crc kubenswrapper[4986]: I1203 13:20:33.105253 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3" 
containerName="nova-cell1-conductor-db-sync" Dec 03 13:20:33 crc kubenswrapper[4986]: I1203 13:20:33.105270 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="e622eeb7-7b89-4328-a775-bc363cb7ebd5" containerName="dnsmasq-dns" Dec 03 13:20:33 crc kubenswrapper[4986]: I1203 13:20:33.105338 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fd749d1-c0e5-4462-a7d0-62586902c0b7" containerName="nova-manage" Dec 03 13:20:33 crc kubenswrapper[4986]: I1203 13:20:33.106096 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 03 13:20:33 crc kubenswrapper[4986]: I1203 13:20:33.106499 4986 generic.go:334] "Generic (PLEG): container finished" podID="217928c1-298a-447c-b601-a153b73b7596" containerID="3029112c7e97cac2b4783557a4faf7526d6cdec96480dfda248afbb8c48999a5" exitCode=143 Dec 03 13:20:33 crc kubenswrapper[4986]: I1203 13:20:33.106590 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"217928c1-298a-447c-b601-a153b73b7596","Type":"ContainerDied","Data":"3029112c7e97cac2b4783557a4faf7526d6cdec96480dfda248afbb8c48999a5"} Dec 03 13:20:33 crc kubenswrapper[4986]: I1203 13:20:33.111695 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 03 13:20:33 crc kubenswrapper[4986]: I1203 13:20:33.113398 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70518715-2cd8-4268-ae57-aaa98fa28843-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"70518715-2cd8-4268-ae57-aaa98fa28843\") " pod="openstack/nova-cell1-conductor-0" Dec 03 13:20:33 crc kubenswrapper[4986]: I1203 13:20:33.113462 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bczql\" (UniqueName: 
\"kubernetes.io/projected/70518715-2cd8-4268-ae57-aaa98fa28843-kube-api-access-bczql\") pod \"nova-cell1-conductor-0\" (UID: \"70518715-2cd8-4268-ae57-aaa98fa28843\") " pod="openstack/nova-cell1-conductor-0" Dec 03 13:20:33 crc kubenswrapper[4986]: I1203 13:20:33.113609 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70518715-2cd8-4268-ae57-aaa98fa28843-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"70518715-2cd8-4268-ae57-aaa98fa28843\") " pod="openstack/nova-cell1-conductor-0" Dec 03 13:20:33 crc kubenswrapper[4986]: I1203 13:20:33.131139 4986 generic.go:334] "Generic (PLEG): container finished" podID="a2fa86dd-b33a-48fe-a2bf-646498f0db5a" containerID="92951fe17102b40465e5e3c92bc65571bc1963d40d01fe5ff1d081aaaabd9fd2" exitCode=0 Dec 03 13:20:33 crc kubenswrapper[4986]: I1203 13:20:33.131173 4986 generic.go:334] "Generic (PLEG): container finished" podID="a2fa86dd-b33a-48fe-a2bf-646498f0db5a" containerID="aab43c6f006d6ab9fd0fe6e0f6173cbc31962ee097e48bd533174ea7cc821829" exitCode=143 Dec 03 13:20:33 crc kubenswrapper[4986]: I1203 13:20:33.133324 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a2fa86dd-b33a-48fe-a2bf-646498f0db5a","Type":"ContainerDied","Data":"92951fe17102b40465e5e3c92bc65571bc1963d40d01fe5ff1d081aaaabd9fd2"} Dec 03 13:20:33 crc kubenswrapper[4986]: I1203 13:20:33.133368 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a2fa86dd-b33a-48fe-a2bf-646498f0db5a","Type":"ContainerDied","Data":"aab43c6f006d6ab9fd0fe6e0f6173cbc31962ee097e48bd533174ea7cc821829"} Dec 03 13:20:33 crc kubenswrapper[4986]: I1203 13:20:33.137679 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 03 13:20:33 crc kubenswrapper[4986]: I1203 13:20:33.165943 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-5784cf869f-xrr75"] Dec 03 13:20:33 crc kubenswrapper[4986]: I1203 13:20:33.179008 4986 scope.go:117] "RemoveContainer" containerID="3b452d881bfbe7e6782935c827b149861070ea7f9b3cc398c0161668aa50c0f8" Dec 03 13:20:33 crc kubenswrapper[4986]: I1203 13:20:33.191518 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-xrr75"] Dec 03 13:20:33 crc kubenswrapper[4986]: I1203 13:20:33.214181 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70518715-2cd8-4268-ae57-aaa98fa28843-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"70518715-2cd8-4268-ae57-aaa98fa28843\") " pod="openstack/nova-cell1-conductor-0" Dec 03 13:20:33 crc kubenswrapper[4986]: I1203 13:20:33.214241 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70518715-2cd8-4268-ae57-aaa98fa28843-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"70518715-2cd8-4268-ae57-aaa98fa28843\") " pod="openstack/nova-cell1-conductor-0" Dec 03 13:20:33 crc kubenswrapper[4986]: I1203 13:20:33.214294 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bczql\" (UniqueName: \"kubernetes.io/projected/70518715-2cd8-4268-ae57-aaa98fa28843-kube-api-access-bczql\") pod \"nova-cell1-conductor-0\" (UID: \"70518715-2cd8-4268-ae57-aaa98fa28843\") " pod="openstack/nova-cell1-conductor-0" Dec 03 13:20:33 crc kubenswrapper[4986]: I1203 13:20:33.220961 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 13:20:33 crc kubenswrapper[4986]: I1203 13:20:33.225908 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70518715-2cd8-4268-ae57-aaa98fa28843-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"70518715-2cd8-4268-ae57-aaa98fa28843\") " pod="openstack/nova-cell1-conductor-0" Dec 03 13:20:33 crc kubenswrapper[4986]: I1203 13:20:33.232837 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70518715-2cd8-4268-ae57-aaa98fa28843-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"70518715-2cd8-4268-ae57-aaa98fa28843\") " pod="openstack/nova-cell1-conductor-0" Dec 03 13:20:33 crc kubenswrapper[4986]: I1203 13:20:33.236830 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bczql\" (UniqueName: \"kubernetes.io/projected/70518715-2cd8-4268-ae57-aaa98fa28843-kube-api-access-bczql\") pod \"nova-cell1-conductor-0\" (UID: \"70518715-2cd8-4268-ae57-aaa98fa28843\") " pod="openstack/nova-cell1-conductor-0" Dec 03 13:20:33 crc kubenswrapper[4986]: I1203 13:20:33.315920 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2fa86dd-b33a-48fe-a2bf-646498f0db5a-logs\") pod \"a2fa86dd-b33a-48fe-a2bf-646498f0db5a\" (UID: \"a2fa86dd-b33a-48fe-a2bf-646498f0db5a\") " Dec 03 13:20:33 crc kubenswrapper[4986]: I1203 13:20:33.316142 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2fa86dd-b33a-48fe-a2bf-646498f0db5a-config-data\") pod \"a2fa86dd-b33a-48fe-a2bf-646498f0db5a\" (UID: \"a2fa86dd-b33a-48fe-a2bf-646498f0db5a\") " Dec 03 13:20:33 crc kubenswrapper[4986]: I1203 13:20:33.316214 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/a2fa86dd-b33a-48fe-a2bf-646498f0db5a-logs" (OuterVolumeSpecName: "logs") pod "a2fa86dd-b33a-48fe-a2bf-646498f0db5a" (UID: "a2fa86dd-b33a-48fe-a2bf-646498f0db5a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:20:33 crc kubenswrapper[4986]: I1203 13:20:33.316229 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2fa86dd-b33a-48fe-a2bf-646498f0db5a-combined-ca-bundle\") pod \"a2fa86dd-b33a-48fe-a2bf-646498f0db5a\" (UID: \"a2fa86dd-b33a-48fe-a2bf-646498f0db5a\") " Dec 03 13:20:33 crc kubenswrapper[4986]: I1203 13:20:33.316337 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqh7h\" (UniqueName: \"kubernetes.io/projected/a2fa86dd-b33a-48fe-a2bf-646498f0db5a-kube-api-access-bqh7h\") pod \"a2fa86dd-b33a-48fe-a2bf-646498f0db5a\" (UID: \"a2fa86dd-b33a-48fe-a2bf-646498f0db5a\") " Dec 03 13:20:33 crc kubenswrapper[4986]: I1203 13:20:33.316399 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2fa86dd-b33a-48fe-a2bf-646498f0db5a-nova-metadata-tls-certs\") pod \"a2fa86dd-b33a-48fe-a2bf-646498f0db5a\" (UID: \"a2fa86dd-b33a-48fe-a2bf-646498f0db5a\") " Dec 03 13:20:33 crc kubenswrapper[4986]: I1203 13:20:33.316994 4986 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2fa86dd-b33a-48fe-a2bf-646498f0db5a-logs\") on node \"crc\" DevicePath \"\"" Dec 03 13:20:33 crc kubenswrapper[4986]: I1203 13:20:33.320311 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2fa86dd-b33a-48fe-a2bf-646498f0db5a-kube-api-access-bqh7h" (OuterVolumeSpecName: "kube-api-access-bqh7h") pod "a2fa86dd-b33a-48fe-a2bf-646498f0db5a" (UID: "a2fa86dd-b33a-48fe-a2bf-646498f0db5a"). 
InnerVolumeSpecName "kube-api-access-bqh7h". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:20:33 crc kubenswrapper[4986]: I1203 13:20:33.349484 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2fa86dd-b33a-48fe-a2bf-646498f0db5a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a2fa86dd-b33a-48fe-a2bf-646498f0db5a" (UID: "a2fa86dd-b33a-48fe-a2bf-646498f0db5a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:20:33 crc kubenswrapper[4986]: I1203 13:20:33.355391 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2fa86dd-b33a-48fe-a2bf-646498f0db5a-config-data" (OuterVolumeSpecName: "config-data") pod "a2fa86dd-b33a-48fe-a2bf-646498f0db5a" (UID: "a2fa86dd-b33a-48fe-a2bf-646498f0db5a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:20:33 crc kubenswrapper[4986]: I1203 13:20:33.386561 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2fa86dd-b33a-48fe-a2bf-646498f0db5a-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "a2fa86dd-b33a-48fe-a2bf-646498f0db5a" (UID: "a2fa86dd-b33a-48fe-a2bf-646498f0db5a"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:20:33 crc kubenswrapper[4986]: I1203 13:20:33.419174 4986 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2fa86dd-b33a-48fe-a2bf-646498f0db5a-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 13:20:33 crc kubenswrapper[4986]: I1203 13:20:33.419222 4986 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2fa86dd-b33a-48fe-a2bf-646498f0db5a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 13:20:33 crc kubenswrapper[4986]: I1203 13:20:33.419239 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqh7h\" (UniqueName: \"kubernetes.io/projected/a2fa86dd-b33a-48fe-a2bf-646498f0db5a-kube-api-access-bqh7h\") on node \"crc\" DevicePath \"\"" Dec 03 13:20:33 crc kubenswrapper[4986]: I1203 13:20:33.419250 4986 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2fa86dd-b33a-48fe-a2bf-646498f0db5a-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 03 13:20:33 crc kubenswrapper[4986]: I1203 13:20:33.466440 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 03 13:20:33 crc kubenswrapper[4986]: I1203 13:20:33.490655 4986 patch_prober.go:28] interesting pod/machine-config-daemon-xggpj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 13:20:33 crc kubenswrapper[4986]: I1203 13:20:33.490708 4986 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 13:20:33 crc kubenswrapper[4986]: I1203 13:20:33.923184 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 03 13:20:33 crc kubenswrapper[4986]: W1203 13:20:33.924002 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70518715_2cd8_4268_ae57_aaa98fa28843.slice/crio-f717904d02dca6f61dd307b7892ec5cfd6b4a611c40ad76943d2366c1e16ee02 WatchSource:0}: Error finding container f717904d02dca6f61dd307b7892ec5cfd6b4a611c40ad76943d2366c1e16ee02: Status 404 returned error can't find the container with id f717904d02dca6f61dd307b7892ec5cfd6b4a611c40ad76943d2366c1e16ee02 Dec 03 13:20:34 crc kubenswrapper[4986]: I1203 13:20:34.143199 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"70518715-2cd8-4268-ae57-aaa98fa28843","Type":"ContainerStarted","Data":"7a7c98f235760cfaa54196fe9ffc39c52b278bd6549839cff107a75955ca73bd"} Dec 03 13:20:34 crc kubenswrapper[4986]: I1203 13:20:34.143540 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Dec 03 
13:20:34 crc kubenswrapper[4986]: I1203 13:20:34.143551 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"70518715-2cd8-4268-ae57-aaa98fa28843","Type":"ContainerStarted","Data":"f717904d02dca6f61dd307b7892ec5cfd6b4a611c40ad76943d2366c1e16ee02"} Dec 03 13:20:34 crc kubenswrapper[4986]: I1203 13:20:34.148569 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 13:20:34 crc kubenswrapper[4986]: I1203 13:20:34.148559 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a2fa86dd-b33a-48fe-a2bf-646498f0db5a","Type":"ContainerDied","Data":"0fde23e63ee5b42c284b7cfed00282432bffa59ebd0fb0bb2781b2ae5fa73527"} Dec 03 13:20:34 crc kubenswrapper[4986]: I1203 13:20:34.148700 4986 scope.go:117] "RemoveContainer" containerID="92951fe17102b40465e5e3c92bc65571bc1963d40d01fe5ff1d081aaaabd9fd2" Dec 03 13:20:34 crc kubenswrapper[4986]: I1203 13:20:34.150301 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="42af691e-5613-44b4-b671-def2aca3dfa4" containerName="nova-scheduler-scheduler" containerID="cri-o://e7e127cd7a8ce13540b1b6ee30de93233d9de7af7353c9c437cc58c951d5ca2e" gracePeriod=30 Dec 03 13:20:34 crc kubenswrapper[4986]: I1203 13:20:34.164722 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=1.164704214 podStartE2EDuration="1.164704214s" podCreationTimestamp="2025-12-03 13:20:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:20:34.163405038 +0000 UTC m=+1493.629836229" watchObservedRunningTime="2025-12-03 13:20:34.164704214 +0000 UTC m=+1493.631135405" Dec 03 13:20:34 crc kubenswrapper[4986]: I1203 13:20:34.178091 4986 scope.go:117] "RemoveContainer" 
containerID="aab43c6f006d6ab9fd0fe6e0f6173cbc31962ee097e48bd533174ea7cc821829" Dec 03 13:20:34 crc kubenswrapper[4986]: I1203 13:20:34.207565 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 13:20:34 crc kubenswrapper[4986]: I1203 13:20:34.220855 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 13:20:34 crc kubenswrapper[4986]: I1203 13:20:34.229169 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 03 13:20:34 crc kubenswrapper[4986]: E1203 13:20:34.229573 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2fa86dd-b33a-48fe-a2bf-646498f0db5a" containerName="nova-metadata-log" Dec 03 13:20:34 crc kubenswrapper[4986]: I1203 13:20:34.229593 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2fa86dd-b33a-48fe-a2bf-646498f0db5a" containerName="nova-metadata-log" Dec 03 13:20:34 crc kubenswrapper[4986]: E1203 13:20:34.229625 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2fa86dd-b33a-48fe-a2bf-646498f0db5a" containerName="nova-metadata-metadata" Dec 03 13:20:34 crc kubenswrapper[4986]: I1203 13:20:34.229632 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2fa86dd-b33a-48fe-a2bf-646498f0db5a" containerName="nova-metadata-metadata" Dec 03 13:20:34 crc kubenswrapper[4986]: I1203 13:20:34.229835 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2fa86dd-b33a-48fe-a2bf-646498f0db5a" containerName="nova-metadata-log" Dec 03 13:20:34 crc kubenswrapper[4986]: I1203 13:20:34.229861 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2fa86dd-b33a-48fe-a2bf-646498f0db5a" containerName="nova-metadata-metadata" Dec 03 13:20:34 crc kubenswrapper[4986]: I1203 13:20:34.231165 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 13:20:34 crc kubenswrapper[4986]: I1203 13:20:34.235762 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 03 13:20:34 crc kubenswrapper[4986]: I1203 13:20:34.235787 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 03 13:20:34 crc kubenswrapper[4986]: I1203 13:20:34.238588 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 13:20:34 crc kubenswrapper[4986]: I1203 13:20:34.340166 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndp92\" (UniqueName: \"kubernetes.io/projected/0b17a506-eb92-4170-a2cf-0a563e427197-kube-api-access-ndp92\") pod \"nova-metadata-0\" (UID: \"0b17a506-eb92-4170-a2cf-0a563e427197\") " pod="openstack/nova-metadata-0" Dec 03 13:20:34 crc kubenswrapper[4986]: I1203 13:20:34.340473 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b17a506-eb92-4170-a2cf-0a563e427197-logs\") pod \"nova-metadata-0\" (UID: \"0b17a506-eb92-4170-a2cf-0a563e427197\") " pod="openstack/nova-metadata-0" Dec 03 13:20:34 crc kubenswrapper[4986]: I1203 13:20:34.340627 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b17a506-eb92-4170-a2cf-0a563e427197-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0b17a506-eb92-4170-a2cf-0a563e427197\") " pod="openstack/nova-metadata-0" Dec 03 13:20:34 crc kubenswrapper[4986]: I1203 13:20:34.340798 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b17a506-eb92-4170-a2cf-0a563e427197-nova-metadata-tls-certs\") pod 
\"nova-metadata-0\" (UID: \"0b17a506-eb92-4170-a2cf-0a563e427197\") " pod="openstack/nova-metadata-0" Dec 03 13:20:34 crc kubenswrapper[4986]: I1203 13:20:34.340904 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b17a506-eb92-4170-a2cf-0a563e427197-config-data\") pod \"nova-metadata-0\" (UID: \"0b17a506-eb92-4170-a2cf-0a563e427197\") " pod="openstack/nova-metadata-0" Dec 03 13:20:34 crc kubenswrapper[4986]: I1203 13:20:34.442392 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b17a506-eb92-4170-a2cf-0a563e427197-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0b17a506-eb92-4170-a2cf-0a563e427197\") " pod="openstack/nova-metadata-0" Dec 03 13:20:34 crc kubenswrapper[4986]: I1203 13:20:34.442467 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b17a506-eb92-4170-a2cf-0a563e427197-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0b17a506-eb92-4170-a2cf-0a563e427197\") " pod="openstack/nova-metadata-0" Dec 03 13:20:34 crc kubenswrapper[4986]: I1203 13:20:34.442488 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b17a506-eb92-4170-a2cf-0a563e427197-config-data\") pod \"nova-metadata-0\" (UID: \"0b17a506-eb92-4170-a2cf-0a563e427197\") " pod="openstack/nova-metadata-0" Dec 03 13:20:34 crc kubenswrapper[4986]: I1203 13:20:34.442573 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndp92\" (UniqueName: \"kubernetes.io/projected/0b17a506-eb92-4170-a2cf-0a563e427197-kube-api-access-ndp92\") pod \"nova-metadata-0\" (UID: \"0b17a506-eb92-4170-a2cf-0a563e427197\") " pod="openstack/nova-metadata-0" Dec 03 13:20:34 crc kubenswrapper[4986]: I1203 
13:20:34.442591 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b17a506-eb92-4170-a2cf-0a563e427197-logs\") pod \"nova-metadata-0\" (UID: \"0b17a506-eb92-4170-a2cf-0a563e427197\") " pod="openstack/nova-metadata-0" Dec 03 13:20:34 crc kubenswrapper[4986]: I1203 13:20:34.442995 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b17a506-eb92-4170-a2cf-0a563e427197-logs\") pod \"nova-metadata-0\" (UID: \"0b17a506-eb92-4170-a2cf-0a563e427197\") " pod="openstack/nova-metadata-0" Dec 03 13:20:34 crc kubenswrapper[4986]: I1203 13:20:34.447520 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b17a506-eb92-4170-a2cf-0a563e427197-config-data\") pod \"nova-metadata-0\" (UID: \"0b17a506-eb92-4170-a2cf-0a563e427197\") " pod="openstack/nova-metadata-0" Dec 03 13:20:34 crc kubenswrapper[4986]: I1203 13:20:34.448250 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b17a506-eb92-4170-a2cf-0a563e427197-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0b17a506-eb92-4170-a2cf-0a563e427197\") " pod="openstack/nova-metadata-0" Dec 03 13:20:34 crc kubenswrapper[4986]: I1203 13:20:34.459034 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b17a506-eb92-4170-a2cf-0a563e427197-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0b17a506-eb92-4170-a2cf-0a563e427197\") " pod="openstack/nova-metadata-0" Dec 03 13:20:34 crc kubenswrapper[4986]: I1203 13:20:34.464440 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndp92\" (UniqueName: \"kubernetes.io/projected/0b17a506-eb92-4170-a2cf-0a563e427197-kube-api-access-ndp92\") pod \"nova-metadata-0\" (UID: 
\"0b17a506-eb92-4170-a2cf-0a563e427197\") " pod="openstack/nova-metadata-0" Dec 03 13:20:34 crc kubenswrapper[4986]: I1203 13:20:34.552274 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 13:20:34 crc kubenswrapper[4986]: I1203 13:20:34.957036 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2fa86dd-b33a-48fe-a2bf-646498f0db5a" path="/var/lib/kubelet/pods/a2fa86dd-b33a-48fe-a2bf-646498f0db5a/volumes" Dec 03 13:20:34 crc kubenswrapper[4986]: I1203 13:20:34.957644 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e622eeb7-7b89-4328-a775-bc363cb7ebd5" path="/var/lib/kubelet/pods/e622eeb7-7b89-4328-a775-bc363cb7ebd5/volumes" Dec 03 13:20:34 crc kubenswrapper[4986]: I1203 13:20:34.996170 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 13:20:35 crc kubenswrapper[4986]: I1203 13:20:35.062371 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 03 13:20:35 crc kubenswrapper[4986]: I1203 13:20:35.167108 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0b17a506-eb92-4170-a2cf-0a563e427197","Type":"ContainerStarted","Data":"1140fadaa85eaee0fbdb38e87caea1583d6a3f7bebe8af59868412337b5fa1aa"} Dec 03 13:20:35 crc kubenswrapper[4986]: I1203 13:20:35.167157 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0b17a506-eb92-4170-a2cf-0a563e427197","Type":"ContainerStarted","Data":"e2abd383b3cefbc7f5006a716517e19f4972c34e48ae5b6904c66d35773208d6"} Dec 03 13:20:36 crc kubenswrapper[4986]: I1203 13:20:36.176857 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0b17a506-eb92-4170-a2cf-0a563e427197","Type":"ContainerStarted","Data":"d913854847519427e500aca4536d98af73991ac62f6d1f3106d55f98a92ad271"} Dec 03 13:20:36 crc 
kubenswrapper[4986]: I1203 13:20:36.198582 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.198562065 podStartE2EDuration="2.198562065s" podCreationTimestamp="2025-12-03 13:20:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:20:36.196666464 +0000 UTC m=+1495.663097655" watchObservedRunningTime="2025-12-03 13:20:36.198562065 +0000 UTC m=+1495.664993256" Dec 03 13:20:36 crc kubenswrapper[4986]: E1203 13:20:36.416383 4986 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e7e127cd7a8ce13540b1b6ee30de93233d9de7af7353c9c437cc58c951d5ca2e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 03 13:20:36 crc kubenswrapper[4986]: E1203 13:20:36.418163 4986 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e7e127cd7a8ce13540b1b6ee30de93233d9de7af7353c9c437cc58c951d5ca2e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 03 13:20:36 crc kubenswrapper[4986]: E1203 13:20:36.419326 4986 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e7e127cd7a8ce13540b1b6ee30de93233d9de7af7353c9c437cc58c951d5ca2e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 03 13:20:36 crc kubenswrapper[4986]: E1203 13:20:36.419355 4986 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" 
pod="openstack/nova-scheduler-0" podUID="42af691e-5613-44b4-b671-def2aca3dfa4" containerName="nova-scheduler-scheduler" Dec 03 13:20:37 crc kubenswrapper[4986]: I1203 13:20:37.794603 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 03 13:20:37 crc kubenswrapper[4986]: I1203 13:20:37.809407 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42af691e-5613-44b4-b671-def2aca3dfa4-combined-ca-bundle\") pod \"42af691e-5613-44b4-b671-def2aca3dfa4\" (UID: \"42af691e-5613-44b4-b671-def2aca3dfa4\") " Dec 03 13:20:37 crc kubenswrapper[4986]: I1203 13:20:37.809594 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42af691e-5613-44b4-b671-def2aca3dfa4-config-data\") pod \"42af691e-5613-44b4-b671-def2aca3dfa4\" (UID: \"42af691e-5613-44b4-b671-def2aca3dfa4\") " Dec 03 13:20:37 crc kubenswrapper[4986]: I1203 13:20:37.809625 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jv9tx\" (UniqueName: \"kubernetes.io/projected/42af691e-5613-44b4-b671-def2aca3dfa4-kube-api-access-jv9tx\") pod \"42af691e-5613-44b4-b671-def2aca3dfa4\" (UID: \"42af691e-5613-44b4-b671-def2aca3dfa4\") " Dec 03 13:20:37 crc kubenswrapper[4986]: I1203 13:20:37.846579 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42af691e-5613-44b4-b671-def2aca3dfa4-kube-api-access-jv9tx" (OuterVolumeSpecName: "kube-api-access-jv9tx") pod "42af691e-5613-44b4-b671-def2aca3dfa4" (UID: "42af691e-5613-44b4-b671-def2aca3dfa4"). InnerVolumeSpecName "kube-api-access-jv9tx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:20:37 crc kubenswrapper[4986]: E1203 13:20:37.848700 4986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42af691e-5613-44b4-b671-def2aca3dfa4-config-data podName:42af691e-5613-44b4-b671-def2aca3dfa4 nodeName:}" failed. No retries permitted until 2025-12-03 13:20:38.348676462 +0000 UTC m=+1497.815107653 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/42af691e-5613-44b4-b671-def2aca3dfa4-config-data") pod "42af691e-5613-44b4-b671-def2aca3dfa4" (UID: "42af691e-5613-44b4-b671-def2aca3dfa4") : error deleting /var/lib/kubelet/pods/42af691e-5613-44b4-b671-def2aca3dfa4/volume-subpaths: remove /var/lib/kubelet/pods/42af691e-5613-44b4-b671-def2aca3dfa4/volume-subpaths: no such file or directory Dec 03 13:20:37 crc kubenswrapper[4986]: I1203 13:20:37.854441 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42af691e-5613-44b4-b671-def2aca3dfa4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "42af691e-5613-44b4-b671-def2aca3dfa4" (UID: "42af691e-5613-44b4-b671-def2aca3dfa4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:20:37 crc kubenswrapper[4986]: I1203 13:20:37.912405 4986 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42af691e-5613-44b4-b671-def2aca3dfa4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 13:20:37 crc kubenswrapper[4986]: I1203 13:20:37.912440 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jv9tx\" (UniqueName: \"kubernetes.io/projected/42af691e-5613-44b4-b671-def2aca3dfa4-kube-api-access-jv9tx\") on node \"crc\" DevicePath \"\"" Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.130150 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.193071 4986 generic.go:334] "Generic (PLEG): container finished" podID="217928c1-298a-447c-b601-a153b73b7596" containerID="5f2078454d1c30138dc752a530fa12392c4d7036699f5fe710f69c968423b2f2" exitCode=0 Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.193123 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"217928c1-298a-447c-b601-a153b73b7596","Type":"ContainerDied","Data":"5f2078454d1c30138dc752a530fa12392c4d7036699f5fe710f69c968423b2f2"} Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.193147 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"217928c1-298a-447c-b601-a153b73b7596","Type":"ContainerDied","Data":"75254b60503df0953b0d42b916c6ef25a616808bdf5c50667b50854dd9f7abf6"} Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.193164 4986 scope.go:117] "RemoveContainer" containerID="5f2078454d1c30138dc752a530fa12392c4d7036699f5fe710f69c968423b2f2" Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.193352 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.196600 4986 generic.go:334] "Generic (PLEG): container finished" podID="42af691e-5613-44b4-b671-def2aca3dfa4" containerID="e7e127cd7a8ce13540b1b6ee30de93233d9de7af7353c9c437cc58c951d5ca2e" exitCode=0 Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.196644 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"42af691e-5613-44b4-b671-def2aca3dfa4","Type":"ContainerDied","Data":"e7e127cd7a8ce13540b1b6ee30de93233d9de7af7353c9c437cc58c951d5ca2e"} Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.196695 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"42af691e-5613-44b4-b671-def2aca3dfa4","Type":"ContainerDied","Data":"406c093524aff064b52dc7477cbeba42d9004bdf715ff6b259e6150469a3088d"} Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.196749 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.216972 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/217928c1-298a-447c-b601-a153b73b7596-combined-ca-bundle\") pod \"217928c1-298a-447c-b601-a153b73b7596\" (UID: \"217928c1-298a-447c-b601-a153b73b7596\") " Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.217089 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5k2f2\" (UniqueName: \"kubernetes.io/projected/217928c1-298a-447c-b601-a153b73b7596-kube-api-access-5k2f2\") pod \"217928c1-298a-447c-b601-a153b73b7596\" (UID: \"217928c1-298a-447c-b601-a153b73b7596\") " Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.217245 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/217928c1-298a-447c-b601-a153b73b7596-config-data\") pod \"217928c1-298a-447c-b601-a153b73b7596\" (UID: \"217928c1-298a-447c-b601-a153b73b7596\") " Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.217314 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/217928c1-298a-447c-b601-a153b73b7596-logs\") pod \"217928c1-298a-447c-b601-a153b73b7596\" (UID: \"217928c1-298a-447c-b601-a153b73b7596\") " Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.218181 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/217928c1-298a-447c-b601-a153b73b7596-logs" (OuterVolumeSpecName: "logs") pod "217928c1-298a-447c-b601-a153b73b7596" (UID: "217928c1-298a-447c-b601-a153b73b7596"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.223508 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/217928c1-298a-447c-b601-a153b73b7596-kube-api-access-5k2f2" (OuterVolumeSpecName: "kube-api-access-5k2f2") pod "217928c1-298a-447c-b601-a153b73b7596" (UID: "217928c1-298a-447c-b601-a153b73b7596"). InnerVolumeSpecName "kube-api-access-5k2f2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.236662 4986 scope.go:117] "RemoveContainer" containerID="3029112c7e97cac2b4783557a4faf7526d6cdec96480dfda248afbb8c48999a5" Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.247475 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/217928c1-298a-447c-b601-a153b73b7596-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "217928c1-298a-447c-b601-a153b73b7596" (UID: "217928c1-298a-447c-b601-a153b73b7596"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.253403 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/217928c1-298a-447c-b601-a153b73b7596-config-data" (OuterVolumeSpecName: "config-data") pod "217928c1-298a-447c-b601-a153b73b7596" (UID: "217928c1-298a-447c-b601-a153b73b7596"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.267464 4986 scope.go:117] "RemoveContainer" containerID="5f2078454d1c30138dc752a530fa12392c4d7036699f5fe710f69c968423b2f2" Dec 03 13:20:38 crc kubenswrapper[4986]: E1203 13:20:38.267993 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f2078454d1c30138dc752a530fa12392c4d7036699f5fe710f69c968423b2f2\": container with ID starting with 5f2078454d1c30138dc752a530fa12392c4d7036699f5fe710f69c968423b2f2 not found: ID does not exist" containerID="5f2078454d1c30138dc752a530fa12392c4d7036699f5fe710f69c968423b2f2" Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.268023 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f2078454d1c30138dc752a530fa12392c4d7036699f5fe710f69c968423b2f2"} err="failed to get container status \"5f2078454d1c30138dc752a530fa12392c4d7036699f5fe710f69c968423b2f2\": rpc error: code = NotFound desc = could not find container \"5f2078454d1c30138dc752a530fa12392c4d7036699f5fe710f69c968423b2f2\": container with ID starting with 5f2078454d1c30138dc752a530fa12392c4d7036699f5fe710f69c968423b2f2 not found: ID does not exist" Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.268044 4986 scope.go:117] "RemoveContainer" containerID="3029112c7e97cac2b4783557a4faf7526d6cdec96480dfda248afbb8c48999a5" Dec 03 13:20:38 crc kubenswrapper[4986]: E1203 13:20:38.268364 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3029112c7e97cac2b4783557a4faf7526d6cdec96480dfda248afbb8c48999a5\": container with ID starting with 3029112c7e97cac2b4783557a4faf7526d6cdec96480dfda248afbb8c48999a5 not found: ID does not exist" containerID="3029112c7e97cac2b4783557a4faf7526d6cdec96480dfda248afbb8c48999a5" Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.268386 
4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3029112c7e97cac2b4783557a4faf7526d6cdec96480dfda248afbb8c48999a5"} err="failed to get container status \"3029112c7e97cac2b4783557a4faf7526d6cdec96480dfda248afbb8c48999a5\": rpc error: code = NotFound desc = could not find container \"3029112c7e97cac2b4783557a4faf7526d6cdec96480dfda248afbb8c48999a5\": container with ID starting with 3029112c7e97cac2b4783557a4faf7526d6cdec96480dfda248afbb8c48999a5 not found: ID does not exist" Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.268401 4986 scope.go:117] "RemoveContainer" containerID="e7e127cd7a8ce13540b1b6ee30de93233d9de7af7353c9c437cc58c951d5ca2e" Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.284801 4986 scope.go:117] "RemoveContainer" containerID="e7e127cd7a8ce13540b1b6ee30de93233d9de7af7353c9c437cc58c951d5ca2e" Dec 03 13:20:38 crc kubenswrapper[4986]: E1203 13:20:38.285268 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7e127cd7a8ce13540b1b6ee30de93233d9de7af7353c9c437cc58c951d5ca2e\": container with ID starting with e7e127cd7a8ce13540b1b6ee30de93233d9de7af7353c9c437cc58c951d5ca2e not found: ID does not exist" containerID="e7e127cd7a8ce13540b1b6ee30de93233d9de7af7353c9c437cc58c951d5ca2e" Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.285381 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7e127cd7a8ce13540b1b6ee30de93233d9de7af7353c9c437cc58c951d5ca2e"} err="failed to get container status \"e7e127cd7a8ce13540b1b6ee30de93233d9de7af7353c9c437cc58c951d5ca2e\": rpc error: code = NotFound desc = could not find container \"e7e127cd7a8ce13540b1b6ee30de93233d9de7af7353c9c437cc58c951d5ca2e\": container with ID starting with e7e127cd7a8ce13540b1b6ee30de93233d9de7af7353c9c437cc58c951d5ca2e not found: ID does not exist" Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 
13:20:38.319095 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5k2f2\" (UniqueName: \"kubernetes.io/projected/217928c1-298a-447c-b601-a153b73b7596-kube-api-access-5k2f2\") on node \"crc\" DevicePath \"\"" Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.319133 4986 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/217928c1-298a-447c-b601-a153b73b7596-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.319147 4986 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/217928c1-298a-447c-b601-a153b73b7596-logs\") on node \"crc\" DevicePath \"\"" Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.319159 4986 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/217928c1-298a-447c-b601-a153b73b7596-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.420361 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42af691e-5613-44b4-b671-def2aca3dfa4-config-data\") pod \"42af691e-5613-44b4-b671-def2aca3dfa4\" (UID: \"42af691e-5613-44b4-b671-def2aca3dfa4\") " Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.426568 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42af691e-5613-44b4-b671-def2aca3dfa4-config-data" (OuterVolumeSpecName: "config-data") pod "42af691e-5613-44b4-b671-def2aca3dfa4" (UID: "42af691e-5613-44b4-b671-def2aca3dfa4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.522663 4986 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42af691e-5613-44b4-b671-def2aca3dfa4-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.562159 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.616394 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.670371 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.680203 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 13:20:38 crc kubenswrapper[4986]: E1203 13:20:38.680662 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42af691e-5613-44b4-b671-def2aca3dfa4" containerName="nova-scheduler-scheduler" Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.680680 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="42af691e-5613-44b4-b671-def2aca3dfa4" containerName="nova-scheduler-scheduler" Dec 03 13:20:38 crc kubenswrapper[4986]: E1203 13:20:38.680708 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="217928c1-298a-447c-b601-a153b73b7596" containerName="nova-api-log" Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.680716 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="217928c1-298a-447c-b601-a153b73b7596" containerName="nova-api-log" Dec 03 13:20:38 crc kubenswrapper[4986]: E1203 13:20:38.680734 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="217928c1-298a-447c-b601-a153b73b7596" containerName="nova-api-api" Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.680742 4986 
state_mem.go:107] "Deleted CPUSet assignment" podUID="217928c1-298a-447c-b601-a153b73b7596" containerName="nova-api-api" Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.680983 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="217928c1-298a-447c-b601-a153b73b7596" containerName="nova-api-api" Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.681016 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="217928c1-298a-447c-b601-a153b73b7596" containerName="nova-api-log" Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.681040 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="42af691e-5613-44b4-b671-def2aca3dfa4" containerName="nova-scheduler-scheduler" Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.681794 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.684214 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.724608 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.747469 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.749071 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.751605 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.754314 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vqz5\" (UniqueName: \"kubernetes.io/projected/96ddaceb-69ae-45af-92b2-511ccc92f2c2-kube-api-access-5vqz5\") pod \"nova-scheduler-0\" (UID: \"96ddaceb-69ae-45af-92b2-511ccc92f2c2\") " pod="openstack/nova-scheduler-0" Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.754395 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96ddaceb-69ae-45af-92b2-511ccc92f2c2-config-data\") pod \"nova-scheduler-0\" (UID: \"96ddaceb-69ae-45af-92b2-511ccc92f2c2\") " pod="openstack/nova-scheduler-0" Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.754447 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96ddaceb-69ae-45af-92b2-511ccc92f2c2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"96ddaceb-69ae-45af-92b2-511ccc92f2c2\") " pod="openstack/nova-scheduler-0" Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.770755 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.780015 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.856065 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vqz5\" (UniqueName: \"kubernetes.io/projected/96ddaceb-69ae-45af-92b2-511ccc92f2c2-kube-api-access-5vqz5\") pod \"nova-scheduler-0\" (UID: 
\"96ddaceb-69ae-45af-92b2-511ccc92f2c2\") " pod="openstack/nova-scheduler-0" Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.856155 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96ddaceb-69ae-45af-92b2-511ccc92f2c2-config-data\") pod \"nova-scheduler-0\" (UID: \"96ddaceb-69ae-45af-92b2-511ccc92f2c2\") " pod="openstack/nova-scheduler-0" Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.856202 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96ddaceb-69ae-45af-92b2-511ccc92f2c2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"96ddaceb-69ae-45af-92b2-511ccc92f2c2\") " pod="openstack/nova-scheduler-0" Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.856317 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adfcf304-e178-412f-8638-0ded9cfa72e1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"adfcf304-e178-412f-8638-0ded9cfa72e1\") " pod="openstack/nova-api-0" Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.856422 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2g7g\" (UniqueName: \"kubernetes.io/projected/adfcf304-e178-412f-8638-0ded9cfa72e1-kube-api-access-q2g7g\") pod \"nova-api-0\" (UID: \"adfcf304-e178-412f-8638-0ded9cfa72e1\") " pod="openstack/nova-api-0" Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.856468 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/adfcf304-e178-412f-8638-0ded9cfa72e1-logs\") pod \"nova-api-0\" (UID: \"adfcf304-e178-412f-8638-0ded9cfa72e1\") " pod="openstack/nova-api-0" Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.856529 4986 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adfcf304-e178-412f-8638-0ded9cfa72e1-config-data\") pod \"nova-api-0\" (UID: \"adfcf304-e178-412f-8638-0ded9cfa72e1\") " pod="openstack/nova-api-0" Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.861047 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96ddaceb-69ae-45af-92b2-511ccc92f2c2-config-data\") pod \"nova-scheduler-0\" (UID: \"96ddaceb-69ae-45af-92b2-511ccc92f2c2\") " pod="openstack/nova-scheduler-0" Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.861084 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96ddaceb-69ae-45af-92b2-511ccc92f2c2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"96ddaceb-69ae-45af-92b2-511ccc92f2c2\") " pod="openstack/nova-scheduler-0" Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.873185 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vqz5\" (UniqueName: \"kubernetes.io/projected/96ddaceb-69ae-45af-92b2-511ccc92f2c2-kube-api-access-5vqz5\") pod \"nova-scheduler-0\" (UID: \"96ddaceb-69ae-45af-92b2-511ccc92f2c2\") " pod="openstack/nova-scheduler-0" Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.953805 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="217928c1-298a-447c-b601-a153b73b7596" path="/var/lib/kubelet/pods/217928c1-298a-447c-b601-a153b73b7596/volumes" Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.954798 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42af691e-5613-44b4-b671-def2aca3dfa4" path="/var/lib/kubelet/pods/42af691e-5613-44b4-b671-def2aca3dfa4/volumes" Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.958416 4986 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-q2g7g\" (UniqueName: \"kubernetes.io/projected/adfcf304-e178-412f-8638-0ded9cfa72e1-kube-api-access-q2g7g\") pod \"nova-api-0\" (UID: \"adfcf304-e178-412f-8638-0ded9cfa72e1\") " pod="openstack/nova-api-0" Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.958503 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/adfcf304-e178-412f-8638-0ded9cfa72e1-logs\") pod \"nova-api-0\" (UID: \"adfcf304-e178-412f-8638-0ded9cfa72e1\") " pod="openstack/nova-api-0" Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.958594 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adfcf304-e178-412f-8638-0ded9cfa72e1-config-data\") pod \"nova-api-0\" (UID: \"adfcf304-e178-412f-8638-0ded9cfa72e1\") " pod="openstack/nova-api-0" Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.958879 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adfcf304-e178-412f-8638-0ded9cfa72e1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"adfcf304-e178-412f-8638-0ded9cfa72e1\") " pod="openstack/nova-api-0" Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.958924 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/adfcf304-e178-412f-8638-0ded9cfa72e1-logs\") pod \"nova-api-0\" (UID: \"adfcf304-e178-412f-8638-0ded9cfa72e1\") " pod="openstack/nova-api-0" Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.962997 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adfcf304-e178-412f-8638-0ded9cfa72e1-config-data\") pod \"nova-api-0\" (UID: \"adfcf304-e178-412f-8638-0ded9cfa72e1\") " pod="openstack/nova-api-0" Dec 03 13:20:38 crc 
kubenswrapper[4986]: I1203 13:20:38.965645 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adfcf304-e178-412f-8638-0ded9cfa72e1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"adfcf304-e178-412f-8638-0ded9cfa72e1\") " pod="openstack/nova-api-0" Dec 03 13:20:38 crc kubenswrapper[4986]: I1203 13:20:38.989131 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2g7g\" (UniqueName: \"kubernetes.io/projected/adfcf304-e178-412f-8638-0ded9cfa72e1-kube-api-access-q2g7g\") pod \"nova-api-0\" (UID: \"adfcf304-e178-412f-8638-0ded9cfa72e1\") " pod="openstack/nova-api-0" Dec 03 13:20:39 crc kubenswrapper[4986]: I1203 13:20:39.001695 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 03 13:20:39 crc kubenswrapper[4986]: I1203 13:20:39.069764 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 03 13:20:39 crc kubenswrapper[4986]: I1203 13:20:39.504832 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 13:20:39 crc kubenswrapper[4986]: W1203 13:20:39.512986 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod96ddaceb_69ae_45af_92b2_511ccc92f2c2.slice/crio-c1c4c6b75559c3bdbfaeac153c4fb19b0b84b00e072a611b4f0f5071b9fdb64a WatchSource:0}: Error finding container c1c4c6b75559c3bdbfaeac153c4fb19b0b84b00e072a611b4f0f5071b9fdb64a: Status 404 returned error can't find the container with id c1c4c6b75559c3bdbfaeac153c4fb19b0b84b00e072a611b4f0f5071b9fdb64a Dec 03 13:20:39 crc kubenswrapper[4986]: I1203 13:20:39.552845 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 03 13:20:39 crc kubenswrapper[4986]: I1203 13:20:39.552897 4986 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 03 13:20:39 crc kubenswrapper[4986]: I1203 13:20:39.586188 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 03 13:20:39 crc kubenswrapper[4986]: W1203 13:20:39.588213 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podadfcf304_e178_412f_8638_0ded9cfa72e1.slice/crio-aa0e65c0d25e8f6c1c48f31a78f537b83e9c8970a0f8ebe38833f37003d15cb4 WatchSource:0}: Error finding container aa0e65c0d25e8f6c1c48f31a78f537b83e9c8970a0f8ebe38833f37003d15cb4: Status 404 returned error can't find the container with id aa0e65c0d25e8f6c1c48f31a78f537b83e9c8970a0f8ebe38833f37003d15cb4 Dec 03 13:20:40 crc kubenswrapper[4986]: I1203 13:20:40.222732 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"adfcf304-e178-412f-8638-0ded9cfa72e1","Type":"ContainerStarted","Data":"742e27cb1a2f9237b83d1ea173f4846a3035db2c64d147b0ee5283bd50dd482c"} Dec 03 13:20:40 crc kubenswrapper[4986]: I1203 13:20:40.222780 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"adfcf304-e178-412f-8638-0ded9cfa72e1","Type":"ContainerStarted","Data":"542ac2d391719eae755e99a41a1ef243d17e1238e8f41435a7bf6c920266ace3"} Dec 03 13:20:40 crc kubenswrapper[4986]: I1203 13:20:40.222793 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"adfcf304-e178-412f-8638-0ded9cfa72e1","Type":"ContainerStarted","Data":"aa0e65c0d25e8f6c1c48f31a78f537b83e9c8970a0f8ebe38833f37003d15cb4"} Dec 03 13:20:40 crc kubenswrapper[4986]: I1203 13:20:40.227738 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"96ddaceb-69ae-45af-92b2-511ccc92f2c2","Type":"ContainerStarted","Data":"3bd9833a0c6c9a11353bfaf94bb93ec4574d2fa5b6f4fb0282fa315c48f53387"} Dec 03 13:20:40 crc kubenswrapper[4986]: I1203 
13:20:40.227777 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"96ddaceb-69ae-45af-92b2-511ccc92f2c2","Type":"ContainerStarted","Data":"c1c4c6b75559c3bdbfaeac153c4fb19b0b84b00e072a611b4f0f5071b9fdb64a"} Dec 03 13:20:40 crc kubenswrapper[4986]: I1203 13:20:40.247689 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.247672307 podStartE2EDuration="2.247672307s" podCreationTimestamp="2025-12-03 13:20:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:20:40.238502219 +0000 UTC m=+1499.704933430" watchObservedRunningTime="2025-12-03 13:20:40.247672307 +0000 UTC m=+1499.714103498" Dec 03 13:20:40 crc kubenswrapper[4986]: I1203 13:20:40.260553 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.260526214 podStartE2EDuration="2.260526214s" podCreationTimestamp="2025-12-03 13:20:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:20:40.253639168 +0000 UTC m=+1499.720070379" watchObservedRunningTime="2025-12-03 13:20:40.260526214 +0000 UTC m=+1499.726957405" Dec 03 13:20:43 crc kubenswrapper[4986]: I1203 13:20:43.493114 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Dec 03 13:20:44 crc kubenswrapper[4986]: I1203 13:20:44.002410 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 03 13:20:44 crc kubenswrapper[4986]: I1203 13:20:44.553002 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 03 13:20:44 crc kubenswrapper[4986]: I1203 13:20:44.553102 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/nova-metadata-0" Dec 03 13:20:45 crc kubenswrapper[4986]: I1203 13:20:45.564441 4986 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0b17a506-eb92-4170-a2cf-0a563e427197" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.191:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 03 13:20:45 crc kubenswrapper[4986]: I1203 13:20:45.564442 4986 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0b17a506-eb92-4170-a2cf-0a563e427197" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.191:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 03 13:20:49 crc kubenswrapper[4986]: I1203 13:20:49.002777 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 03 13:20:49 crc kubenswrapper[4986]: I1203 13:20:49.029013 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 03 13:20:49 crc kubenswrapper[4986]: I1203 13:20:49.070813 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 03 13:20:49 crc kubenswrapper[4986]: I1203 13:20:49.070880 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 03 13:20:49 crc kubenswrapper[4986]: I1203 13:20:49.354643 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 03 13:20:50 crc kubenswrapper[4986]: I1203 13:20:50.154770 4986 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="adfcf304-e178-412f-8638-0ded9cfa72e1" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.193:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting 
headers)" Dec 03 13:20:50 crc kubenswrapper[4986]: I1203 13:20:50.154986 4986 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="adfcf304-e178-412f-8638-0ded9cfa72e1" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.193:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 03 13:20:54 crc kubenswrapper[4986]: I1203 13:20:54.559145 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 03 13:20:54 crc kubenswrapper[4986]: I1203 13:20:54.562628 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 03 13:20:54 crc kubenswrapper[4986]: I1203 13:20:54.568495 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 03 13:20:55 crc kubenswrapper[4986]: I1203 13:20:55.392644 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 03 13:20:56 crc kubenswrapper[4986]: I1203 13:20:56.257601 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 03 13:20:56 crc kubenswrapper[4986]: I1203 13:20:56.320746 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a5ad857-b212-4ad2-8ddd-89ec8dce1846-config-data\") pod \"1a5ad857-b212-4ad2-8ddd-89ec8dce1846\" (UID: \"1a5ad857-b212-4ad2-8ddd-89ec8dce1846\") " Dec 03 13:20:56 crc kubenswrapper[4986]: I1203 13:20:56.320907 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggpv7\" (UniqueName: \"kubernetes.io/projected/1a5ad857-b212-4ad2-8ddd-89ec8dce1846-kube-api-access-ggpv7\") pod \"1a5ad857-b212-4ad2-8ddd-89ec8dce1846\" (UID: \"1a5ad857-b212-4ad2-8ddd-89ec8dce1846\") " Dec 03 13:20:56 crc kubenswrapper[4986]: I1203 13:20:56.321015 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a5ad857-b212-4ad2-8ddd-89ec8dce1846-combined-ca-bundle\") pod \"1a5ad857-b212-4ad2-8ddd-89ec8dce1846\" (UID: \"1a5ad857-b212-4ad2-8ddd-89ec8dce1846\") " Dec 03 13:20:56 crc kubenswrapper[4986]: I1203 13:20:56.326532 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a5ad857-b212-4ad2-8ddd-89ec8dce1846-kube-api-access-ggpv7" (OuterVolumeSpecName: "kube-api-access-ggpv7") pod "1a5ad857-b212-4ad2-8ddd-89ec8dce1846" (UID: "1a5ad857-b212-4ad2-8ddd-89ec8dce1846"). InnerVolumeSpecName "kube-api-access-ggpv7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:20:56 crc kubenswrapper[4986]: I1203 13:20:56.346724 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a5ad857-b212-4ad2-8ddd-89ec8dce1846-config-data" (OuterVolumeSpecName: "config-data") pod "1a5ad857-b212-4ad2-8ddd-89ec8dce1846" (UID: "1a5ad857-b212-4ad2-8ddd-89ec8dce1846"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:20:56 crc kubenswrapper[4986]: I1203 13:20:56.347219 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a5ad857-b212-4ad2-8ddd-89ec8dce1846-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1a5ad857-b212-4ad2-8ddd-89ec8dce1846" (UID: "1a5ad857-b212-4ad2-8ddd-89ec8dce1846"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:20:56 crc kubenswrapper[4986]: I1203 13:20:56.394578 4986 generic.go:334] "Generic (PLEG): container finished" podID="1a5ad857-b212-4ad2-8ddd-89ec8dce1846" containerID="7cc9f81c021b2810e432d03ca3ef05cd4b9c0f72bbb4476e445fca6e97098b27" exitCode=137 Dec 03 13:20:56 crc kubenswrapper[4986]: I1203 13:20:56.394621 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 03 13:20:56 crc kubenswrapper[4986]: I1203 13:20:56.394658 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1a5ad857-b212-4ad2-8ddd-89ec8dce1846","Type":"ContainerDied","Data":"7cc9f81c021b2810e432d03ca3ef05cd4b9c0f72bbb4476e445fca6e97098b27"} Dec 03 13:20:56 crc kubenswrapper[4986]: I1203 13:20:56.394706 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1a5ad857-b212-4ad2-8ddd-89ec8dce1846","Type":"ContainerDied","Data":"16259bd08a3c4235d9f314c5b9263e0fc83e039ed7502bc9617aa8fa49e2490e"} Dec 03 13:20:56 crc kubenswrapper[4986]: I1203 13:20:56.394743 4986 scope.go:117] "RemoveContainer" containerID="7cc9f81c021b2810e432d03ca3ef05cd4b9c0f72bbb4476e445fca6e97098b27" Dec 03 13:20:56 crc kubenswrapper[4986]: I1203 13:20:56.423108 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggpv7\" (UniqueName: \"kubernetes.io/projected/1a5ad857-b212-4ad2-8ddd-89ec8dce1846-kube-api-access-ggpv7\") on node 
\"crc\" DevicePath \"\"" Dec 03 13:20:56 crc kubenswrapper[4986]: I1203 13:20:56.423147 4986 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a5ad857-b212-4ad2-8ddd-89ec8dce1846-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 13:20:56 crc kubenswrapper[4986]: I1203 13:20:56.423162 4986 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a5ad857-b212-4ad2-8ddd-89ec8dce1846-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 13:20:56 crc kubenswrapper[4986]: I1203 13:20:56.435553 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 03 13:20:56 crc kubenswrapper[4986]: I1203 13:20:56.437629 4986 scope.go:117] "RemoveContainer" containerID="7cc9f81c021b2810e432d03ca3ef05cd4b9c0f72bbb4476e445fca6e97098b27" Dec 03 13:20:56 crc kubenswrapper[4986]: E1203 13:20:56.438227 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cc9f81c021b2810e432d03ca3ef05cd4b9c0f72bbb4476e445fca6e97098b27\": container with ID starting with 7cc9f81c021b2810e432d03ca3ef05cd4b9c0f72bbb4476e445fca6e97098b27 not found: ID does not exist" containerID="7cc9f81c021b2810e432d03ca3ef05cd4b9c0f72bbb4476e445fca6e97098b27" Dec 03 13:20:56 crc kubenswrapper[4986]: I1203 13:20:56.438416 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cc9f81c021b2810e432d03ca3ef05cd4b9c0f72bbb4476e445fca6e97098b27"} err="failed to get container status \"7cc9f81c021b2810e432d03ca3ef05cd4b9c0f72bbb4476e445fca6e97098b27\": rpc error: code = NotFound desc = could not find container \"7cc9f81c021b2810e432d03ca3ef05cd4b9c0f72bbb4476e445fca6e97098b27\": container with ID starting with 7cc9f81c021b2810e432d03ca3ef05cd4b9c0f72bbb4476e445fca6e97098b27 not found: ID does not exist" Dec 03 13:20:56 crc kubenswrapper[4986]: I1203 
13:20:56.464620 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 03 13:20:56 crc kubenswrapper[4986]: I1203 13:20:56.481693 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 03 13:20:56 crc kubenswrapper[4986]: E1203 13:20:56.482134 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a5ad857-b212-4ad2-8ddd-89ec8dce1846" containerName="nova-cell1-novncproxy-novncproxy" Dec 03 13:20:56 crc kubenswrapper[4986]: I1203 13:20:56.482156 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a5ad857-b212-4ad2-8ddd-89ec8dce1846" containerName="nova-cell1-novncproxy-novncproxy" Dec 03 13:20:56 crc kubenswrapper[4986]: I1203 13:20:56.482488 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a5ad857-b212-4ad2-8ddd-89ec8dce1846" containerName="nova-cell1-novncproxy-novncproxy" Dec 03 13:20:56 crc kubenswrapper[4986]: I1203 13:20:56.483297 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 03 13:20:56 crc kubenswrapper[4986]: I1203 13:20:56.487794 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 03 13:20:56 crc kubenswrapper[4986]: I1203 13:20:56.487887 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Dec 03 13:20:56 crc kubenswrapper[4986]: I1203 13:20:56.488051 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Dec 03 13:20:56 crc kubenswrapper[4986]: I1203 13:20:56.510848 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 03 13:20:56 crc kubenswrapper[4986]: I1203 13:20:56.625775 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b4f56d6-d999-4f05-ace4-61b79327feec-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4b4f56d6-d999-4f05-ace4-61b79327feec\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 13:20:56 crc kubenswrapper[4986]: I1203 13:20:56.625859 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9csjn\" (UniqueName: \"kubernetes.io/projected/4b4f56d6-d999-4f05-ace4-61b79327feec-kube-api-access-9csjn\") pod \"nova-cell1-novncproxy-0\" (UID: \"4b4f56d6-d999-4f05-ace4-61b79327feec\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 13:20:56 crc kubenswrapper[4986]: I1203 13:20:56.626014 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b4f56d6-d999-4f05-ace4-61b79327feec-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4b4f56d6-d999-4f05-ace4-61b79327feec\") " pod="openstack/nova-cell1-novncproxy-0" Dec 
03 13:20:56 crc kubenswrapper[4986]: I1203 13:20:56.626214 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b4f56d6-d999-4f05-ace4-61b79327feec-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4b4f56d6-d999-4f05-ace4-61b79327feec\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 13:20:56 crc kubenswrapper[4986]: I1203 13:20:56.626237 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b4f56d6-d999-4f05-ace4-61b79327feec-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4b4f56d6-d999-4f05-ace4-61b79327feec\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 13:20:56 crc kubenswrapper[4986]: I1203 13:20:56.728894 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b4f56d6-d999-4f05-ace4-61b79327feec-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4b4f56d6-d999-4f05-ace4-61b79327feec\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 13:20:56 crc kubenswrapper[4986]: I1203 13:20:56.729373 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9csjn\" (UniqueName: \"kubernetes.io/projected/4b4f56d6-d999-4f05-ace4-61b79327feec-kube-api-access-9csjn\") pod \"nova-cell1-novncproxy-0\" (UID: \"4b4f56d6-d999-4f05-ace4-61b79327feec\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 13:20:56 crc kubenswrapper[4986]: I1203 13:20:56.729419 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b4f56d6-d999-4f05-ace4-61b79327feec-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4b4f56d6-d999-4f05-ace4-61b79327feec\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 13:20:56 crc 
kubenswrapper[4986]: I1203 13:20:56.729486 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b4f56d6-d999-4f05-ace4-61b79327feec-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4b4f56d6-d999-4f05-ace4-61b79327feec\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 13:20:56 crc kubenswrapper[4986]: I1203 13:20:56.729506 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b4f56d6-d999-4f05-ace4-61b79327feec-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4b4f56d6-d999-4f05-ace4-61b79327feec\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 13:20:56 crc kubenswrapper[4986]: I1203 13:20:56.734510 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b4f56d6-d999-4f05-ace4-61b79327feec-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4b4f56d6-d999-4f05-ace4-61b79327feec\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 13:20:56 crc kubenswrapper[4986]: I1203 13:20:56.734537 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b4f56d6-d999-4f05-ace4-61b79327feec-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4b4f56d6-d999-4f05-ace4-61b79327feec\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 13:20:56 crc kubenswrapper[4986]: I1203 13:20:56.737063 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b4f56d6-d999-4f05-ace4-61b79327feec-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4b4f56d6-d999-4f05-ace4-61b79327feec\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 13:20:56 crc kubenswrapper[4986]: I1203 13:20:56.744630 4986 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b4f56d6-d999-4f05-ace4-61b79327feec-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4b4f56d6-d999-4f05-ace4-61b79327feec\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 13:20:56 crc kubenswrapper[4986]: I1203 13:20:56.751138 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9csjn\" (UniqueName: \"kubernetes.io/projected/4b4f56d6-d999-4f05-ace4-61b79327feec-kube-api-access-9csjn\") pod \"nova-cell1-novncproxy-0\" (UID: \"4b4f56d6-d999-4f05-ace4-61b79327feec\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 13:20:56 crc kubenswrapper[4986]: I1203 13:20:56.811574 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 03 13:20:56 crc kubenswrapper[4986]: I1203 13:20:56.959832 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a5ad857-b212-4ad2-8ddd-89ec8dce1846" path="/var/lib/kubelet/pods/1a5ad857-b212-4ad2-8ddd-89ec8dce1846/volumes" Dec 03 13:20:57 crc kubenswrapper[4986]: I1203 13:20:57.256699 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 03 13:20:57 crc kubenswrapper[4986]: W1203 13:20:57.262664 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b4f56d6_d999_4f05_ace4_61b79327feec.slice/crio-c42a350e097a5bb08cdd160f3127a6a8f475a736b5ee58c5d2af21afb0274a2b WatchSource:0}: Error finding container c42a350e097a5bb08cdd160f3127a6a8f475a736b5ee58c5d2af21afb0274a2b: Status 404 returned error can't find the container with id c42a350e097a5bb08cdd160f3127a6a8f475a736b5ee58c5d2af21afb0274a2b Dec 03 13:20:57 crc kubenswrapper[4986]: I1203 13:20:57.413032 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"4b4f56d6-d999-4f05-ace4-61b79327feec","Type":"ContainerStarted","Data":"c42a350e097a5bb08cdd160f3127a6a8f475a736b5ee58c5d2af21afb0274a2b"} Dec 03 13:20:58 crc kubenswrapper[4986]: I1203 13:20:58.422893 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4b4f56d6-d999-4f05-ace4-61b79327feec","Type":"ContainerStarted","Data":"627394ca143fbf4acdd7669e0a4eceff32e8ca3b198d22a60e9c7b60e3a91af8"} Dec 03 13:20:58 crc kubenswrapper[4986]: I1203 13:20:58.458604 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.458579234 podStartE2EDuration="2.458579234s" podCreationTimestamp="2025-12-03 13:20:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:20:58.443044425 +0000 UTC m=+1517.909475626" watchObservedRunningTime="2025-12-03 13:20:58.458579234 +0000 UTC m=+1517.925010425" Dec 03 13:20:58 crc kubenswrapper[4986]: I1203 13:20:58.708953 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-c7s6k"] Dec 03 13:20:58 crc kubenswrapper[4986]: I1203 13:20:58.711431 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c7s6k" Dec 03 13:20:58 crc kubenswrapper[4986]: I1203 13:20:58.724498 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c7s6k"] Dec 03 13:20:58 crc kubenswrapper[4986]: I1203 13:20:58.779016 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4f4a308-6339-4346-98f6-562cc445769e-catalog-content\") pod \"redhat-marketplace-c7s6k\" (UID: \"d4f4a308-6339-4346-98f6-562cc445769e\") " pod="openshift-marketplace/redhat-marketplace-c7s6k" Dec 03 13:20:58 crc kubenswrapper[4986]: I1203 13:20:58.779151 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsvtz\" (UniqueName: \"kubernetes.io/projected/d4f4a308-6339-4346-98f6-562cc445769e-kube-api-access-rsvtz\") pod \"redhat-marketplace-c7s6k\" (UID: \"d4f4a308-6339-4346-98f6-562cc445769e\") " pod="openshift-marketplace/redhat-marketplace-c7s6k" Dec 03 13:20:58 crc kubenswrapper[4986]: I1203 13:20:58.779241 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4f4a308-6339-4346-98f6-562cc445769e-utilities\") pod \"redhat-marketplace-c7s6k\" (UID: \"d4f4a308-6339-4346-98f6-562cc445769e\") " pod="openshift-marketplace/redhat-marketplace-c7s6k" Dec 03 13:20:58 crc kubenswrapper[4986]: I1203 13:20:58.880723 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4f4a308-6339-4346-98f6-562cc445769e-catalog-content\") pod \"redhat-marketplace-c7s6k\" (UID: \"d4f4a308-6339-4346-98f6-562cc445769e\") " pod="openshift-marketplace/redhat-marketplace-c7s6k" Dec 03 13:20:58 crc kubenswrapper[4986]: I1203 13:20:58.880817 4986 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-rsvtz\" (UniqueName: \"kubernetes.io/projected/d4f4a308-6339-4346-98f6-562cc445769e-kube-api-access-rsvtz\") pod \"redhat-marketplace-c7s6k\" (UID: \"d4f4a308-6339-4346-98f6-562cc445769e\") " pod="openshift-marketplace/redhat-marketplace-c7s6k" Dec 03 13:20:58 crc kubenswrapper[4986]: I1203 13:20:58.880880 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4f4a308-6339-4346-98f6-562cc445769e-utilities\") pod \"redhat-marketplace-c7s6k\" (UID: \"d4f4a308-6339-4346-98f6-562cc445769e\") " pod="openshift-marketplace/redhat-marketplace-c7s6k" Dec 03 13:20:58 crc kubenswrapper[4986]: I1203 13:20:58.881313 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4f4a308-6339-4346-98f6-562cc445769e-catalog-content\") pod \"redhat-marketplace-c7s6k\" (UID: \"d4f4a308-6339-4346-98f6-562cc445769e\") " pod="openshift-marketplace/redhat-marketplace-c7s6k" Dec 03 13:20:58 crc kubenswrapper[4986]: I1203 13:20:58.881327 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4f4a308-6339-4346-98f6-562cc445769e-utilities\") pod \"redhat-marketplace-c7s6k\" (UID: \"d4f4a308-6339-4346-98f6-562cc445769e\") " pod="openshift-marketplace/redhat-marketplace-c7s6k" Dec 03 13:20:58 crc kubenswrapper[4986]: I1203 13:20:58.912109 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsvtz\" (UniqueName: \"kubernetes.io/projected/d4f4a308-6339-4346-98f6-562cc445769e-kube-api-access-rsvtz\") pod \"redhat-marketplace-c7s6k\" (UID: \"d4f4a308-6339-4346-98f6-562cc445769e\") " pod="openshift-marketplace/redhat-marketplace-c7s6k" Dec 03 13:20:59 crc kubenswrapper[4986]: I1203 13:20:59.034940 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c7s6k" Dec 03 13:20:59 crc kubenswrapper[4986]: I1203 13:20:59.075759 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 03 13:20:59 crc kubenswrapper[4986]: I1203 13:20:59.076496 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 03 13:20:59 crc kubenswrapper[4986]: I1203 13:20:59.076897 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 03 13:20:59 crc kubenswrapper[4986]: I1203 13:20:59.089992 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 03 13:20:59 crc kubenswrapper[4986]: I1203 13:20:59.431440 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 03 13:20:59 crc kubenswrapper[4986]: I1203 13:20:59.436407 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 03 13:20:59 crc kubenswrapper[4986]: I1203 13:20:59.594114 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c7s6k"] Dec 03 13:20:59 crc kubenswrapper[4986]: I1203 13:20:59.765112 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-gkvxp"] Dec 03 13:20:59 crc kubenswrapper[4986]: I1203 13:20:59.785603 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-gkvxp" Dec 03 13:20:59 crc kubenswrapper[4986]: I1203 13:20:59.811744 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-gkvxp"] Dec 03 13:20:59 crc kubenswrapper[4986]: I1203 13:20:59.909497 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23dca844-dc9b-4e62-b6e0-1e4da1adc56c-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-gkvxp\" (UID: \"23dca844-dc9b-4e62-b6e0-1e4da1adc56c\") " pod="openstack/dnsmasq-dns-59cf4bdb65-gkvxp" Dec 03 13:20:59 crc kubenswrapper[4986]: I1203 13:20:59.909813 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23dca844-dc9b-4e62-b6e0-1e4da1adc56c-config\") pod \"dnsmasq-dns-59cf4bdb65-gkvxp\" (UID: \"23dca844-dc9b-4e62-b6e0-1e4da1adc56c\") " pod="openstack/dnsmasq-dns-59cf4bdb65-gkvxp" Dec 03 13:20:59 crc kubenswrapper[4986]: I1203 13:20:59.909884 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23dca844-dc9b-4e62-b6e0-1e4da1adc56c-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-gkvxp\" (UID: \"23dca844-dc9b-4e62-b6e0-1e4da1adc56c\") " pod="openstack/dnsmasq-dns-59cf4bdb65-gkvxp" Dec 03 13:20:59 crc kubenswrapper[4986]: I1203 13:20:59.909913 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjl4g\" (UniqueName: \"kubernetes.io/projected/23dca844-dc9b-4e62-b6e0-1e4da1adc56c-kube-api-access-gjl4g\") pod \"dnsmasq-dns-59cf4bdb65-gkvxp\" (UID: \"23dca844-dc9b-4e62-b6e0-1e4da1adc56c\") " pod="openstack/dnsmasq-dns-59cf4bdb65-gkvxp" Dec 03 13:20:59 crc kubenswrapper[4986]: I1203 13:20:59.909945 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23dca844-dc9b-4e62-b6e0-1e4da1adc56c-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-gkvxp\" (UID: \"23dca844-dc9b-4e62-b6e0-1e4da1adc56c\") " pod="openstack/dnsmasq-dns-59cf4bdb65-gkvxp" Dec 03 13:20:59 crc kubenswrapper[4986]: I1203 13:20:59.909966 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23dca844-dc9b-4e62-b6e0-1e4da1adc56c-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-gkvxp\" (UID: \"23dca844-dc9b-4e62-b6e0-1e4da1adc56c\") " pod="openstack/dnsmasq-dns-59cf4bdb65-gkvxp" Dec 03 13:21:00 crc kubenswrapper[4986]: I1203 13:21:00.011655 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23dca844-dc9b-4e62-b6e0-1e4da1adc56c-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-gkvxp\" (UID: \"23dca844-dc9b-4e62-b6e0-1e4da1adc56c\") " pod="openstack/dnsmasq-dns-59cf4bdb65-gkvxp" Dec 03 13:21:00 crc kubenswrapper[4986]: I1203 13:21:00.012961 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjl4g\" (UniqueName: \"kubernetes.io/projected/23dca844-dc9b-4e62-b6e0-1e4da1adc56c-kube-api-access-gjl4g\") pod \"dnsmasq-dns-59cf4bdb65-gkvxp\" (UID: \"23dca844-dc9b-4e62-b6e0-1e4da1adc56c\") " pod="openstack/dnsmasq-dns-59cf4bdb65-gkvxp" Dec 03 13:21:00 crc kubenswrapper[4986]: I1203 13:21:00.013381 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23dca844-dc9b-4e62-b6e0-1e4da1adc56c-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-gkvxp\" (UID: \"23dca844-dc9b-4e62-b6e0-1e4da1adc56c\") " pod="openstack/dnsmasq-dns-59cf4bdb65-gkvxp" Dec 03 13:21:00 crc kubenswrapper[4986]: I1203 13:21:00.013435 4986 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23dca844-dc9b-4e62-b6e0-1e4da1adc56c-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-gkvxp\" (UID: \"23dca844-dc9b-4e62-b6e0-1e4da1adc56c\") " pod="openstack/dnsmasq-dns-59cf4bdb65-gkvxp" Dec 03 13:21:00 crc kubenswrapper[4986]: I1203 13:21:00.013651 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23dca844-dc9b-4e62-b6e0-1e4da1adc56c-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-gkvxp\" (UID: \"23dca844-dc9b-4e62-b6e0-1e4da1adc56c\") " pod="openstack/dnsmasq-dns-59cf4bdb65-gkvxp" Dec 03 13:21:00 crc kubenswrapper[4986]: I1203 13:21:00.013702 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23dca844-dc9b-4e62-b6e0-1e4da1adc56c-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-gkvxp\" (UID: \"23dca844-dc9b-4e62-b6e0-1e4da1adc56c\") " pod="openstack/dnsmasq-dns-59cf4bdb65-gkvxp" Dec 03 13:21:00 crc kubenswrapper[4986]: I1203 13:21:00.013720 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23dca844-dc9b-4e62-b6e0-1e4da1adc56c-config\") pod \"dnsmasq-dns-59cf4bdb65-gkvxp\" (UID: \"23dca844-dc9b-4e62-b6e0-1e4da1adc56c\") " pod="openstack/dnsmasq-dns-59cf4bdb65-gkvxp" Dec 03 13:21:00 crc kubenswrapper[4986]: I1203 13:21:00.014550 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23dca844-dc9b-4e62-b6e0-1e4da1adc56c-config\") pod \"dnsmasq-dns-59cf4bdb65-gkvxp\" (UID: \"23dca844-dc9b-4e62-b6e0-1e4da1adc56c\") " pod="openstack/dnsmasq-dns-59cf4bdb65-gkvxp" Dec 03 13:21:00 crc kubenswrapper[4986]: I1203 13:21:00.015076 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/23dca844-dc9b-4e62-b6e0-1e4da1adc56c-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-gkvxp\" (UID: \"23dca844-dc9b-4e62-b6e0-1e4da1adc56c\") " pod="openstack/dnsmasq-dns-59cf4bdb65-gkvxp" Dec 03 13:21:00 crc kubenswrapper[4986]: I1203 13:21:00.015402 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23dca844-dc9b-4e62-b6e0-1e4da1adc56c-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-gkvxp\" (UID: \"23dca844-dc9b-4e62-b6e0-1e4da1adc56c\") " pod="openstack/dnsmasq-dns-59cf4bdb65-gkvxp" Dec 03 13:21:00 crc kubenswrapper[4986]: I1203 13:21:00.015965 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23dca844-dc9b-4e62-b6e0-1e4da1adc56c-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-gkvxp\" (UID: \"23dca844-dc9b-4e62-b6e0-1e4da1adc56c\") " pod="openstack/dnsmasq-dns-59cf4bdb65-gkvxp" Dec 03 13:21:00 crc kubenswrapper[4986]: I1203 13:21:00.031495 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjl4g\" (UniqueName: \"kubernetes.io/projected/23dca844-dc9b-4e62-b6e0-1e4da1adc56c-kube-api-access-gjl4g\") pod \"dnsmasq-dns-59cf4bdb65-gkvxp\" (UID: \"23dca844-dc9b-4e62-b6e0-1e4da1adc56c\") " pod="openstack/dnsmasq-dns-59cf4bdb65-gkvxp" Dec 03 13:21:00 crc kubenswrapper[4986]: I1203 13:21:00.148830 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-gkvxp" Dec 03 13:21:00 crc kubenswrapper[4986]: I1203 13:21:00.443449 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c7s6k" event={"ID":"d4f4a308-6339-4346-98f6-562cc445769e","Type":"ContainerDied","Data":"5ece9dcc073b1f1247e0fabcd78dbd8e1ca8fbc1ee47abdd9fbaf801bd9a03f5"} Dec 03 13:21:00 crc kubenswrapper[4986]: I1203 13:21:00.443331 4986 generic.go:334] "Generic (PLEG): container finished" podID="d4f4a308-6339-4346-98f6-562cc445769e" containerID="5ece9dcc073b1f1247e0fabcd78dbd8e1ca8fbc1ee47abdd9fbaf801bd9a03f5" exitCode=0 Dec 03 13:21:00 crc kubenswrapper[4986]: I1203 13:21:00.444245 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c7s6k" event={"ID":"d4f4a308-6339-4346-98f6-562cc445769e","Type":"ContainerStarted","Data":"94050e0dd2e87d58556e8add8b113894f14801e63bc5a8379194f13caeda5e1c"} Dec 03 13:21:00 crc kubenswrapper[4986]: I1203 13:21:00.650079 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-gkvxp"] Dec 03 13:21:00 crc kubenswrapper[4986]: W1203 13:21:00.653697 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod23dca844_dc9b_4e62_b6e0_1e4da1adc56c.slice/crio-d69dbd973f2d6efe90d6f800e7e49b6791df67a59c29bec6a850a71169980047 WatchSource:0}: Error finding container d69dbd973f2d6efe90d6f800e7e49b6791df67a59c29bec6a850a71169980047: Status 404 returned error can't find the container with id d69dbd973f2d6efe90d6f800e7e49b6791df67a59c29bec6a850a71169980047 Dec 03 13:21:01 crc kubenswrapper[4986]: I1203 13:21:01.454963 4986 generic.go:334] "Generic (PLEG): container finished" podID="d4f4a308-6339-4346-98f6-562cc445769e" containerID="1e0b8f90f4efbaf0d28ed7727788fccf18f4c5b7c74b482ed665bfd8c398f631" exitCode=0 Dec 03 13:21:01 crc kubenswrapper[4986]: I1203 13:21:01.455179 4986 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c7s6k" event={"ID":"d4f4a308-6339-4346-98f6-562cc445769e","Type":"ContainerDied","Data":"1e0b8f90f4efbaf0d28ed7727788fccf18f4c5b7c74b482ed665bfd8c398f631"} Dec 03 13:21:01 crc kubenswrapper[4986]: I1203 13:21:01.459537 4986 generic.go:334] "Generic (PLEG): container finished" podID="23dca844-dc9b-4e62-b6e0-1e4da1adc56c" containerID="a8892e2c480f999002948bba340f5f58de1d1a642642ff0e3cc8d0b088850f34" exitCode=0 Dec 03 13:21:01 crc kubenswrapper[4986]: I1203 13:21:01.459593 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-gkvxp" event={"ID":"23dca844-dc9b-4e62-b6e0-1e4da1adc56c","Type":"ContainerDied","Data":"a8892e2c480f999002948bba340f5f58de1d1a642642ff0e3cc8d0b088850f34"} Dec 03 13:21:01 crc kubenswrapper[4986]: I1203 13:21:01.459627 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-gkvxp" event={"ID":"23dca844-dc9b-4e62-b6e0-1e4da1adc56c","Type":"ContainerStarted","Data":"d69dbd973f2d6efe90d6f800e7e49b6791df67a59c29bec6a850a71169980047"} Dec 03 13:21:01 crc kubenswrapper[4986]: I1203 13:21:01.728625 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 03 13:21:01 crc kubenswrapper[4986]: I1203 13:21:01.729275 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ec549fb8-01cd-4c44-840f-430a74bc5cee" containerName="ceilometer-central-agent" containerID="cri-o://d938a786309eb75905d43a2c7033163dfb0c0c469bd83e4e5c73c98cb3417d06" gracePeriod=30 Dec 03 13:21:01 crc kubenswrapper[4986]: I1203 13:21:01.729305 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ec549fb8-01cd-4c44-840f-430a74bc5cee" containerName="proxy-httpd" containerID="cri-o://39025e399296b76588d708d6413f8d1abeb9768390837139ebb49d5e47e38f51" gracePeriod=30 Dec 03 13:21:01 
crc kubenswrapper[4986]: I1203 13:21:01.729314 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ec549fb8-01cd-4c44-840f-430a74bc5cee" containerName="sg-core" containerID="cri-o://8afc1bb268a4547f05459ece12933ad84c1a10e737f5ff373a520415c52bbc0a" gracePeriod=30 Dec 03 13:21:01 crc kubenswrapper[4986]: I1203 13:21:01.729341 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ec549fb8-01cd-4c44-840f-430a74bc5cee" containerName="ceilometer-notification-agent" containerID="cri-o://e7f08f7f4dc7edc655c61c14e217b81b71975b656410d27db7566303111db5db" gracePeriod=30 Dec 03 13:21:01 crc kubenswrapper[4986]: I1203 13:21:01.811718 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 03 13:21:02 crc kubenswrapper[4986]: I1203 13:21:02.470756 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-gkvxp" event={"ID":"23dca844-dc9b-4e62-b6e0-1e4da1adc56c","Type":"ContainerStarted","Data":"2e70e48617359050c096cc8404d18b9a95d775bbc8e20b2a463af72bad203f5f"} Dec 03 13:21:02 crc kubenswrapper[4986]: I1203 13:21:02.470853 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59cf4bdb65-gkvxp" Dec 03 13:21:02 crc kubenswrapper[4986]: I1203 13:21:02.474019 4986 generic.go:334] "Generic (PLEG): container finished" podID="ec549fb8-01cd-4c44-840f-430a74bc5cee" containerID="39025e399296b76588d708d6413f8d1abeb9768390837139ebb49d5e47e38f51" exitCode=0 Dec 03 13:21:02 crc kubenswrapper[4986]: I1203 13:21:02.474047 4986 generic.go:334] "Generic (PLEG): container finished" podID="ec549fb8-01cd-4c44-840f-430a74bc5cee" containerID="8afc1bb268a4547f05459ece12933ad84c1a10e737f5ff373a520415c52bbc0a" exitCode=2 Dec 03 13:21:02 crc kubenswrapper[4986]: I1203 13:21:02.474061 4986 generic.go:334] "Generic (PLEG): container finished" 
podID="ec549fb8-01cd-4c44-840f-430a74bc5cee" containerID="d938a786309eb75905d43a2c7033163dfb0c0c469bd83e4e5c73c98cb3417d06" exitCode=0 Dec 03 13:21:02 crc kubenswrapper[4986]: I1203 13:21:02.474100 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec549fb8-01cd-4c44-840f-430a74bc5cee","Type":"ContainerDied","Data":"39025e399296b76588d708d6413f8d1abeb9768390837139ebb49d5e47e38f51"} Dec 03 13:21:02 crc kubenswrapper[4986]: I1203 13:21:02.474168 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec549fb8-01cd-4c44-840f-430a74bc5cee","Type":"ContainerDied","Data":"8afc1bb268a4547f05459ece12933ad84c1a10e737f5ff373a520415c52bbc0a"} Dec 03 13:21:02 crc kubenswrapper[4986]: I1203 13:21:02.474185 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec549fb8-01cd-4c44-840f-430a74bc5cee","Type":"ContainerDied","Data":"d938a786309eb75905d43a2c7033163dfb0c0c469bd83e4e5c73c98cb3417d06"} Dec 03 13:21:02 crc kubenswrapper[4986]: I1203 13:21:02.476731 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c7s6k" event={"ID":"d4f4a308-6339-4346-98f6-562cc445769e","Type":"ContainerStarted","Data":"e6b989323ebcad458cdb47c7bfa7b43f3e6270d1bb748b913ff8dbb623f21348"} Dec 03 13:21:02 crc kubenswrapper[4986]: I1203 13:21:02.501334 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59cf4bdb65-gkvxp" podStartSLOduration=3.501313513 podStartE2EDuration="3.501313513s" podCreationTimestamp="2025-12-03 13:20:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:21:02.494655633 +0000 UTC m=+1521.961086824" watchObservedRunningTime="2025-12-03 13:21:02.501313513 +0000 UTC m=+1521.967744704" Dec 03 13:21:02 crc kubenswrapper[4986]: I1203 13:21:02.521073 4986 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-c7s6k" podStartSLOduration=3.044666967 podStartE2EDuration="4.521053665s" podCreationTimestamp="2025-12-03 13:20:58 +0000 UTC" firstStartedPulling="2025-12-03 13:21:00.445767106 +0000 UTC m=+1519.912198297" lastFinishedPulling="2025-12-03 13:21:01.922153794 +0000 UTC m=+1521.388584995" observedRunningTime="2025-12-03 13:21:02.516147032 +0000 UTC m=+1521.982578233" watchObservedRunningTime="2025-12-03 13:21:02.521053665 +0000 UTC m=+1521.987484856" Dec 03 13:21:02 crc kubenswrapper[4986]: I1203 13:21:02.756592 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 03 13:21:02 crc kubenswrapper[4986]: I1203 13:21:02.757200 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="adfcf304-e178-412f-8638-0ded9cfa72e1" containerName="nova-api-log" containerID="cri-o://542ac2d391719eae755e99a41a1ef243d17e1238e8f41435a7bf6c920266ace3" gracePeriod=30 Dec 03 13:21:02 crc kubenswrapper[4986]: I1203 13:21:02.757232 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="adfcf304-e178-412f-8638-0ded9cfa72e1" containerName="nova-api-api" containerID="cri-o://742e27cb1a2f9237b83d1ea173f4846a3035db2c64d147b0ee5283bd50dd482c" gracePeriod=30 Dec 03 13:21:03 crc kubenswrapper[4986]: I1203 13:21:03.488555 4986 generic.go:334] "Generic (PLEG): container finished" podID="adfcf304-e178-412f-8638-0ded9cfa72e1" containerID="542ac2d391719eae755e99a41a1ef243d17e1238e8f41435a7bf6c920266ace3" exitCode=143 Dec 03 13:21:03 crc kubenswrapper[4986]: I1203 13:21:03.488639 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"adfcf304-e178-412f-8638-0ded9cfa72e1","Type":"ContainerDied","Data":"542ac2d391719eae755e99a41a1ef243d17e1238e8f41435a7bf6c920266ace3"} Dec 03 13:21:03 crc kubenswrapper[4986]: I1203 
13:21:03.491580 4986 patch_prober.go:28] interesting pod/machine-config-daemon-xggpj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 13:21:03 crc kubenswrapper[4986]: I1203 13:21:03.491640 4986 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 13:21:04 crc kubenswrapper[4986]: I1203 13:21:04.396128 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 13:21:04 crc kubenswrapper[4986]: I1203 13:21:04.502645 4986 generic.go:334] "Generic (PLEG): container finished" podID="ec549fb8-01cd-4c44-840f-430a74bc5cee" containerID="e7f08f7f4dc7edc655c61c14e217b81b71975b656410d27db7566303111db5db" exitCode=0 Dec 03 13:21:04 crc kubenswrapper[4986]: I1203 13:21:04.502683 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec549fb8-01cd-4c44-840f-430a74bc5cee","Type":"ContainerDied","Data":"e7f08f7f4dc7edc655c61c14e217b81b71975b656410d27db7566303111db5db"} Dec 03 13:21:04 crc kubenswrapper[4986]: I1203 13:21:04.502712 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec549fb8-01cd-4c44-840f-430a74bc5cee","Type":"ContainerDied","Data":"18bc5eb239f1b709b87403a3a4a3b0dfcb807010fb30664f782e11ca7a45b9ee"} Dec 03 13:21:04 crc kubenswrapper[4986]: I1203 13:21:04.502729 4986 scope.go:117] "RemoveContainer" containerID="39025e399296b76588d708d6413f8d1abeb9768390837139ebb49d5e47e38f51" Dec 03 13:21:04 crc kubenswrapper[4986]: I1203 13:21:04.502849 4986 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 13:21:04 crc kubenswrapper[4986]: I1203 13:21:04.512683 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ec549fb8-01cd-4c44-840f-430a74bc5cee-sg-core-conf-yaml\") pod \"ec549fb8-01cd-4c44-840f-430a74bc5cee\" (UID: \"ec549fb8-01cd-4c44-840f-430a74bc5cee\") " Dec 03 13:21:04 crc kubenswrapper[4986]: I1203 13:21:04.512777 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec549fb8-01cd-4c44-840f-430a74bc5cee-log-httpd\") pod \"ec549fb8-01cd-4c44-840f-430a74bc5cee\" (UID: \"ec549fb8-01cd-4c44-840f-430a74bc5cee\") " Dec 03 13:21:04 crc kubenswrapper[4986]: I1203 13:21:04.512843 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec549fb8-01cd-4c44-840f-430a74bc5cee-combined-ca-bundle\") pod \"ec549fb8-01cd-4c44-840f-430a74bc5cee\" (UID: \"ec549fb8-01cd-4c44-840f-430a74bc5cee\") " Dec 03 13:21:04 crc kubenswrapper[4986]: I1203 13:21:04.512872 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec549fb8-01cd-4c44-840f-430a74bc5cee-scripts\") pod \"ec549fb8-01cd-4c44-840f-430a74bc5cee\" (UID: \"ec549fb8-01cd-4c44-840f-430a74bc5cee\") " Dec 03 13:21:04 crc kubenswrapper[4986]: I1203 13:21:04.512933 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec549fb8-01cd-4c44-840f-430a74bc5cee-ceilometer-tls-certs\") pod \"ec549fb8-01cd-4c44-840f-430a74bc5cee\" (UID: \"ec549fb8-01cd-4c44-840f-430a74bc5cee\") " Dec 03 13:21:04 crc kubenswrapper[4986]: I1203 13:21:04.513068 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-hrzst\" (UniqueName: \"kubernetes.io/projected/ec549fb8-01cd-4c44-840f-430a74bc5cee-kube-api-access-hrzst\") pod \"ec549fb8-01cd-4c44-840f-430a74bc5cee\" (UID: \"ec549fb8-01cd-4c44-840f-430a74bc5cee\") " Dec 03 13:21:04 crc kubenswrapper[4986]: I1203 13:21:04.513113 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec549fb8-01cd-4c44-840f-430a74bc5cee-run-httpd\") pod \"ec549fb8-01cd-4c44-840f-430a74bc5cee\" (UID: \"ec549fb8-01cd-4c44-840f-430a74bc5cee\") " Dec 03 13:21:04 crc kubenswrapper[4986]: I1203 13:21:04.513173 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec549fb8-01cd-4c44-840f-430a74bc5cee-config-data\") pod \"ec549fb8-01cd-4c44-840f-430a74bc5cee\" (UID: \"ec549fb8-01cd-4c44-840f-430a74bc5cee\") " Dec 03 13:21:04 crc kubenswrapper[4986]: I1203 13:21:04.513671 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec549fb8-01cd-4c44-840f-430a74bc5cee-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ec549fb8-01cd-4c44-840f-430a74bc5cee" (UID: "ec549fb8-01cd-4c44-840f-430a74bc5cee"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:21:04 crc kubenswrapper[4986]: I1203 13:21:04.513838 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec549fb8-01cd-4c44-840f-430a74bc5cee-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ec549fb8-01cd-4c44-840f-430a74bc5cee" (UID: "ec549fb8-01cd-4c44-840f-430a74bc5cee"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:21:04 crc kubenswrapper[4986]: I1203 13:21:04.526639 4986 scope.go:117] "RemoveContainer" containerID="8afc1bb268a4547f05459ece12933ad84c1a10e737f5ff373a520415c52bbc0a" Dec 03 13:21:04 crc kubenswrapper[4986]: I1203 13:21:04.532377 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec549fb8-01cd-4c44-840f-430a74bc5cee-scripts" (OuterVolumeSpecName: "scripts") pod "ec549fb8-01cd-4c44-840f-430a74bc5cee" (UID: "ec549fb8-01cd-4c44-840f-430a74bc5cee"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:21:04 crc kubenswrapper[4986]: I1203 13:21:04.539193 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec549fb8-01cd-4c44-840f-430a74bc5cee-kube-api-access-hrzst" (OuterVolumeSpecName: "kube-api-access-hrzst") pod "ec549fb8-01cd-4c44-840f-430a74bc5cee" (UID: "ec549fb8-01cd-4c44-840f-430a74bc5cee"). InnerVolumeSpecName "kube-api-access-hrzst". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:21:04 crc kubenswrapper[4986]: I1203 13:21:04.562829 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec549fb8-01cd-4c44-840f-430a74bc5cee-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ec549fb8-01cd-4c44-840f-430a74bc5cee" (UID: "ec549fb8-01cd-4c44-840f-430a74bc5cee"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:21:04 crc kubenswrapper[4986]: I1203 13:21:04.568829 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec549fb8-01cd-4c44-840f-430a74bc5cee-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "ec549fb8-01cd-4c44-840f-430a74bc5cee" (UID: "ec549fb8-01cd-4c44-840f-430a74bc5cee"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:21:04 crc kubenswrapper[4986]: I1203 13:21:04.601462 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec549fb8-01cd-4c44-840f-430a74bc5cee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ec549fb8-01cd-4c44-840f-430a74bc5cee" (UID: "ec549fb8-01cd-4c44-840f-430a74bc5cee"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:21:04 crc kubenswrapper[4986]: I1203 13:21:04.615193 4986 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec549fb8-01cd-4c44-840f-430a74bc5cee-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 13:21:04 crc kubenswrapper[4986]: I1203 13:21:04.615224 4986 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec549fb8-01cd-4c44-840f-430a74bc5cee-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 13:21:04 crc kubenswrapper[4986]: I1203 13:21:04.615233 4986 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec549fb8-01cd-4c44-840f-430a74bc5cee-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 03 13:21:04 crc kubenswrapper[4986]: I1203 13:21:04.615243 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrzst\" (UniqueName: \"kubernetes.io/projected/ec549fb8-01cd-4c44-840f-430a74bc5cee-kube-api-access-hrzst\") on node \"crc\" DevicePath \"\"" Dec 03 13:21:04 crc kubenswrapper[4986]: I1203 13:21:04.615254 4986 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec549fb8-01cd-4c44-840f-430a74bc5cee-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 03 13:21:04 crc kubenswrapper[4986]: I1203 13:21:04.615262 4986 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/ec549fb8-01cd-4c44-840f-430a74bc5cee-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 03 13:21:04 crc kubenswrapper[4986]: I1203 13:21:04.615270 4986 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec549fb8-01cd-4c44-840f-430a74bc5cee-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 03 13:21:04 crc kubenswrapper[4986]: I1203 13:21:04.628902 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec549fb8-01cd-4c44-840f-430a74bc5cee-config-data" (OuterVolumeSpecName: "config-data") pod "ec549fb8-01cd-4c44-840f-430a74bc5cee" (UID: "ec549fb8-01cd-4c44-840f-430a74bc5cee"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:21:04 crc kubenswrapper[4986]: I1203 13:21:04.636051 4986 scope.go:117] "RemoveContainer" containerID="e7f08f7f4dc7edc655c61c14e217b81b71975b656410d27db7566303111db5db" Dec 03 13:21:04 crc kubenswrapper[4986]: I1203 13:21:04.656316 4986 scope.go:117] "RemoveContainer" containerID="d938a786309eb75905d43a2c7033163dfb0c0c469bd83e4e5c73c98cb3417d06" Dec 03 13:21:04 crc kubenswrapper[4986]: I1203 13:21:04.676683 4986 scope.go:117] "RemoveContainer" containerID="39025e399296b76588d708d6413f8d1abeb9768390837139ebb49d5e47e38f51" Dec 03 13:21:04 crc kubenswrapper[4986]: E1203 13:21:04.677117 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39025e399296b76588d708d6413f8d1abeb9768390837139ebb49d5e47e38f51\": container with ID starting with 39025e399296b76588d708d6413f8d1abeb9768390837139ebb49d5e47e38f51 not found: ID does not exist" containerID="39025e399296b76588d708d6413f8d1abeb9768390837139ebb49d5e47e38f51" Dec 03 13:21:04 crc kubenswrapper[4986]: I1203 13:21:04.677177 4986 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"39025e399296b76588d708d6413f8d1abeb9768390837139ebb49d5e47e38f51"} err="failed to get container status \"39025e399296b76588d708d6413f8d1abeb9768390837139ebb49d5e47e38f51\": rpc error: code = NotFound desc = could not find container \"39025e399296b76588d708d6413f8d1abeb9768390837139ebb49d5e47e38f51\": container with ID starting with 39025e399296b76588d708d6413f8d1abeb9768390837139ebb49d5e47e38f51 not found: ID does not exist" Dec 03 13:21:04 crc kubenswrapper[4986]: I1203 13:21:04.677213 4986 scope.go:117] "RemoveContainer" containerID="8afc1bb268a4547f05459ece12933ad84c1a10e737f5ff373a520415c52bbc0a" Dec 03 13:21:04 crc kubenswrapper[4986]: E1203 13:21:04.677562 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8afc1bb268a4547f05459ece12933ad84c1a10e737f5ff373a520415c52bbc0a\": container with ID starting with 8afc1bb268a4547f05459ece12933ad84c1a10e737f5ff373a520415c52bbc0a not found: ID does not exist" containerID="8afc1bb268a4547f05459ece12933ad84c1a10e737f5ff373a520415c52bbc0a" Dec 03 13:21:04 crc kubenswrapper[4986]: I1203 13:21:04.677596 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8afc1bb268a4547f05459ece12933ad84c1a10e737f5ff373a520415c52bbc0a"} err="failed to get container status \"8afc1bb268a4547f05459ece12933ad84c1a10e737f5ff373a520415c52bbc0a\": rpc error: code = NotFound desc = could not find container \"8afc1bb268a4547f05459ece12933ad84c1a10e737f5ff373a520415c52bbc0a\": container with ID starting with 8afc1bb268a4547f05459ece12933ad84c1a10e737f5ff373a520415c52bbc0a not found: ID does not exist" Dec 03 13:21:04 crc kubenswrapper[4986]: I1203 13:21:04.677620 4986 scope.go:117] "RemoveContainer" containerID="e7f08f7f4dc7edc655c61c14e217b81b71975b656410d27db7566303111db5db" Dec 03 13:21:04 crc kubenswrapper[4986]: E1203 13:21:04.678020 4986 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e7f08f7f4dc7edc655c61c14e217b81b71975b656410d27db7566303111db5db\": container with ID starting with e7f08f7f4dc7edc655c61c14e217b81b71975b656410d27db7566303111db5db not found: ID does not exist" containerID="e7f08f7f4dc7edc655c61c14e217b81b71975b656410d27db7566303111db5db" Dec 03 13:21:04 crc kubenswrapper[4986]: I1203 13:21:04.678217 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7f08f7f4dc7edc655c61c14e217b81b71975b656410d27db7566303111db5db"} err="failed to get container status \"e7f08f7f4dc7edc655c61c14e217b81b71975b656410d27db7566303111db5db\": rpc error: code = NotFound desc = could not find container \"e7f08f7f4dc7edc655c61c14e217b81b71975b656410d27db7566303111db5db\": container with ID starting with e7f08f7f4dc7edc655c61c14e217b81b71975b656410d27db7566303111db5db not found: ID does not exist" Dec 03 13:21:04 crc kubenswrapper[4986]: I1203 13:21:04.678450 4986 scope.go:117] "RemoveContainer" containerID="d938a786309eb75905d43a2c7033163dfb0c0c469bd83e4e5c73c98cb3417d06" Dec 03 13:21:04 crc kubenswrapper[4986]: E1203 13:21:04.679067 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d938a786309eb75905d43a2c7033163dfb0c0c469bd83e4e5c73c98cb3417d06\": container with ID starting with d938a786309eb75905d43a2c7033163dfb0c0c469bd83e4e5c73c98cb3417d06 not found: ID does not exist" containerID="d938a786309eb75905d43a2c7033163dfb0c0c469bd83e4e5c73c98cb3417d06" Dec 03 13:21:04 crc kubenswrapper[4986]: I1203 13:21:04.679318 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d938a786309eb75905d43a2c7033163dfb0c0c469bd83e4e5c73c98cb3417d06"} err="failed to get container status \"d938a786309eb75905d43a2c7033163dfb0c0c469bd83e4e5c73c98cb3417d06\": rpc error: code = NotFound desc = could not find container 
\"d938a786309eb75905d43a2c7033163dfb0c0c469bd83e4e5c73c98cb3417d06\": container with ID starting with d938a786309eb75905d43a2c7033163dfb0c0c469bd83e4e5c73c98cb3417d06 not found: ID does not exist" Dec 03 13:21:04 crc kubenswrapper[4986]: I1203 13:21:04.716623 4986 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec549fb8-01cd-4c44-840f-430a74bc5cee-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 13:21:04 crc kubenswrapper[4986]: I1203 13:21:04.835652 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 03 13:21:04 crc kubenswrapper[4986]: I1203 13:21:04.847719 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 03 13:21:04 crc kubenswrapper[4986]: I1203 13:21:04.860434 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 03 13:21:04 crc kubenswrapper[4986]: E1203 13:21:04.860810 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec549fb8-01cd-4c44-840f-430a74bc5cee" containerName="sg-core" Dec 03 13:21:04 crc kubenswrapper[4986]: I1203 13:21:04.860826 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec549fb8-01cd-4c44-840f-430a74bc5cee" containerName="sg-core" Dec 03 13:21:04 crc kubenswrapper[4986]: E1203 13:21:04.860852 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec549fb8-01cd-4c44-840f-430a74bc5cee" containerName="ceilometer-notification-agent" Dec 03 13:21:04 crc kubenswrapper[4986]: I1203 13:21:04.860863 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec549fb8-01cd-4c44-840f-430a74bc5cee" containerName="ceilometer-notification-agent" Dec 03 13:21:04 crc kubenswrapper[4986]: E1203 13:21:04.860881 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec549fb8-01cd-4c44-840f-430a74bc5cee" containerName="ceilometer-central-agent" Dec 03 13:21:04 crc kubenswrapper[4986]: I1203 13:21:04.860888 4986 
state_mem.go:107] "Deleted CPUSet assignment" podUID="ec549fb8-01cd-4c44-840f-430a74bc5cee" containerName="ceilometer-central-agent" Dec 03 13:21:04 crc kubenswrapper[4986]: E1203 13:21:04.860907 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec549fb8-01cd-4c44-840f-430a74bc5cee" containerName="proxy-httpd" Dec 03 13:21:04 crc kubenswrapper[4986]: I1203 13:21:04.860914 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec549fb8-01cd-4c44-840f-430a74bc5cee" containerName="proxy-httpd" Dec 03 13:21:04 crc kubenswrapper[4986]: I1203 13:21:04.861118 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec549fb8-01cd-4c44-840f-430a74bc5cee" containerName="ceilometer-notification-agent" Dec 03 13:21:04 crc kubenswrapper[4986]: I1203 13:21:04.861133 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec549fb8-01cd-4c44-840f-430a74bc5cee" containerName="sg-core" Dec 03 13:21:04 crc kubenswrapper[4986]: I1203 13:21:04.861149 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec549fb8-01cd-4c44-840f-430a74bc5cee" containerName="proxy-httpd" Dec 03 13:21:04 crc kubenswrapper[4986]: I1203 13:21:04.861167 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec549fb8-01cd-4c44-840f-430a74bc5cee" containerName="ceilometer-central-agent" Dec 03 13:21:04 crc kubenswrapper[4986]: I1203 13:21:04.862827 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 03 13:21:04 crc kubenswrapper[4986]: I1203 13:21:04.865255 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 03 13:21:04 crc kubenswrapper[4986]: I1203 13:21:04.866354 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 03 13:21:04 crc kubenswrapper[4986]: I1203 13:21:04.867062 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 03 13:21:04 crc kubenswrapper[4986]: I1203 13:21:04.870593 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 13:21:04 crc kubenswrapper[4986]: I1203 13:21:04.953314 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec549fb8-01cd-4c44-840f-430a74bc5cee" path="/var/lib/kubelet/pods/ec549fb8-01cd-4c44-840f-430a74bc5cee/volumes" Dec 03 13:21:05 crc kubenswrapper[4986]: I1203 13:21:05.021508 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68mcm\" (UniqueName: \"kubernetes.io/projected/582543a7-c25a-41d6-a8c0-efaa205d5e4d-kube-api-access-68mcm\") pod \"ceilometer-0\" (UID: \"582543a7-c25a-41d6-a8c0-efaa205d5e4d\") " pod="openstack/ceilometer-0" Dec 03 13:21:05 crc kubenswrapper[4986]: I1203 13:21:05.021713 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/582543a7-c25a-41d6-a8c0-efaa205d5e4d-run-httpd\") pod \"ceilometer-0\" (UID: \"582543a7-c25a-41d6-a8c0-efaa205d5e4d\") " pod="openstack/ceilometer-0" Dec 03 13:21:05 crc kubenswrapper[4986]: I1203 13:21:05.021896 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/582543a7-c25a-41d6-a8c0-efaa205d5e4d-scripts\") pod \"ceilometer-0\" (UID: 
\"582543a7-c25a-41d6-a8c0-efaa205d5e4d\") " pod="openstack/ceilometer-0" Dec 03 13:21:05 crc kubenswrapper[4986]: I1203 13:21:05.022012 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/582543a7-c25a-41d6-a8c0-efaa205d5e4d-log-httpd\") pod \"ceilometer-0\" (UID: \"582543a7-c25a-41d6-a8c0-efaa205d5e4d\") " pod="openstack/ceilometer-0" Dec 03 13:21:05 crc kubenswrapper[4986]: I1203 13:21:05.022065 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/582543a7-c25a-41d6-a8c0-efaa205d5e4d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"582543a7-c25a-41d6-a8c0-efaa205d5e4d\") " pod="openstack/ceilometer-0" Dec 03 13:21:05 crc kubenswrapper[4986]: I1203 13:21:05.022345 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/582543a7-c25a-41d6-a8c0-efaa205d5e4d-config-data\") pod \"ceilometer-0\" (UID: \"582543a7-c25a-41d6-a8c0-efaa205d5e4d\") " pod="openstack/ceilometer-0" Dec 03 13:21:05 crc kubenswrapper[4986]: I1203 13:21:05.022405 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/582543a7-c25a-41d6-a8c0-efaa205d5e4d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"582543a7-c25a-41d6-a8c0-efaa205d5e4d\") " pod="openstack/ceilometer-0" Dec 03 13:21:05 crc kubenswrapper[4986]: I1203 13:21:05.022473 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/582543a7-c25a-41d6-a8c0-efaa205d5e4d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"582543a7-c25a-41d6-a8c0-efaa205d5e4d\") " pod="openstack/ceilometer-0" Dec 03 13:21:05 crc kubenswrapper[4986]: I1203 
13:21:05.124708 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68mcm\" (UniqueName: \"kubernetes.io/projected/582543a7-c25a-41d6-a8c0-efaa205d5e4d-kube-api-access-68mcm\") pod \"ceilometer-0\" (UID: \"582543a7-c25a-41d6-a8c0-efaa205d5e4d\") " pod="openstack/ceilometer-0" Dec 03 13:21:05 crc kubenswrapper[4986]: I1203 13:21:05.124796 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/582543a7-c25a-41d6-a8c0-efaa205d5e4d-run-httpd\") pod \"ceilometer-0\" (UID: \"582543a7-c25a-41d6-a8c0-efaa205d5e4d\") " pod="openstack/ceilometer-0" Dec 03 13:21:05 crc kubenswrapper[4986]: I1203 13:21:05.124832 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/582543a7-c25a-41d6-a8c0-efaa205d5e4d-scripts\") pod \"ceilometer-0\" (UID: \"582543a7-c25a-41d6-a8c0-efaa205d5e4d\") " pod="openstack/ceilometer-0" Dec 03 13:21:05 crc kubenswrapper[4986]: I1203 13:21:05.124866 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/582543a7-c25a-41d6-a8c0-efaa205d5e4d-log-httpd\") pod \"ceilometer-0\" (UID: \"582543a7-c25a-41d6-a8c0-efaa205d5e4d\") " pod="openstack/ceilometer-0" Dec 03 13:21:05 crc kubenswrapper[4986]: I1203 13:21:05.124887 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/582543a7-c25a-41d6-a8c0-efaa205d5e4d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"582543a7-c25a-41d6-a8c0-efaa205d5e4d\") " pod="openstack/ceilometer-0" Dec 03 13:21:05 crc kubenswrapper[4986]: I1203 13:21:05.124939 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/582543a7-c25a-41d6-a8c0-efaa205d5e4d-config-data\") pod \"ceilometer-0\" (UID: 
\"582543a7-c25a-41d6-a8c0-efaa205d5e4d\") " pod="openstack/ceilometer-0" Dec 03 13:21:05 crc kubenswrapper[4986]: I1203 13:21:05.124961 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/582543a7-c25a-41d6-a8c0-efaa205d5e4d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"582543a7-c25a-41d6-a8c0-efaa205d5e4d\") " pod="openstack/ceilometer-0" Dec 03 13:21:05 crc kubenswrapper[4986]: I1203 13:21:05.124987 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/582543a7-c25a-41d6-a8c0-efaa205d5e4d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"582543a7-c25a-41d6-a8c0-efaa205d5e4d\") " pod="openstack/ceilometer-0" Dec 03 13:21:05 crc kubenswrapper[4986]: I1203 13:21:05.125823 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/582543a7-c25a-41d6-a8c0-efaa205d5e4d-log-httpd\") pod \"ceilometer-0\" (UID: \"582543a7-c25a-41d6-a8c0-efaa205d5e4d\") " pod="openstack/ceilometer-0" Dec 03 13:21:05 crc kubenswrapper[4986]: I1203 13:21:05.125982 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/582543a7-c25a-41d6-a8c0-efaa205d5e4d-run-httpd\") pod \"ceilometer-0\" (UID: \"582543a7-c25a-41d6-a8c0-efaa205d5e4d\") " pod="openstack/ceilometer-0" Dec 03 13:21:05 crc kubenswrapper[4986]: I1203 13:21:05.129887 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/582543a7-c25a-41d6-a8c0-efaa205d5e4d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"582543a7-c25a-41d6-a8c0-efaa205d5e4d\") " pod="openstack/ceilometer-0" Dec 03 13:21:05 crc kubenswrapper[4986]: I1203 13:21:05.129975 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/582543a7-c25a-41d6-a8c0-efaa205d5e4d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"582543a7-c25a-41d6-a8c0-efaa205d5e4d\") " pod="openstack/ceilometer-0" Dec 03 13:21:05 crc kubenswrapper[4986]: I1203 13:21:05.133191 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/582543a7-c25a-41d6-a8c0-efaa205d5e4d-scripts\") pod \"ceilometer-0\" (UID: \"582543a7-c25a-41d6-a8c0-efaa205d5e4d\") " pod="openstack/ceilometer-0" Dec 03 13:21:05 crc kubenswrapper[4986]: I1203 13:21:05.134017 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/582543a7-c25a-41d6-a8c0-efaa205d5e4d-config-data\") pod \"ceilometer-0\" (UID: \"582543a7-c25a-41d6-a8c0-efaa205d5e4d\") " pod="openstack/ceilometer-0" Dec 03 13:21:05 crc kubenswrapper[4986]: I1203 13:21:05.137379 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/582543a7-c25a-41d6-a8c0-efaa205d5e4d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"582543a7-c25a-41d6-a8c0-efaa205d5e4d\") " pod="openstack/ceilometer-0" Dec 03 13:21:05 crc kubenswrapper[4986]: I1203 13:21:05.142382 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68mcm\" (UniqueName: \"kubernetes.io/projected/582543a7-c25a-41d6-a8c0-efaa205d5e4d-kube-api-access-68mcm\") pod \"ceilometer-0\" (UID: \"582543a7-c25a-41d6-a8c0-efaa205d5e4d\") " pod="openstack/ceilometer-0" Dec 03 13:21:05 crc kubenswrapper[4986]: I1203 13:21:05.178335 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 03 13:21:05 crc kubenswrapper[4986]: I1203 13:21:05.322897 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 03 13:21:05 crc kubenswrapper[4986]: I1203 13:21:05.648073 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 03 13:21:05 crc kubenswrapper[4986]: W1203 13:21:05.649610 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod582543a7_c25a_41d6_a8c0_efaa205d5e4d.slice/crio-67ee93f2bba6a4528075fdca68e50570cd743f9e589fa4b35269f689ee51f874 WatchSource:0}: Error finding container 67ee93f2bba6a4528075fdca68e50570cd743f9e589fa4b35269f689ee51f874: Status 404 returned error can't find the container with id 67ee93f2bba6a4528075fdca68e50570cd743f9e589fa4b35269f689ee51f874 Dec 03 13:21:06 crc kubenswrapper[4986]: I1203 13:21:06.350381 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 03 13:21:06 crc kubenswrapper[4986]: I1203 13:21:06.459314 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adfcf304-e178-412f-8638-0ded9cfa72e1-combined-ca-bundle\") pod \"adfcf304-e178-412f-8638-0ded9cfa72e1\" (UID: \"adfcf304-e178-412f-8638-0ded9cfa72e1\") " Dec 03 13:21:06 crc kubenswrapper[4986]: I1203 13:21:06.459402 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adfcf304-e178-412f-8638-0ded9cfa72e1-config-data\") pod \"adfcf304-e178-412f-8638-0ded9cfa72e1\" (UID: \"adfcf304-e178-412f-8638-0ded9cfa72e1\") " Dec 03 13:21:06 crc kubenswrapper[4986]: I1203 13:21:06.459577 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/adfcf304-e178-412f-8638-0ded9cfa72e1-logs\") pod \"adfcf304-e178-412f-8638-0ded9cfa72e1\" (UID: \"adfcf304-e178-412f-8638-0ded9cfa72e1\") " Dec 03 13:21:06 crc kubenswrapper[4986]: I1203 13:21:06.459670 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2g7g\" (UniqueName: \"kubernetes.io/projected/adfcf304-e178-412f-8638-0ded9cfa72e1-kube-api-access-q2g7g\") pod \"adfcf304-e178-412f-8638-0ded9cfa72e1\" (UID: \"adfcf304-e178-412f-8638-0ded9cfa72e1\") " Dec 03 13:21:06 crc kubenswrapper[4986]: I1203 13:21:06.462801 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adfcf304-e178-412f-8638-0ded9cfa72e1-logs" (OuterVolumeSpecName: "logs") pod "adfcf304-e178-412f-8638-0ded9cfa72e1" (UID: "adfcf304-e178-412f-8638-0ded9cfa72e1"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:21:06 crc kubenswrapper[4986]: I1203 13:21:06.481667 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adfcf304-e178-412f-8638-0ded9cfa72e1-kube-api-access-q2g7g" (OuterVolumeSpecName: "kube-api-access-q2g7g") pod "adfcf304-e178-412f-8638-0ded9cfa72e1" (UID: "adfcf304-e178-412f-8638-0ded9cfa72e1"). InnerVolumeSpecName "kube-api-access-q2g7g". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:21:06 crc kubenswrapper[4986]: I1203 13:21:06.564502 4986 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/adfcf304-e178-412f-8638-0ded9cfa72e1-logs\") on node \"crc\" DevicePath \"\"" Dec 03 13:21:06 crc kubenswrapper[4986]: I1203 13:21:06.564546 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2g7g\" (UniqueName: \"kubernetes.io/projected/adfcf304-e178-412f-8638-0ded9cfa72e1-kube-api-access-q2g7g\") on node \"crc\" DevicePath \"\"" Dec 03 13:21:06 crc kubenswrapper[4986]: I1203 13:21:06.564661 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adfcf304-e178-412f-8638-0ded9cfa72e1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "adfcf304-e178-412f-8638-0ded9cfa72e1" (UID: "adfcf304-e178-412f-8638-0ded9cfa72e1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:21:06 crc kubenswrapper[4986]: I1203 13:21:06.578999 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adfcf304-e178-412f-8638-0ded9cfa72e1-config-data" (OuterVolumeSpecName: "config-data") pod "adfcf304-e178-412f-8638-0ded9cfa72e1" (UID: "adfcf304-e178-412f-8638-0ded9cfa72e1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:21:06 crc kubenswrapper[4986]: I1203 13:21:06.589480 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"582543a7-c25a-41d6-a8c0-efaa205d5e4d","Type":"ContainerStarted","Data":"67ee93f2bba6a4528075fdca68e50570cd743f9e589fa4b35269f689ee51f874"} Dec 03 13:21:06 crc kubenswrapper[4986]: I1203 13:21:06.603639 4986 generic.go:334] "Generic (PLEG): container finished" podID="adfcf304-e178-412f-8638-0ded9cfa72e1" containerID="742e27cb1a2f9237b83d1ea173f4846a3035db2c64d147b0ee5283bd50dd482c" exitCode=0 Dec 03 13:21:06 crc kubenswrapper[4986]: I1203 13:21:06.603677 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"adfcf304-e178-412f-8638-0ded9cfa72e1","Type":"ContainerDied","Data":"742e27cb1a2f9237b83d1ea173f4846a3035db2c64d147b0ee5283bd50dd482c"} Dec 03 13:21:06 crc kubenswrapper[4986]: I1203 13:21:06.603701 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"adfcf304-e178-412f-8638-0ded9cfa72e1","Type":"ContainerDied","Data":"aa0e65c0d25e8f6c1c48f31a78f537b83e9c8970a0f8ebe38833f37003d15cb4"} Dec 03 13:21:06 crc kubenswrapper[4986]: I1203 13:21:06.603717 4986 scope.go:117] "RemoveContainer" containerID="742e27cb1a2f9237b83d1ea173f4846a3035db2c64d147b0ee5283bd50dd482c" Dec 03 13:21:06 crc kubenswrapper[4986]: I1203 13:21:06.603838 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 03 13:21:06 crc kubenswrapper[4986]: I1203 13:21:06.628525 4986 scope.go:117] "RemoveContainer" containerID="542ac2d391719eae755e99a41a1ef243d17e1238e8f41435a7bf6c920266ace3" Dec 03 13:21:06 crc kubenswrapper[4986]: I1203 13:21:06.658238 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 03 13:21:06 crc kubenswrapper[4986]: I1203 13:21:06.666123 4986 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adfcf304-e178-412f-8638-0ded9cfa72e1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 13:21:06 crc kubenswrapper[4986]: I1203 13:21:06.666164 4986 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adfcf304-e178-412f-8638-0ded9cfa72e1-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 13:21:06 crc kubenswrapper[4986]: I1203 13:21:06.667388 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 03 13:21:06 crc kubenswrapper[4986]: I1203 13:21:06.677721 4986 scope.go:117] "RemoveContainer" containerID="742e27cb1a2f9237b83d1ea173f4846a3035db2c64d147b0ee5283bd50dd482c" Dec 03 13:21:06 crc kubenswrapper[4986]: E1203 13:21:06.678203 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"742e27cb1a2f9237b83d1ea173f4846a3035db2c64d147b0ee5283bd50dd482c\": container with ID starting with 742e27cb1a2f9237b83d1ea173f4846a3035db2c64d147b0ee5283bd50dd482c not found: ID does not exist" containerID="742e27cb1a2f9237b83d1ea173f4846a3035db2c64d147b0ee5283bd50dd482c" Dec 03 13:21:06 crc kubenswrapper[4986]: I1203 13:21:06.678260 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"742e27cb1a2f9237b83d1ea173f4846a3035db2c64d147b0ee5283bd50dd482c"} err="failed to get container status 
\"742e27cb1a2f9237b83d1ea173f4846a3035db2c64d147b0ee5283bd50dd482c\": rpc error: code = NotFound desc = could not find container \"742e27cb1a2f9237b83d1ea173f4846a3035db2c64d147b0ee5283bd50dd482c\": container with ID starting with 742e27cb1a2f9237b83d1ea173f4846a3035db2c64d147b0ee5283bd50dd482c not found: ID does not exist" Dec 03 13:21:06 crc kubenswrapper[4986]: I1203 13:21:06.678366 4986 scope.go:117] "RemoveContainer" containerID="542ac2d391719eae755e99a41a1ef243d17e1238e8f41435a7bf6c920266ace3" Dec 03 13:21:06 crc kubenswrapper[4986]: E1203 13:21:06.678649 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"542ac2d391719eae755e99a41a1ef243d17e1238e8f41435a7bf6c920266ace3\": container with ID starting with 542ac2d391719eae755e99a41a1ef243d17e1238e8f41435a7bf6c920266ace3 not found: ID does not exist" containerID="542ac2d391719eae755e99a41a1ef243d17e1238e8f41435a7bf6c920266ace3" Dec 03 13:21:06 crc kubenswrapper[4986]: I1203 13:21:06.678678 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"542ac2d391719eae755e99a41a1ef243d17e1238e8f41435a7bf6c920266ace3"} err="failed to get container status \"542ac2d391719eae755e99a41a1ef243d17e1238e8f41435a7bf6c920266ace3\": rpc error: code = NotFound desc = could not find container \"542ac2d391719eae755e99a41a1ef243d17e1238e8f41435a7bf6c920266ace3\": container with ID starting with 542ac2d391719eae755e99a41a1ef243d17e1238e8f41435a7bf6c920266ace3 not found: ID does not exist" Dec 03 13:21:06 crc kubenswrapper[4986]: I1203 13:21:06.683708 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 03 13:21:06 crc kubenswrapper[4986]: E1203 13:21:06.684171 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adfcf304-e178-412f-8638-0ded9cfa72e1" containerName="nova-api-api" Dec 03 13:21:06 crc kubenswrapper[4986]: I1203 13:21:06.684188 4986 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="adfcf304-e178-412f-8638-0ded9cfa72e1" containerName="nova-api-api" Dec 03 13:21:06 crc kubenswrapper[4986]: E1203 13:21:06.684220 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adfcf304-e178-412f-8638-0ded9cfa72e1" containerName="nova-api-log" Dec 03 13:21:06 crc kubenswrapper[4986]: I1203 13:21:06.684226 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="adfcf304-e178-412f-8638-0ded9cfa72e1" containerName="nova-api-log" Dec 03 13:21:06 crc kubenswrapper[4986]: I1203 13:21:06.684443 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="adfcf304-e178-412f-8638-0ded9cfa72e1" containerName="nova-api-api" Dec 03 13:21:06 crc kubenswrapper[4986]: I1203 13:21:06.684467 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="adfcf304-e178-412f-8638-0ded9cfa72e1" containerName="nova-api-log" Dec 03 13:21:06 crc kubenswrapper[4986]: I1203 13:21:06.685577 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 03 13:21:06 crc kubenswrapper[4986]: I1203 13:21:06.688977 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 03 13:21:06 crc kubenswrapper[4986]: I1203 13:21:06.689229 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 03 13:21:06 crc kubenswrapper[4986]: I1203 13:21:06.689568 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 03 13:21:06 crc kubenswrapper[4986]: I1203 13:21:06.690879 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 03 13:21:06 crc kubenswrapper[4986]: I1203 13:21:06.767746 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgcft\" (UniqueName: \"kubernetes.io/projected/1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743-kube-api-access-pgcft\") pod \"nova-api-0\" (UID: \"1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743\") " pod="openstack/nova-api-0" Dec 03 13:21:06 crc kubenswrapper[4986]: I1203 13:21:06.767962 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743-public-tls-certs\") pod \"nova-api-0\" (UID: \"1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743\") " pod="openstack/nova-api-0" Dec 03 13:21:06 crc kubenswrapper[4986]: I1203 13:21:06.768014 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743\") " pod="openstack/nova-api-0" Dec 03 13:21:06 crc kubenswrapper[4986]: I1203 13:21:06.768050 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743-logs\") pod \"nova-api-0\" (UID: \"1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743\") " pod="openstack/nova-api-0" Dec 03 13:21:06 crc kubenswrapper[4986]: I1203 13:21:06.768093 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743\") " pod="openstack/nova-api-0" Dec 03 13:21:06 crc kubenswrapper[4986]: I1203 13:21:06.768257 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743-config-data\") pod \"nova-api-0\" (UID: \"1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743\") " pod="openstack/nova-api-0" Dec 03 13:21:06 crc kubenswrapper[4986]: I1203 13:21:06.811864 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Dec 03 13:21:06 crc kubenswrapper[4986]: I1203 13:21:06.833968 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Dec 03 13:21:06 crc kubenswrapper[4986]: I1203 13:21:06.870069 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgcft\" (UniqueName: \"kubernetes.io/projected/1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743-kube-api-access-pgcft\") pod \"nova-api-0\" (UID: \"1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743\") " pod="openstack/nova-api-0" Dec 03 13:21:06 crc kubenswrapper[4986]: I1203 13:21:06.870157 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743-public-tls-certs\") pod \"nova-api-0\" (UID: \"1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743\") " 
pod="openstack/nova-api-0" Dec 03 13:21:06 crc kubenswrapper[4986]: I1203 13:21:06.870187 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743\") " pod="openstack/nova-api-0" Dec 03 13:21:06 crc kubenswrapper[4986]: I1203 13:21:06.870220 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743-logs\") pod \"nova-api-0\" (UID: \"1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743\") " pod="openstack/nova-api-0" Dec 03 13:21:06 crc kubenswrapper[4986]: I1203 13:21:06.870257 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743\") " pod="openstack/nova-api-0" Dec 03 13:21:06 crc kubenswrapper[4986]: I1203 13:21:06.870333 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743-config-data\") pod \"nova-api-0\" (UID: \"1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743\") " pod="openstack/nova-api-0" Dec 03 13:21:06 crc kubenswrapper[4986]: I1203 13:21:06.870895 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743-logs\") pod \"nova-api-0\" (UID: \"1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743\") " pod="openstack/nova-api-0" Dec 03 13:21:06 crc kubenswrapper[4986]: I1203 13:21:06.874622 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743-internal-tls-certs\") 
pod \"nova-api-0\" (UID: \"1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743\") " pod="openstack/nova-api-0" Dec 03 13:21:06 crc kubenswrapper[4986]: I1203 13:21:06.875017 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743-config-data\") pod \"nova-api-0\" (UID: \"1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743\") " pod="openstack/nova-api-0" Dec 03 13:21:06 crc kubenswrapper[4986]: I1203 13:21:06.875503 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743\") " pod="openstack/nova-api-0" Dec 03 13:21:06 crc kubenswrapper[4986]: I1203 13:21:06.877009 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743-public-tls-certs\") pod \"nova-api-0\" (UID: \"1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743\") " pod="openstack/nova-api-0" Dec 03 13:21:06 crc kubenswrapper[4986]: I1203 13:21:06.886734 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgcft\" (UniqueName: \"kubernetes.io/projected/1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743-kube-api-access-pgcft\") pod \"nova-api-0\" (UID: \"1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743\") " pod="openstack/nova-api-0" Dec 03 13:21:06 crc kubenswrapper[4986]: I1203 13:21:06.964036 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adfcf304-e178-412f-8638-0ded9cfa72e1" path="/var/lib/kubelet/pods/adfcf304-e178-412f-8638-0ded9cfa72e1/volumes" Dec 03 13:21:07 crc kubenswrapper[4986]: I1203 13:21:07.013793 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 03 13:21:07 crc kubenswrapper[4986]: I1203 13:21:07.479795 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 03 13:21:07 crc kubenswrapper[4986]: W1203 13:21:07.482042 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1da3bd0a_5103_4fe6_b3ec_2f2b8c07a743.slice/crio-2914f06d5f2dedc94cfe55e449cc967fa6ff41d22783bd57d17afe1e695a552e WatchSource:0}: Error finding container 2914f06d5f2dedc94cfe55e449cc967fa6ff41d22783bd57d17afe1e695a552e: Status 404 returned error can't find the container with id 2914f06d5f2dedc94cfe55e449cc967fa6ff41d22783bd57d17afe1e695a552e Dec 03 13:21:07 crc kubenswrapper[4986]: I1203 13:21:07.617516 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743","Type":"ContainerStarted","Data":"2914f06d5f2dedc94cfe55e449cc967fa6ff41d22783bd57d17afe1e695a552e"} Dec 03 13:21:07 crc kubenswrapper[4986]: I1203 13:21:07.622429 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"582543a7-c25a-41d6-a8c0-efaa205d5e4d","Type":"ContainerStarted","Data":"cbe7ac73a1caef1d108d2217e2002a518489e634a9f1c66ad411dcbd98bfab2f"} Dec 03 13:21:07 crc kubenswrapper[4986]: I1203 13:21:07.652573 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Dec 03 13:21:07 crc kubenswrapper[4986]: I1203 13:21:07.824816 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-d48lr"] Dec 03 13:21:07 crc kubenswrapper[4986]: I1203 13:21:07.826414 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-d48lr" Dec 03 13:21:07 crc kubenswrapper[4986]: I1203 13:21:07.831929 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Dec 03 13:21:07 crc kubenswrapper[4986]: I1203 13:21:07.831932 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Dec 03 13:21:07 crc kubenswrapper[4986]: I1203 13:21:07.836831 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-d48lr"] Dec 03 13:21:07 crc kubenswrapper[4986]: I1203 13:21:07.889061 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0a9f7fb-9224-43cb-bfee-86074845c01e-scripts\") pod \"nova-cell1-cell-mapping-d48lr\" (UID: \"c0a9f7fb-9224-43cb-bfee-86074845c01e\") " pod="openstack/nova-cell1-cell-mapping-d48lr" Dec 03 13:21:07 crc kubenswrapper[4986]: I1203 13:21:07.889117 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvrgs\" (UniqueName: \"kubernetes.io/projected/c0a9f7fb-9224-43cb-bfee-86074845c01e-kube-api-access-hvrgs\") pod \"nova-cell1-cell-mapping-d48lr\" (UID: \"c0a9f7fb-9224-43cb-bfee-86074845c01e\") " pod="openstack/nova-cell1-cell-mapping-d48lr" Dec 03 13:21:07 crc kubenswrapper[4986]: I1203 13:21:07.889614 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0a9f7fb-9224-43cb-bfee-86074845c01e-config-data\") pod \"nova-cell1-cell-mapping-d48lr\" (UID: \"c0a9f7fb-9224-43cb-bfee-86074845c01e\") " pod="openstack/nova-cell1-cell-mapping-d48lr" Dec 03 13:21:07 crc kubenswrapper[4986]: I1203 13:21:07.889687 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c0a9f7fb-9224-43cb-bfee-86074845c01e-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-d48lr\" (UID: \"c0a9f7fb-9224-43cb-bfee-86074845c01e\") " pod="openstack/nova-cell1-cell-mapping-d48lr" Dec 03 13:21:07 crc kubenswrapper[4986]: I1203 13:21:07.991567 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0a9f7fb-9224-43cb-bfee-86074845c01e-config-data\") pod \"nova-cell1-cell-mapping-d48lr\" (UID: \"c0a9f7fb-9224-43cb-bfee-86074845c01e\") " pod="openstack/nova-cell1-cell-mapping-d48lr" Dec 03 13:21:07 crc kubenswrapper[4986]: I1203 13:21:07.991908 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0a9f7fb-9224-43cb-bfee-86074845c01e-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-d48lr\" (UID: \"c0a9f7fb-9224-43cb-bfee-86074845c01e\") " pod="openstack/nova-cell1-cell-mapping-d48lr" Dec 03 13:21:07 crc kubenswrapper[4986]: I1203 13:21:07.991992 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0a9f7fb-9224-43cb-bfee-86074845c01e-scripts\") pod \"nova-cell1-cell-mapping-d48lr\" (UID: \"c0a9f7fb-9224-43cb-bfee-86074845c01e\") " pod="openstack/nova-cell1-cell-mapping-d48lr" Dec 03 13:21:07 crc kubenswrapper[4986]: I1203 13:21:07.992021 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvrgs\" (UniqueName: \"kubernetes.io/projected/c0a9f7fb-9224-43cb-bfee-86074845c01e-kube-api-access-hvrgs\") pod \"nova-cell1-cell-mapping-d48lr\" (UID: \"c0a9f7fb-9224-43cb-bfee-86074845c01e\") " pod="openstack/nova-cell1-cell-mapping-d48lr" Dec 03 13:21:07 crc kubenswrapper[4986]: I1203 13:21:07.997839 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/c0a9f7fb-9224-43cb-bfee-86074845c01e-scripts\") pod \"nova-cell1-cell-mapping-d48lr\" (UID: \"c0a9f7fb-9224-43cb-bfee-86074845c01e\") " pod="openstack/nova-cell1-cell-mapping-d48lr" Dec 03 13:21:07 crc kubenswrapper[4986]: I1203 13:21:07.997943 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0a9f7fb-9224-43cb-bfee-86074845c01e-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-d48lr\" (UID: \"c0a9f7fb-9224-43cb-bfee-86074845c01e\") " pod="openstack/nova-cell1-cell-mapping-d48lr" Dec 03 13:21:08 crc kubenswrapper[4986]: I1203 13:21:07.999979 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0a9f7fb-9224-43cb-bfee-86074845c01e-config-data\") pod \"nova-cell1-cell-mapping-d48lr\" (UID: \"c0a9f7fb-9224-43cb-bfee-86074845c01e\") " pod="openstack/nova-cell1-cell-mapping-d48lr" Dec 03 13:21:08 crc kubenswrapper[4986]: I1203 13:21:08.010493 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvrgs\" (UniqueName: \"kubernetes.io/projected/c0a9f7fb-9224-43cb-bfee-86074845c01e-kube-api-access-hvrgs\") pod \"nova-cell1-cell-mapping-d48lr\" (UID: \"c0a9f7fb-9224-43cb-bfee-86074845c01e\") " pod="openstack/nova-cell1-cell-mapping-d48lr" Dec 03 13:21:08 crc kubenswrapper[4986]: I1203 13:21:08.147682 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-d48lr" Dec 03 13:21:08 crc kubenswrapper[4986]: W1203 13:21:08.638813 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0a9f7fb_9224_43cb_bfee_86074845c01e.slice/crio-f99314078b693efacb5e826dbf235286e4d3a4698615fcc1edd6d7962809d17d WatchSource:0}: Error finding container f99314078b693efacb5e826dbf235286e4d3a4698615fcc1edd6d7962809d17d: Status 404 returned error can't find the container with id f99314078b693efacb5e826dbf235286e4d3a4698615fcc1edd6d7962809d17d Dec 03 13:21:08 crc kubenswrapper[4986]: I1203 13:21:08.639171 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743","Type":"ContainerStarted","Data":"352ddaee5de0987e37f56e6a1d5702bba946e7626c86ba1b23aabc138445a07a"} Dec 03 13:21:08 crc kubenswrapper[4986]: I1203 13:21:08.639530 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743","Type":"ContainerStarted","Data":"6a86d74456c0572552c0f9577c658c623237add126633aedb49e39fc8aed6ccd"} Dec 03 13:21:08 crc kubenswrapper[4986]: I1203 13:21:08.640467 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-d48lr"] Dec 03 13:21:08 crc kubenswrapper[4986]: I1203 13:21:08.647220 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"582543a7-c25a-41d6-a8c0-efaa205d5e4d","Type":"ContainerStarted","Data":"c953920a6b9a80cc358581662e1cb5536d3fcb31220b23e8793ba8cf93399e25"} Dec 03 13:21:08 crc kubenswrapper[4986]: I1203 13:21:08.647254 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"582543a7-c25a-41d6-a8c0-efaa205d5e4d","Type":"ContainerStarted","Data":"ccf422c1c403f58cfa717d472f48f439d8f3f335adc4391e69433d4e39d19175"} Dec 03 13:21:08 crc 
kubenswrapper[4986]: I1203 13:21:08.673317 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.673297115 podStartE2EDuration="2.673297115s" podCreationTimestamp="2025-12-03 13:21:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:21:08.669911513 +0000 UTC m=+1528.136342714" watchObservedRunningTime="2025-12-03 13:21:08.673297115 +0000 UTC m=+1528.139728306" Dec 03 13:21:09 crc kubenswrapper[4986]: I1203 13:21:09.035248 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-c7s6k" Dec 03 13:21:09 crc kubenswrapper[4986]: I1203 13:21:09.035324 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-c7s6k" Dec 03 13:21:09 crc kubenswrapper[4986]: I1203 13:21:09.097711 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-c7s6k" Dec 03 13:21:09 crc kubenswrapper[4986]: I1203 13:21:09.659188 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-d48lr" event={"ID":"c0a9f7fb-9224-43cb-bfee-86074845c01e","Type":"ContainerStarted","Data":"3e7f0d3cda8cd5ce16a068b6deedc145ea1343ab88db1faae1d4c55643dc2d08"} Dec 03 13:21:09 crc kubenswrapper[4986]: I1203 13:21:09.659649 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-d48lr" event={"ID":"c0a9f7fb-9224-43cb-bfee-86074845c01e","Type":"ContainerStarted","Data":"f99314078b693efacb5e826dbf235286e4d3a4698615fcc1edd6d7962809d17d"} Dec 03 13:21:09 crc kubenswrapper[4986]: I1203 13:21:09.677776 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-d48lr" podStartSLOduration=2.677760939 podStartE2EDuration="2.677760939s" 
podCreationTimestamp="2025-12-03 13:21:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:21:09.675320732 +0000 UTC m=+1529.141751963" watchObservedRunningTime="2025-12-03 13:21:09.677760939 +0000 UTC m=+1529.144192130" Dec 03 13:21:09 crc kubenswrapper[4986]: I1203 13:21:09.711981 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-c7s6k" Dec 03 13:21:09 crc kubenswrapper[4986]: I1203 13:21:09.767671 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c7s6k"] Dec 03 13:21:10 crc kubenswrapper[4986]: I1203 13:21:10.151099 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59cf4bdb65-gkvxp" Dec 03 13:21:10 crc kubenswrapper[4986]: I1203 13:21:10.254725 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-447fh"] Dec 03 13:21:10 crc kubenswrapper[4986]: I1203 13:21:10.255086 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-845d6d6f59-447fh" podUID="6bba352c-934e-4070-9722-dc0567a2e0be" containerName="dnsmasq-dns" containerID="cri-o://372be5d3e87cafb688f501de584b3f1a3d2af7b2678db6d98a88e494cfb5de0b" gracePeriod=10 Dec 03 13:21:10 crc kubenswrapper[4986]: I1203 13:21:10.668868 4986 generic.go:334] "Generic (PLEG): container finished" podID="6bba352c-934e-4070-9722-dc0567a2e0be" containerID="372be5d3e87cafb688f501de584b3f1a3d2af7b2678db6d98a88e494cfb5de0b" exitCode=0 Dec 03 13:21:10 crc kubenswrapper[4986]: I1203 13:21:10.669181 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-447fh" event={"ID":"6bba352c-934e-4070-9722-dc0567a2e0be","Type":"ContainerDied","Data":"372be5d3e87cafb688f501de584b3f1a3d2af7b2678db6d98a88e494cfb5de0b"} Dec 03 13:21:10 crc kubenswrapper[4986]: 
I1203 13:21:10.672516 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"582543a7-c25a-41d6-a8c0-efaa205d5e4d","Type":"ContainerStarted","Data":"4b078001075ed9c92bb7165315761431915048e864bfe08c01d40623fa8af235"} Dec 03 13:21:10 crc kubenswrapper[4986]: I1203 13:21:10.672617 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 03 13:21:10 crc kubenswrapper[4986]: I1203 13:21:10.672634 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="582543a7-c25a-41d6-a8c0-efaa205d5e4d" containerName="ceilometer-central-agent" containerID="cri-o://cbe7ac73a1caef1d108d2217e2002a518489e634a9f1c66ad411dcbd98bfab2f" gracePeriod=30 Dec 03 13:21:10 crc kubenswrapper[4986]: I1203 13:21:10.672682 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="582543a7-c25a-41d6-a8c0-efaa205d5e4d" containerName="sg-core" containerID="cri-o://c953920a6b9a80cc358581662e1cb5536d3fcb31220b23e8793ba8cf93399e25" gracePeriod=30 Dec 03 13:21:10 crc kubenswrapper[4986]: I1203 13:21:10.672767 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="582543a7-c25a-41d6-a8c0-efaa205d5e4d" containerName="proxy-httpd" containerID="cri-o://4b078001075ed9c92bb7165315761431915048e864bfe08c01d40623fa8af235" gracePeriod=30 Dec 03 13:21:10 crc kubenswrapper[4986]: I1203 13:21:10.672776 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="582543a7-c25a-41d6-a8c0-efaa205d5e4d" containerName="ceilometer-notification-agent" containerID="cri-o://ccf422c1c403f58cfa717d472f48f439d8f3f335adc4391e69433d4e39d19175" gracePeriod=30 Dec 03 13:21:10 crc kubenswrapper[4986]: I1203 13:21:10.702800 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" 
podStartSLOduration=2.552842347 podStartE2EDuration="6.702779357s" podCreationTimestamp="2025-12-03 13:21:04 +0000 UTC" firstStartedPulling="2025-12-03 13:21:05.651095375 +0000 UTC m=+1525.117526586" lastFinishedPulling="2025-12-03 13:21:09.801032405 +0000 UTC m=+1529.267463596" observedRunningTime="2025-12-03 13:21:10.695230494 +0000 UTC m=+1530.161661695" watchObservedRunningTime="2025-12-03 13:21:10.702779357 +0000 UTC m=+1530.169210538" Dec 03 13:21:10 crc kubenswrapper[4986]: I1203 13:21:10.862166 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-447fh" Dec 03 13:21:10 crc kubenswrapper[4986]: I1203 13:21:10.948805 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsxhd\" (UniqueName: \"kubernetes.io/projected/6bba352c-934e-4070-9722-dc0567a2e0be-kube-api-access-dsxhd\") pod \"6bba352c-934e-4070-9722-dc0567a2e0be\" (UID: \"6bba352c-934e-4070-9722-dc0567a2e0be\") " Dec 03 13:21:10 crc kubenswrapper[4986]: I1203 13:21:10.948909 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6bba352c-934e-4070-9722-dc0567a2e0be-ovsdbserver-nb\") pod \"6bba352c-934e-4070-9722-dc0567a2e0be\" (UID: \"6bba352c-934e-4070-9722-dc0567a2e0be\") " Dec 03 13:21:10 crc kubenswrapper[4986]: I1203 13:21:10.948982 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6bba352c-934e-4070-9722-dc0567a2e0be-dns-swift-storage-0\") pod \"6bba352c-934e-4070-9722-dc0567a2e0be\" (UID: \"6bba352c-934e-4070-9722-dc0567a2e0be\") " Dec 03 13:21:10 crc kubenswrapper[4986]: I1203 13:21:10.949033 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bba352c-934e-4070-9722-dc0567a2e0be-config\") pod 
\"6bba352c-934e-4070-9722-dc0567a2e0be\" (UID: \"6bba352c-934e-4070-9722-dc0567a2e0be\") " Dec 03 13:21:10 crc kubenswrapper[4986]: I1203 13:21:10.949133 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6bba352c-934e-4070-9722-dc0567a2e0be-ovsdbserver-sb\") pod \"6bba352c-934e-4070-9722-dc0567a2e0be\" (UID: \"6bba352c-934e-4070-9722-dc0567a2e0be\") " Dec 03 13:21:10 crc kubenswrapper[4986]: I1203 13:21:10.949227 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bba352c-934e-4070-9722-dc0567a2e0be-dns-svc\") pod \"6bba352c-934e-4070-9722-dc0567a2e0be\" (UID: \"6bba352c-934e-4070-9722-dc0567a2e0be\") " Dec 03 13:21:10 crc kubenswrapper[4986]: I1203 13:21:10.970502 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bba352c-934e-4070-9722-dc0567a2e0be-kube-api-access-dsxhd" (OuterVolumeSpecName: "kube-api-access-dsxhd") pod "6bba352c-934e-4070-9722-dc0567a2e0be" (UID: "6bba352c-934e-4070-9722-dc0567a2e0be"). InnerVolumeSpecName "kube-api-access-dsxhd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:21:11 crc kubenswrapper[4986]: I1203 13:21:11.019753 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bba352c-934e-4070-9722-dc0567a2e0be-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6bba352c-934e-4070-9722-dc0567a2e0be" (UID: "6bba352c-934e-4070-9722-dc0567a2e0be"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:21:11 crc kubenswrapper[4986]: I1203 13:21:11.026190 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bba352c-934e-4070-9722-dc0567a2e0be-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6bba352c-934e-4070-9722-dc0567a2e0be" (UID: "6bba352c-934e-4070-9722-dc0567a2e0be"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:21:11 crc kubenswrapper[4986]: I1203 13:21:11.032484 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bba352c-934e-4070-9722-dc0567a2e0be-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6bba352c-934e-4070-9722-dc0567a2e0be" (UID: "6bba352c-934e-4070-9722-dc0567a2e0be"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:21:11 crc kubenswrapper[4986]: I1203 13:21:11.033499 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bba352c-934e-4070-9722-dc0567a2e0be-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6bba352c-934e-4070-9722-dc0567a2e0be" (UID: "6bba352c-934e-4070-9722-dc0567a2e0be"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:21:11 crc kubenswrapper[4986]: I1203 13:21:11.039426 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bba352c-934e-4070-9722-dc0567a2e0be-config" (OuterVolumeSpecName: "config") pod "6bba352c-934e-4070-9722-dc0567a2e0be" (UID: "6bba352c-934e-4070-9722-dc0567a2e0be"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:21:11 crc kubenswrapper[4986]: I1203 13:21:11.051420 4986 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bba352c-934e-4070-9722-dc0567a2e0be-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 13:21:11 crc kubenswrapper[4986]: I1203 13:21:11.051447 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dsxhd\" (UniqueName: \"kubernetes.io/projected/6bba352c-934e-4070-9722-dc0567a2e0be-kube-api-access-dsxhd\") on node \"crc\" DevicePath \"\"" Dec 03 13:21:11 crc kubenswrapper[4986]: I1203 13:21:11.051458 4986 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6bba352c-934e-4070-9722-dc0567a2e0be-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 03 13:21:11 crc kubenswrapper[4986]: I1203 13:21:11.051468 4986 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6bba352c-934e-4070-9722-dc0567a2e0be-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 03 13:21:11 crc kubenswrapper[4986]: I1203 13:21:11.051478 4986 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bba352c-934e-4070-9722-dc0567a2e0be-config\") on node \"crc\" DevicePath \"\"" Dec 03 13:21:11 crc kubenswrapper[4986]: I1203 13:21:11.051487 4986 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6bba352c-934e-4070-9722-dc0567a2e0be-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 03 13:21:11 crc kubenswrapper[4986]: I1203 13:21:11.681984 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-447fh" event={"ID":"6bba352c-934e-4070-9722-dc0567a2e0be","Type":"ContainerDied","Data":"cec6aa46fa92d85ed782f356d349bed71b20b27fd19c53d4faf4b212497fb404"} Dec 03 13:21:11 crc 
kubenswrapper[4986]: I1203 13:21:11.682323 4986 scope.go:117] "RemoveContainer" containerID="372be5d3e87cafb688f501de584b3f1a3d2af7b2678db6d98a88e494cfb5de0b" Dec 03 13:21:11 crc kubenswrapper[4986]: I1203 13:21:11.682430 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-447fh" Dec 03 13:21:11 crc kubenswrapper[4986]: I1203 13:21:11.695159 4986 generic.go:334] "Generic (PLEG): container finished" podID="582543a7-c25a-41d6-a8c0-efaa205d5e4d" containerID="4b078001075ed9c92bb7165315761431915048e864bfe08c01d40623fa8af235" exitCode=0 Dec 03 13:21:11 crc kubenswrapper[4986]: I1203 13:21:11.696243 4986 generic.go:334] "Generic (PLEG): container finished" podID="582543a7-c25a-41d6-a8c0-efaa205d5e4d" containerID="c953920a6b9a80cc358581662e1cb5536d3fcb31220b23e8793ba8cf93399e25" exitCode=2 Dec 03 13:21:11 crc kubenswrapper[4986]: I1203 13:21:11.696267 4986 generic.go:334] "Generic (PLEG): container finished" podID="582543a7-c25a-41d6-a8c0-efaa205d5e4d" containerID="ccf422c1c403f58cfa717d472f48f439d8f3f335adc4391e69433d4e39d19175" exitCode=0 Dec 03 13:21:11 crc kubenswrapper[4986]: I1203 13:21:11.695206 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"582543a7-c25a-41d6-a8c0-efaa205d5e4d","Type":"ContainerDied","Data":"4b078001075ed9c92bb7165315761431915048e864bfe08c01d40623fa8af235"} Dec 03 13:21:11 crc kubenswrapper[4986]: I1203 13:21:11.696479 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"582543a7-c25a-41d6-a8c0-efaa205d5e4d","Type":"ContainerDied","Data":"c953920a6b9a80cc358581662e1cb5536d3fcb31220b23e8793ba8cf93399e25"} Dec 03 13:21:11 crc kubenswrapper[4986]: I1203 13:21:11.696499 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"582543a7-c25a-41d6-a8c0-efaa205d5e4d","Type":"ContainerDied","Data":"ccf422c1c403f58cfa717d472f48f439d8f3f335adc4391e69433d4e39d19175"} Dec 03 13:21:11 crc kubenswrapper[4986]: I1203 13:21:11.696524 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-c7s6k" podUID="d4f4a308-6339-4346-98f6-562cc445769e" containerName="registry-server" containerID="cri-o://e6b989323ebcad458cdb47c7bfa7b43f3e6270d1bb748b913ff8dbb623f21348" gracePeriod=2 Dec 03 13:21:11 crc kubenswrapper[4986]: I1203 13:21:11.723746 4986 scope.go:117] "RemoveContainer" containerID="411e5b5f5ae9f5af159c1022ee5b1e51476f17b63791e65ee342abcc8d776ff5" Dec 03 13:21:11 crc kubenswrapper[4986]: I1203 13:21:11.732913 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-447fh"] Dec 03 13:21:11 crc kubenswrapper[4986]: I1203 13:21:11.777559 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-447fh"] Dec 03 13:21:12 crc kubenswrapper[4986]: I1203 13:21:12.208020 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c7s6k" Dec 03 13:21:12 crc kubenswrapper[4986]: I1203 13:21:12.279362 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4f4a308-6339-4346-98f6-562cc445769e-catalog-content\") pod \"d4f4a308-6339-4346-98f6-562cc445769e\" (UID: \"d4f4a308-6339-4346-98f6-562cc445769e\") " Dec 03 13:21:12 crc kubenswrapper[4986]: I1203 13:21:12.279427 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsvtz\" (UniqueName: \"kubernetes.io/projected/d4f4a308-6339-4346-98f6-562cc445769e-kube-api-access-rsvtz\") pod \"d4f4a308-6339-4346-98f6-562cc445769e\" (UID: \"d4f4a308-6339-4346-98f6-562cc445769e\") " Dec 03 13:21:12 crc kubenswrapper[4986]: I1203 13:21:12.279595 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4f4a308-6339-4346-98f6-562cc445769e-utilities\") pod \"d4f4a308-6339-4346-98f6-562cc445769e\" (UID: \"d4f4a308-6339-4346-98f6-562cc445769e\") " Dec 03 13:21:12 crc kubenswrapper[4986]: I1203 13:21:12.280710 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4f4a308-6339-4346-98f6-562cc445769e-utilities" (OuterVolumeSpecName: "utilities") pod "d4f4a308-6339-4346-98f6-562cc445769e" (UID: "d4f4a308-6339-4346-98f6-562cc445769e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:21:12 crc kubenswrapper[4986]: I1203 13:21:12.284639 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4f4a308-6339-4346-98f6-562cc445769e-kube-api-access-rsvtz" (OuterVolumeSpecName: "kube-api-access-rsvtz") pod "d4f4a308-6339-4346-98f6-562cc445769e" (UID: "d4f4a308-6339-4346-98f6-562cc445769e"). InnerVolumeSpecName "kube-api-access-rsvtz". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 13:21:12 crc kubenswrapper[4986]: I1203 13:21:12.297648 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4f4a308-6339-4346-98f6-562cc445769e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d4f4a308-6339-4346-98f6-562cc445769e" (UID: "d4f4a308-6339-4346-98f6-562cc445769e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 13:21:12 crc kubenswrapper[4986]: I1203 13:21:12.390535 4986 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4f4a308-6339-4346-98f6-562cc445769e-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 03 13:21:12 crc kubenswrapper[4986]: I1203 13:21:12.390584 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rsvtz\" (UniqueName: \"kubernetes.io/projected/d4f4a308-6339-4346-98f6-562cc445769e-kube-api-access-rsvtz\") on node \"crc\" DevicePath \"\""
Dec 03 13:21:12 crc kubenswrapper[4986]: I1203 13:21:12.390599 4986 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4f4a308-6339-4346-98f6-562cc445769e-utilities\") on node \"crc\" DevicePath \"\""
Dec 03 13:21:12 crc kubenswrapper[4986]: I1203 13:21:12.712189 4986 generic.go:334] "Generic (PLEG): container finished" podID="d4f4a308-6339-4346-98f6-562cc445769e" containerID="e6b989323ebcad458cdb47c7bfa7b43f3e6270d1bb748b913ff8dbb623f21348" exitCode=0
Dec 03 13:21:12 crc kubenswrapper[4986]: I1203 13:21:12.712254 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c7s6k" event={"ID":"d4f4a308-6339-4346-98f6-562cc445769e","Type":"ContainerDied","Data":"e6b989323ebcad458cdb47c7bfa7b43f3e6270d1bb748b913ff8dbb623f21348"}
Dec 03 13:21:12 crc kubenswrapper[4986]: I1203 13:21:12.712424 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c7s6k" event={"ID":"d4f4a308-6339-4346-98f6-562cc445769e","Type":"ContainerDied","Data":"94050e0dd2e87d58556e8add8b113894f14801e63bc5a8379194f13caeda5e1c"}
Dec 03 13:21:12 crc kubenswrapper[4986]: I1203 13:21:12.712467 4986 scope.go:117] "RemoveContainer" containerID="e6b989323ebcad458cdb47c7bfa7b43f3e6270d1bb748b913ff8dbb623f21348"
Dec 03 13:21:12 crc kubenswrapper[4986]: I1203 13:21:12.712584 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c7s6k"
Dec 03 13:21:12 crc kubenswrapper[4986]: I1203 13:21:12.733969 4986 scope.go:117] "RemoveContainer" containerID="1e0b8f90f4efbaf0d28ed7727788fccf18f4c5b7c74b482ed665bfd8c398f631"
Dec 03 13:21:12 crc kubenswrapper[4986]: I1203 13:21:12.772657 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c7s6k"]
Dec 03 13:21:12 crc kubenswrapper[4986]: I1203 13:21:12.785913 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-c7s6k"]
Dec 03 13:21:12 crc kubenswrapper[4986]: I1203 13:21:12.796356 4986 scope.go:117] "RemoveContainer" containerID="5ece9dcc073b1f1247e0fabcd78dbd8e1ca8fbc1ee47abdd9fbaf801bd9a03f5"
Dec 03 13:21:12 crc kubenswrapper[4986]: I1203 13:21:12.839337 4986 scope.go:117] "RemoveContainer" containerID="e6b989323ebcad458cdb47c7bfa7b43f3e6270d1bb748b913ff8dbb623f21348"
Dec 03 13:21:12 crc kubenswrapper[4986]: E1203 13:21:12.839591 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6b989323ebcad458cdb47c7bfa7b43f3e6270d1bb748b913ff8dbb623f21348\": container with ID starting with e6b989323ebcad458cdb47c7bfa7b43f3e6270d1bb748b913ff8dbb623f21348 not found: ID does not exist" containerID="e6b989323ebcad458cdb47c7bfa7b43f3e6270d1bb748b913ff8dbb623f21348"
Dec 03 13:21:12 crc kubenswrapper[4986]: I1203 13:21:12.839629 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6b989323ebcad458cdb47c7bfa7b43f3e6270d1bb748b913ff8dbb623f21348"} err="failed to get container status \"e6b989323ebcad458cdb47c7bfa7b43f3e6270d1bb748b913ff8dbb623f21348\": rpc error: code = NotFound desc = could not find container \"e6b989323ebcad458cdb47c7bfa7b43f3e6270d1bb748b913ff8dbb623f21348\": container with ID starting with e6b989323ebcad458cdb47c7bfa7b43f3e6270d1bb748b913ff8dbb623f21348 not found: ID does not exist"
Dec 03 13:21:12 crc kubenswrapper[4986]: I1203 13:21:12.839654 4986 scope.go:117] "RemoveContainer" containerID="1e0b8f90f4efbaf0d28ed7727788fccf18f4c5b7c74b482ed665bfd8c398f631"
Dec 03 13:21:12 crc kubenswrapper[4986]: E1203 13:21:12.839887 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e0b8f90f4efbaf0d28ed7727788fccf18f4c5b7c74b482ed665bfd8c398f631\": container with ID starting with 1e0b8f90f4efbaf0d28ed7727788fccf18f4c5b7c74b482ed665bfd8c398f631 not found: ID does not exist" containerID="1e0b8f90f4efbaf0d28ed7727788fccf18f4c5b7c74b482ed665bfd8c398f631"
Dec 03 13:21:12 crc kubenswrapper[4986]: I1203 13:21:12.839910 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e0b8f90f4efbaf0d28ed7727788fccf18f4c5b7c74b482ed665bfd8c398f631"} err="failed to get container status \"1e0b8f90f4efbaf0d28ed7727788fccf18f4c5b7c74b482ed665bfd8c398f631\": rpc error: code = NotFound desc = could not find container \"1e0b8f90f4efbaf0d28ed7727788fccf18f4c5b7c74b482ed665bfd8c398f631\": container with ID starting with 1e0b8f90f4efbaf0d28ed7727788fccf18f4c5b7c74b482ed665bfd8c398f631 not found: ID does not exist"
Dec 03 13:21:12 crc kubenswrapper[4986]: I1203 13:21:12.839931 4986 scope.go:117] "RemoveContainer" containerID="5ece9dcc073b1f1247e0fabcd78dbd8e1ca8fbc1ee47abdd9fbaf801bd9a03f5"
Dec 03 13:21:12 crc kubenswrapper[4986]: E1203 13:21:12.840085 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ece9dcc073b1f1247e0fabcd78dbd8e1ca8fbc1ee47abdd9fbaf801bd9a03f5\": container with ID starting with 5ece9dcc073b1f1247e0fabcd78dbd8e1ca8fbc1ee47abdd9fbaf801bd9a03f5 not found: ID does not exist" containerID="5ece9dcc073b1f1247e0fabcd78dbd8e1ca8fbc1ee47abdd9fbaf801bd9a03f5"
Dec 03 13:21:12 crc kubenswrapper[4986]: I1203 13:21:12.840105 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ece9dcc073b1f1247e0fabcd78dbd8e1ca8fbc1ee47abdd9fbaf801bd9a03f5"} err="failed to get container status \"5ece9dcc073b1f1247e0fabcd78dbd8e1ca8fbc1ee47abdd9fbaf801bd9a03f5\": rpc error: code = NotFound desc = could not find container \"5ece9dcc073b1f1247e0fabcd78dbd8e1ca8fbc1ee47abdd9fbaf801bd9a03f5\": container with ID starting with 5ece9dcc073b1f1247e0fabcd78dbd8e1ca8fbc1ee47abdd9fbaf801bd9a03f5 not found: ID does not exist"
Dec 03 13:21:12 crc kubenswrapper[4986]: I1203 13:21:12.957141 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bba352c-934e-4070-9722-dc0567a2e0be" path="/var/lib/kubelet/pods/6bba352c-934e-4070-9722-dc0567a2e0be/volumes"
Dec 03 13:21:12 crc kubenswrapper[4986]: I1203 13:21:12.958005 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4f4a308-6339-4346-98f6-562cc445769e" path="/var/lib/kubelet/pods/d4f4a308-6339-4346-98f6-562cc445769e/volumes"
Dec 03 13:21:13 crc kubenswrapper[4986]: I1203 13:21:13.729933 4986 generic.go:334] "Generic (PLEG): container finished" podID="c0a9f7fb-9224-43cb-bfee-86074845c01e" containerID="3e7f0d3cda8cd5ce16a068b6deedc145ea1343ab88db1faae1d4c55643dc2d08" exitCode=0
Dec 03 13:21:13 crc kubenswrapper[4986]: I1203 13:21:13.730009 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-d48lr" event={"ID":"c0a9f7fb-9224-43cb-bfee-86074845c01e","Type":"ContainerDied","Data":"3e7f0d3cda8cd5ce16a068b6deedc145ea1343ab88db1faae1d4c55643dc2d08"}
Dec 03 13:21:14 crc kubenswrapper[4986]: I1203 13:21:14.616432 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 03 13:21:14 crc kubenswrapper[4986]: I1203 13:21:14.732906 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/582543a7-c25a-41d6-a8c0-efaa205d5e4d-combined-ca-bundle\") pod \"582543a7-c25a-41d6-a8c0-efaa205d5e4d\" (UID: \"582543a7-c25a-41d6-a8c0-efaa205d5e4d\") "
Dec 03 13:21:14 crc kubenswrapper[4986]: I1203 13:21:14.733129 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/582543a7-c25a-41d6-a8c0-efaa205d5e4d-ceilometer-tls-certs\") pod \"582543a7-c25a-41d6-a8c0-efaa205d5e4d\" (UID: \"582543a7-c25a-41d6-a8c0-efaa205d5e4d\") "
Dec 03 13:21:14 crc kubenswrapper[4986]: I1203 13:21:14.733218 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/582543a7-c25a-41d6-a8c0-efaa205d5e4d-run-httpd\") pod \"582543a7-c25a-41d6-a8c0-efaa205d5e4d\" (UID: \"582543a7-c25a-41d6-a8c0-efaa205d5e4d\") "
Dec 03 13:21:14 crc kubenswrapper[4986]: I1203 13:21:14.733303 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68mcm\" (UniqueName: \"kubernetes.io/projected/582543a7-c25a-41d6-a8c0-efaa205d5e4d-kube-api-access-68mcm\") pod \"582543a7-c25a-41d6-a8c0-efaa205d5e4d\" (UID: \"582543a7-c25a-41d6-a8c0-efaa205d5e4d\") "
Dec 03 13:21:14 crc kubenswrapper[4986]: I1203 13:21:14.733337 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/582543a7-c25a-41d6-a8c0-efaa205d5e4d-scripts\") pod \"582543a7-c25a-41d6-a8c0-efaa205d5e4d\" (UID: \"582543a7-c25a-41d6-a8c0-efaa205d5e4d\") "
Dec 03 13:21:14 crc kubenswrapper[4986]: I1203 13:21:14.733377 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/582543a7-c25a-41d6-a8c0-efaa205d5e4d-log-httpd\") pod \"582543a7-c25a-41d6-a8c0-efaa205d5e4d\" (UID: \"582543a7-c25a-41d6-a8c0-efaa205d5e4d\") "
Dec 03 13:21:14 crc kubenswrapper[4986]: I1203 13:21:14.733438 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/582543a7-c25a-41d6-a8c0-efaa205d5e4d-config-data\") pod \"582543a7-c25a-41d6-a8c0-efaa205d5e4d\" (UID: \"582543a7-c25a-41d6-a8c0-efaa205d5e4d\") "
Dec 03 13:21:14 crc kubenswrapper[4986]: I1203 13:21:14.733505 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/582543a7-c25a-41d6-a8c0-efaa205d5e4d-sg-core-conf-yaml\") pod \"582543a7-c25a-41d6-a8c0-efaa205d5e4d\" (UID: \"582543a7-c25a-41d6-a8c0-efaa205d5e4d\") "
Dec 03 13:21:14 crc kubenswrapper[4986]: I1203 13:21:14.733901 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/582543a7-c25a-41d6-a8c0-efaa205d5e4d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "582543a7-c25a-41d6-a8c0-efaa205d5e4d" (UID: "582543a7-c25a-41d6-a8c0-efaa205d5e4d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 13:21:14 crc kubenswrapper[4986]: I1203 13:21:14.734207 4986 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/582543a7-c25a-41d6-a8c0-efaa205d5e4d-run-httpd\") on node \"crc\" DevicePath \"\""
Dec 03 13:21:14 crc kubenswrapper[4986]: I1203 13:21:14.734420 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/582543a7-c25a-41d6-a8c0-efaa205d5e4d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "582543a7-c25a-41d6-a8c0-efaa205d5e4d" (UID: "582543a7-c25a-41d6-a8c0-efaa205d5e4d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 13:21:14 crc kubenswrapper[4986]: I1203 13:21:14.739166 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/582543a7-c25a-41d6-a8c0-efaa205d5e4d-kube-api-access-68mcm" (OuterVolumeSpecName: "kube-api-access-68mcm") pod "582543a7-c25a-41d6-a8c0-efaa205d5e4d" (UID: "582543a7-c25a-41d6-a8c0-efaa205d5e4d"). InnerVolumeSpecName "kube-api-access-68mcm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 13:21:14 crc kubenswrapper[4986]: I1203 13:21:14.740446 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/582543a7-c25a-41d6-a8c0-efaa205d5e4d-scripts" (OuterVolumeSpecName: "scripts") pod "582543a7-c25a-41d6-a8c0-efaa205d5e4d" (UID: "582543a7-c25a-41d6-a8c0-efaa205d5e4d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 13:21:14 crc kubenswrapper[4986]: I1203 13:21:14.744972 4986 generic.go:334] "Generic (PLEG): container finished" podID="582543a7-c25a-41d6-a8c0-efaa205d5e4d" containerID="cbe7ac73a1caef1d108d2217e2002a518489e634a9f1c66ad411dcbd98bfab2f" exitCode=0
Dec 03 13:21:14 crc kubenswrapper[4986]: I1203 13:21:14.745147 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 03 13:21:14 crc kubenswrapper[4986]: I1203 13:21:14.745820 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"582543a7-c25a-41d6-a8c0-efaa205d5e4d","Type":"ContainerDied","Data":"cbe7ac73a1caef1d108d2217e2002a518489e634a9f1c66ad411dcbd98bfab2f"}
Dec 03 13:21:14 crc kubenswrapper[4986]: I1203 13:21:14.745851 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"582543a7-c25a-41d6-a8c0-efaa205d5e4d","Type":"ContainerDied","Data":"67ee93f2bba6a4528075fdca68e50570cd743f9e589fa4b35269f689ee51f874"}
Dec 03 13:21:14 crc kubenswrapper[4986]: I1203 13:21:14.745868 4986 scope.go:117] "RemoveContainer" containerID="4b078001075ed9c92bb7165315761431915048e864bfe08c01d40623fa8af235"
Dec 03 13:21:14 crc kubenswrapper[4986]: I1203 13:21:14.799138 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/582543a7-c25a-41d6-a8c0-efaa205d5e4d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "582543a7-c25a-41d6-a8c0-efaa205d5e4d" (UID: "582543a7-c25a-41d6-a8c0-efaa205d5e4d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 13:21:14 crc kubenswrapper[4986]: I1203 13:21:14.806308 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/582543a7-c25a-41d6-a8c0-efaa205d5e4d-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "582543a7-c25a-41d6-a8c0-efaa205d5e4d" (UID: "582543a7-c25a-41d6-a8c0-efaa205d5e4d"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 13:21:14 crc kubenswrapper[4986]: I1203 13:21:14.835610 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/582543a7-c25a-41d6-a8c0-efaa205d5e4d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "582543a7-c25a-41d6-a8c0-efaa205d5e4d" (UID: "582543a7-c25a-41d6-a8c0-efaa205d5e4d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 13:21:14 crc kubenswrapper[4986]: I1203 13:21:14.836179 4986 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/582543a7-c25a-41d6-a8c0-efaa205d5e4d-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 03 13:21:14 crc kubenswrapper[4986]: I1203 13:21:14.836225 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68mcm\" (UniqueName: \"kubernetes.io/projected/582543a7-c25a-41d6-a8c0-efaa205d5e4d-kube-api-access-68mcm\") on node \"crc\" DevicePath \"\""
Dec 03 13:21:14 crc kubenswrapper[4986]: I1203 13:21:14.836240 4986 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/582543a7-c25a-41d6-a8c0-efaa205d5e4d-scripts\") on node \"crc\" DevicePath \"\""
Dec 03 13:21:14 crc kubenswrapper[4986]: I1203 13:21:14.836248 4986 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/582543a7-c25a-41d6-a8c0-efaa205d5e4d-log-httpd\") on node \"crc\" DevicePath \"\""
Dec 03 13:21:14 crc kubenswrapper[4986]: I1203 13:21:14.836256 4986 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/582543a7-c25a-41d6-a8c0-efaa205d5e4d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Dec 03 13:21:14 crc kubenswrapper[4986]: I1203 13:21:14.836264 4986 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/582543a7-c25a-41d6-a8c0-efaa205d5e4d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 03 13:21:14 crc kubenswrapper[4986]: I1203 13:21:14.871416 4986 scope.go:117] "RemoveContainer" containerID="c953920a6b9a80cc358581662e1cb5536d3fcb31220b23e8793ba8cf93399e25"
Dec 03 13:21:14 crc kubenswrapper[4986]: I1203 13:21:14.893451 4986 scope.go:117] "RemoveContainer" containerID="ccf422c1c403f58cfa717d472f48f439d8f3f335adc4391e69433d4e39d19175"
Dec 03 13:21:14 crc kubenswrapper[4986]: I1203 13:21:14.903484 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/582543a7-c25a-41d6-a8c0-efaa205d5e4d-config-data" (OuterVolumeSpecName: "config-data") pod "582543a7-c25a-41d6-a8c0-efaa205d5e4d" (UID: "582543a7-c25a-41d6-a8c0-efaa205d5e4d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 13:21:14 crc kubenswrapper[4986]: I1203 13:21:14.917510 4986 scope.go:117] "RemoveContainer" containerID="cbe7ac73a1caef1d108d2217e2002a518489e634a9f1c66ad411dcbd98bfab2f"
Dec 03 13:21:14 crc kubenswrapper[4986]: I1203 13:21:14.937353 4986 scope.go:117] "RemoveContainer" containerID="4b078001075ed9c92bb7165315761431915048e864bfe08c01d40623fa8af235"
Dec 03 13:21:14 crc kubenswrapper[4986]: E1203 13:21:14.937813 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b078001075ed9c92bb7165315761431915048e864bfe08c01d40623fa8af235\": container with ID starting with 4b078001075ed9c92bb7165315761431915048e864bfe08c01d40623fa8af235 not found: ID does not exist" containerID="4b078001075ed9c92bb7165315761431915048e864bfe08c01d40623fa8af235"
Dec 03 13:21:14 crc kubenswrapper[4986]: I1203 13:21:14.937871 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b078001075ed9c92bb7165315761431915048e864bfe08c01d40623fa8af235"} err="failed to get container status \"4b078001075ed9c92bb7165315761431915048e864bfe08c01d40623fa8af235\": rpc error: code = NotFound desc = could not find container \"4b078001075ed9c92bb7165315761431915048e864bfe08c01d40623fa8af235\": container with ID starting with 4b078001075ed9c92bb7165315761431915048e864bfe08c01d40623fa8af235 not found: ID does not exist"
Dec 03 13:21:14 crc kubenswrapper[4986]: I1203 13:21:14.937905 4986 scope.go:117] "RemoveContainer" containerID="c953920a6b9a80cc358581662e1cb5536d3fcb31220b23e8793ba8cf93399e25"
Dec 03 13:21:14 crc kubenswrapper[4986]: E1203 13:21:14.938314 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c953920a6b9a80cc358581662e1cb5536d3fcb31220b23e8793ba8cf93399e25\": container with ID starting with c953920a6b9a80cc358581662e1cb5536d3fcb31220b23e8793ba8cf93399e25 not found: ID does not exist" containerID="c953920a6b9a80cc358581662e1cb5536d3fcb31220b23e8793ba8cf93399e25"
Dec 03 13:21:14 crc kubenswrapper[4986]: I1203 13:21:14.938353 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c953920a6b9a80cc358581662e1cb5536d3fcb31220b23e8793ba8cf93399e25"} err="failed to get container status \"c953920a6b9a80cc358581662e1cb5536d3fcb31220b23e8793ba8cf93399e25\": rpc error: code = NotFound desc = could not find container \"c953920a6b9a80cc358581662e1cb5536d3fcb31220b23e8793ba8cf93399e25\": container with ID starting with c953920a6b9a80cc358581662e1cb5536d3fcb31220b23e8793ba8cf93399e25 not found: ID does not exist"
Dec 03 13:21:14 crc kubenswrapper[4986]: I1203 13:21:14.938382 4986 scope.go:117] "RemoveContainer" containerID="ccf422c1c403f58cfa717d472f48f439d8f3f335adc4391e69433d4e39d19175"
Dec 03 13:21:14 crc kubenswrapper[4986]: E1203 13:21:14.938833 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccf422c1c403f58cfa717d472f48f439d8f3f335adc4391e69433d4e39d19175\": container with ID starting with ccf422c1c403f58cfa717d472f48f439d8f3f335adc4391e69433d4e39d19175 not found: ID does not exist" containerID="ccf422c1c403f58cfa717d472f48f439d8f3f335adc4391e69433d4e39d19175"
Dec 03 13:21:14 crc kubenswrapper[4986]: I1203 13:21:14.938876 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccf422c1c403f58cfa717d472f48f439d8f3f335adc4391e69433d4e39d19175"} err="failed to get container status \"ccf422c1c403f58cfa717d472f48f439d8f3f335adc4391e69433d4e39d19175\": rpc error: code = NotFound desc = could not find container \"ccf422c1c403f58cfa717d472f48f439d8f3f335adc4391e69433d4e39d19175\": container with ID starting with ccf422c1c403f58cfa717d472f48f439d8f3f335adc4391e69433d4e39d19175 not found: ID does not exist"
Dec 03 13:21:14 crc kubenswrapper[4986]: I1203 13:21:14.938907 4986 scope.go:117] "RemoveContainer" containerID="cbe7ac73a1caef1d108d2217e2002a518489e634a9f1c66ad411dcbd98bfab2f"
Dec 03 13:21:14 crc kubenswrapper[4986]: E1203 13:21:14.939271 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbe7ac73a1caef1d108d2217e2002a518489e634a9f1c66ad411dcbd98bfab2f\": container with ID starting with cbe7ac73a1caef1d108d2217e2002a518489e634a9f1c66ad411dcbd98bfab2f not found: ID does not exist" containerID="cbe7ac73a1caef1d108d2217e2002a518489e634a9f1c66ad411dcbd98bfab2f"
Dec 03 13:21:14 crc kubenswrapper[4986]: I1203 13:21:14.939314 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbe7ac73a1caef1d108d2217e2002a518489e634a9f1c66ad411dcbd98bfab2f"} err="failed to get container status \"cbe7ac73a1caef1d108d2217e2002a518489e634a9f1c66ad411dcbd98bfab2f\": rpc error: code = NotFound desc = could not find container \"cbe7ac73a1caef1d108d2217e2002a518489e634a9f1c66ad411dcbd98bfab2f\": container with ID starting with cbe7ac73a1caef1d108d2217e2002a518489e634a9f1c66ad411dcbd98bfab2f not found: ID does not exist"
Dec 03 13:21:14 crc kubenswrapper[4986]: I1203 13:21:14.939392 4986 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/582543a7-c25a-41d6-a8c0-efaa205d5e4d-config-data\") on node \"crc\" DevicePath \"\""
Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.012911 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-d48lr"
Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.075393 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.091923 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.125155 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Dec 03 13:21:15 crc kubenswrapper[4986]: E1203 13:21:15.126247 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0a9f7fb-9224-43cb-bfee-86074845c01e" containerName="nova-manage"
Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.126270 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0a9f7fb-9224-43cb-bfee-86074845c01e" containerName="nova-manage"
Dec 03 13:21:15 crc kubenswrapper[4986]: E1203 13:21:15.126306 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="582543a7-c25a-41d6-a8c0-efaa205d5e4d" containerName="ceilometer-central-agent"
Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.126320 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="582543a7-c25a-41d6-a8c0-efaa205d5e4d" containerName="ceilometer-central-agent"
Dec 03 13:21:15 crc kubenswrapper[4986]: E1203 13:21:15.126347 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4f4a308-6339-4346-98f6-562cc445769e" containerName="extract-content"
Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.126355 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4f4a308-6339-4346-98f6-562cc445769e" containerName="extract-content"
Dec 03 13:21:15 crc kubenswrapper[4986]: E1203 13:21:15.126372 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4f4a308-6339-4346-98f6-562cc445769e" containerName="registry-server"
Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.126379 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4f4a308-6339-4346-98f6-562cc445769e" containerName="registry-server"
Dec 03 13:21:15 crc kubenswrapper[4986]: E1203 13:21:15.126412 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="582543a7-c25a-41d6-a8c0-efaa205d5e4d" containerName="sg-core"
Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.126420 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="582543a7-c25a-41d6-a8c0-efaa205d5e4d" containerName="sg-core"
Dec 03 13:21:15 crc kubenswrapper[4986]: E1203 13:21:15.126443 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bba352c-934e-4070-9722-dc0567a2e0be" containerName="init"
Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.126451 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bba352c-934e-4070-9722-dc0567a2e0be" containerName="init"
Dec 03 13:21:15 crc kubenswrapper[4986]: E1203 13:21:15.126520 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="582543a7-c25a-41d6-a8c0-efaa205d5e4d" containerName="ceilometer-notification-agent"
Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.126531 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="582543a7-c25a-41d6-a8c0-efaa205d5e4d" containerName="ceilometer-notification-agent"
Dec 03 13:21:15 crc kubenswrapper[4986]: E1203 13:21:15.126540 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="582543a7-c25a-41d6-a8c0-efaa205d5e4d" containerName="proxy-httpd"
Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.126552 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="582543a7-c25a-41d6-a8c0-efaa205d5e4d" containerName="proxy-httpd"
Dec 03 13:21:15 crc kubenswrapper[4986]: E1203 13:21:15.126564 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4f4a308-6339-4346-98f6-562cc445769e" containerName="extract-utilities"
Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.126571 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4f4a308-6339-4346-98f6-562cc445769e" containerName="extract-utilities"
Dec 03 13:21:15 crc kubenswrapper[4986]: E1203 13:21:15.126597 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bba352c-934e-4070-9722-dc0567a2e0be" containerName="dnsmasq-dns"
Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.126605 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bba352c-934e-4070-9722-dc0567a2e0be" containerName="dnsmasq-dns"
Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.127057 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="582543a7-c25a-41d6-a8c0-efaa205d5e4d" containerName="proxy-httpd"
Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.127082 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="582543a7-c25a-41d6-a8c0-efaa205d5e4d" containerName="ceilometer-central-agent"
Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.127222 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0a9f7fb-9224-43cb-bfee-86074845c01e" containerName="nova-manage"
Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.127323 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bba352c-934e-4070-9722-dc0567a2e0be" containerName="dnsmasq-dns"
Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.127360 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="582543a7-c25a-41d6-a8c0-efaa205d5e4d" containerName="ceilometer-notification-agent"
Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.127396 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="582543a7-c25a-41d6-a8c0-efaa205d5e4d" containerName="sg-core"
Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.127419 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4f4a308-6339-4346-98f6-562cc445769e" containerName="registry-server"
Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.132578 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.134938 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.135132 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.135472 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.141981 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.142035 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0a9f7fb-9224-43cb-bfee-86074845c01e-combined-ca-bundle\") pod \"c0a9f7fb-9224-43cb-bfee-86074845c01e\" (UID: \"c0a9f7fb-9224-43cb-bfee-86074845c01e\") "
Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.142067 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvrgs\" (UniqueName: \"kubernetes.io/projected/c0a9f7fb-9224-43cb-bfee-86074845c01e-kube-api-access-hvrgs\") pod \"c0a9f7fb-9224-43cb-bfee-86074845c01e\" (UID: \"c0a9f7fb-9224-43cb-bfee-86074845c01e\") "
Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.142130 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0a9f7fb-9224-43cb-bfee-86074845c01e-config-data\") pod \"c0a9f7fb-9224-43cb-bfee-86074845c01e\" (UID: \"c0a9f7fb-9224-43cb-bfee-86074845c01e\") "
Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.142191 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0a9f7fb-9224-43cb-bfee-86074845c01e-scripts\") pod \"c0a9f7fb-9224-43cb-bfee-86074845c01e\" (UID: \"c0a9f7fb-9224-43cb-bfee-86074845c01e\") "
Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.147602 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0a9f7fb-9224-43cb-bfee-86074845c01e-kube-api-access-hvrgs" (OuterVolumeSpecName: "kube-api-access-hvrgs") pod "c0a9f7fb-9224-43cb-bfee-86074845c01e" (UID: "c0a9f7fb-9224-43cb-bfee-86074845c01e"). InnerVolumeSpecName "kube-api-access-hvrgs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.157874 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0a9f7fb-9224-43cb-bfee-86074845c01e-scripts" (OuterVolumeSpecName: "scripts") pod "c0a9f7fb-9224-43cb-bfee-86074845c01e" (UID: "c0a9f7fb-9224-43cb-bfee-86074845c01e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.183338 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0a9f7fb-9224-43cb-bfee-86074845c01e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c0a9f7fb-9224-43cb-bfee-86074845c01e" (UID: "c0a9f7fb-9224-43cb-bfee-86074845c01e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.183356 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0a9f7fb-9224-43cb-bfee-86074845c01e-config-data" (OuterVolumeSpecName: "config-data") pod "c0a9f7fb-9224-43cb-bfee-86074845c01e" (UID: "c0a9f7fb-9224-43cb-bfee-86074845c01e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.244646 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc0fd902-4092-4036-b7ba-1b6ede68bf04-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bc0fd902-4092-4036-b7ba-1b6ede68bf04\") " pod="openstack/ceilometer-0"
Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.244896 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc0fd902-4092-4036-b7ba-1b6ede68bf04-log-httpd\") pod \"ceilometer-0\" (UID: \"bc0fd902-4092-4036-b7ba-1b6ede68bf04\") " pod="openstack/ceilometer-0"
Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.245013 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bc0fd902-4092-4036-b7ba-1b6ede68bf04-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bc0fd902-4092-4036-b7ba-1b6ede68bf04\") " pod="openstack/ceilometer-0"
Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.245130 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc0fd902-4092-4036-b7ba-1b6ede68bf04-scripts\") pod \"ceilometer-0\" (UID: \"bc0fd902-4092-4036-b7ba-1b6ede68bf04\") " pod="openstack/ceilometer-0"
Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.245221 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc0fd902-4092-4036-b7ba-1b6ede68bf04-run-httpd\") pod \"ceilometer-0\" (UID: \"bc0fd902-4092-4036-b7ba-1b6ede68bf04\") " pod="openstack/ceilometer-0"
Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.245362 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr8wc\" (UniqueName: \"kubernetes.io/projected/bc0fd902-4092-4036-b7ba-1b6ede68bf04-kube-api-access-zr8wc\") pod \"ceilometer-0\" (UID: \"bc0fd902-4092-4036-b7ba-1b6ede68bf04\") " pod="openstack/ceilometer-0"
Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.245497 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc0fd902-4092-4036-b7ba-1b6ede68bf04-config-data\") pod \"ceilometer-0\" (UID: \"bc0fd902-4092-4036-b7ba-1b6ede68bf04\") " pod="openstack/ceilometer-0"
Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.245582 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc0fd902-4092-4036-b7ba-1b6ede68bf04-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bc0fd902-4092-4036-b7ba-1b6ede68bf04\") " pod="openstack/ceilometer-0"
Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.245685 4986 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0a9f7fb-9224-43cb-bfee-86074845c01e-config-data\") on node \"crc\" DevicePath \"\""
Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.245757 4986 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0a9f7fb-9224-43cb-bfee-86074845c01e-scripts\") on node \"crc\" DevicePath \"\""
Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.245818 4986 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0a9f7fb-9224-43cb-bfee-86074845c01e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.245876 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvrgs\" (UniqueName: \"kubernetes.io/projected/c0a9f7fb-9224-43cb-bfee-86074845c01e-kube-api-access-hvrgs\") on node \"crc\" DevicePath \"\""
Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.347570 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc0fd902-4092-4036-b7ba-1b6ede68bf04-config-data\") pod \"ceilometer-0\" (UID: \"bc0fd902-4092-4036-b7ba-1b6ede68bf04\") " pod="openstack/ceilometer-0"
Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.349353 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc0fd902-4092-4036-b7ba-1b6ede68bf04-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bc0fd902-4092-4036-b7ba-1b6ede68bf04\") " pod="openstack/ceilometer-0"
Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.350439 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc0fd902-4092-4036-b7ba-1b6ede68bf04-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bc0fd902-4092-4036-b7ba-1b6ede68bf04\") " pod="openstack/ceilometer-0"
Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.350710 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc0fd902-4092-4036-b7ba-1b6ede68bf04-log-httpd\") pod \"ceilometer-0\" (UID: \"bc0fd902-4092-4036-b7ba-1b6ede68bf04\") " pod="openstack/ceilometer-0"
Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.350884 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bc0fd902-4092-4036-b7ba-1b6ede68bf04-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bc0fd902-4092-4036-b7ba-1b6ede68bf04\") " pod="openstack/ceilometer-0"
Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.356054 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc0fd902-4092-4036-b7ba-1b6ede68bf04-scripts\") pod \"ceilometer-0\" (UID: \"bc0fd902-4092-4036-b7ba-1b6ede68bf04\") " pod="openstack/ceilometer-0"
Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.356816 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc0fd902-4092-4036-b7ba-1b6ede68bf04-run-httpd\") pod \"ceilometer-0\" (UID: \"bc0fd902-4092-4036-b7ba-1b6ede68bf04\") " pod="openstack/ceilometer-0"
Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.354334 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc0fd902-4092-4036-b7ba-1b6ede68bf04-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bc0fd902-4092-4036-b7ba-1b6ede68bf04\") " pod="openstack/ceilometer-0"
Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.355495 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc0fd902-4092-4036-b7ba-1b6ede68bf04-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bc0fd902-4092-4036-b7ba-1b6ede68bf04\") " pod="openstack/ceilometer-0"
Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.355870 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc0fd902-4092-4036-b7ba-1b6ede68bf04-config-data\") pod \"ceilometer-0\"
(UID: \"bc0fd902-4092-4036-b7ba-1b6ede68bf04\") " pod="openstack/ceilometer-0" Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.351335 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc0fd902-4092-4036-b7ba-1b6ede68bf04-log-httpd\") pod \"ceilometer-0\" (UID: \"bc0fd902-4092-4036-b7ba-1b6ede68bf04\") " pod="openstack/ceilometer-0" Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.354219 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bc0fd902-4092-4036-b7ba-1b6ede68bf04-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bc0fd902-4092-4036-b7ba-1b6ede68bf04\") " pod="openstack/ceilometer-0" Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.357107 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zr8wc\" (UniqueName: \"kubernetes.io/projected/bc0fd902-4092-4036-b7ba-1b6ede68bf04-kube-api-access-zr8wc\") pod \"ceilometer-0\" (UID: \"bc0fd902-4092-4036-b7ba-1b6ede68bf04\") " pod="openstack/ceilometer-0" Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.357382 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc0fd902-4092-4036-b7ba-1b6ede68bf04-run-httpd\") pod \"ceilometer-0\" (UID: \"bc0fd902-4092-4036-b7ba-1b6ede68bf04\") " pod="openstack/ceilometer-0" Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.363275 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc0fd902-4092-4036-b7ba-1b6ede68bf04-scripts\") pod \"ceilometer-0\" (UID: \"bc0fd902-4092-4036-b7ba-1b6ede68bf04\") " pod="openstack/ceilometer-0" Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.374477 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zr8wc\" (UniqueName: 
\"kubernetes.io/projected/bc0fd902-4092-4036-b7ba-1b6ede68bf04-kube-api-access-zr8wc\") pod \"ceilometer-0\" (UID: \"bc0fd902-4092-4036-b7ba-1b6ede68bf04\") " pod="openstack/ceilometer-0" Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.456714 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.755939 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-d48lr" event={"ID":"c0a9f7fb-9224-43cb-bfee-86074845c01e","Type":"ContainerDied","Data":"f99314078b693efacb5e826dbf235286e4d3a4698615fcc1edd6d7962809d17d"} Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.755994 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f99314078b693efacb5e826dbf235286e4d3a4698615fcc1edd6d7962809d17d" Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.756111 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-d48lr" Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.899204 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.945134 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.945415 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743" containerName="nova-api-log" containerID="cri-o://6a86d74456c0572552c0f9577c658c623237add126633aedb49e39fc8aed6ccd" gracePeriod=30 Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.945521 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743" containerName="nova-api-api" containerID="cri-o://352ddaee5de0987e37f56e6a1d5702bba946e7626c86ba1b23aabc138445a07a" gracePeriod=30 Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.971731 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.971948 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="96ddaceb-69ae-45af-92b2-511ccc92f2c2" containerName="nova-scheduler-scheduler" containerID="cri-o://3bd9833a0c6c9a11353bfaf94bb93ec4574d2fa5b6f4fb0282fa315c48f53387" gracePeriod=30 Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.997198 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.997621 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0b17a506-eb92-4170-a2cf-0a563e427197" containerName="nova-metadata-log" 
containerID="cri-o://1140fadaa85eaee0fbdb38e87caea1583d6a3f7bebe8af59868412337b5fa1aa" gracePeriod=30 Dec 03 13:21:15 crc kubenswrapper[4986]: I1203 13:21:15.997663 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0b17a506-eb92-4170-a2cf-0a563e427197" containerName="nova-metadata-metadata" containerID="cri-o://d913854847519427e500aca4536d98af73991ac62f6d1f3106d55f98a92ad271" gracePeriod=30 Dec 03 13:21:16 crc kubenswrapper[4986]: I1203 13:21:16.546233 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 03 13:21:16 crc kubenswrapper[4986]: I1203 13:21:16.680187 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743-logs\") pod \"1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743\" (UID: \"1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743\") " Dec 03 13:21:16 crc kubenswrapper[4986]: I1203 13:21:16.680455 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743-combined-ca-bundle\") pod \"1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743\" (UID: \"1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743\") " Dec 03 13:21:16 crc kubenswrapper[4986]: I1203 13:21:16.680487 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743-public-tls-certs\") pod \"1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743\" (UID: \"1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743\") " Dec 03 13:21:16 crc kubenswrapper[4986]: I1203 13:21:16.680557 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743-config-data\") pod \"1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743\" (UID: 
\"1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743\") " Dec 03 13:21:16 crc kubenswrapper[4986]: I1203 13:21:16.680583 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743-internal-tls-certs\") pod \"1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743\" (UID: \"1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743\") " Dec 03 13:21:16 crc kubenswrapper[4986]: I1203 13:21:16.680622 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgcft\" (UniqueName: \"kubernetes.io/projected/1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743-kube-api-access-pgcft\") pod \"1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743\" (UID: \"1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743\") " Dec 03 13:21:16 crc kubenswrapper[4986]: I1203 13:21:16.680684 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743-logs" (OuterVolumeSpecName: "logs") pod "1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743" (UID: "1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:21:16 crc kubenswrapper[4986]: I1203 13:21:16.681025 4986 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743-logs\") on node \"crc\" DevicePath \"\"" Dec 03 13:21:16 crc kubenswrapper[4986]: I1203 13:21:16.688231 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743-kube-api-access-pgcft" (OuterVolumeSpecName: "kube-api-access-pgcft") pod "1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743" (UID: "1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743"). InnerVolumeSpecName "kube-api-access-pgcft". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:21:16 crc kubenswrapper[4986]: I1203 13:21:16.710563 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743-config-data" (OuterVolumeSpecName: "config-data") pod "1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743" (UID: "1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:21:16 crc kubenswrapper[4986]: I1203 13:21:16.711205 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743" (UID: "1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:21:16 crc kubenswrapper[4986]: I1203 13:21:16.736550 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743" (UID: "1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:21:16 crc kubenswrapper[4986]: I1203 13:21:16.747481 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743" (UID: "1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:21:16 crc kubenswrapper[4986]: I1203 13:21:16.768214 4986 generic.go:334] "Generic (PLEG): container finished" podID="1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743" containerID="352ddaee5de0987e37f56e6a1d5702bba946e7626c86ba1b23aabc138445a07a" exitCode=0 Dec 03 13:21:16 crc kubenswrapper[4986]: I1203 13:21:16.768245 4986 generic.go:334] "Generic (PLEG): container finished" podID="1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743" containerID="6a86d74456c0572552c0f9577c658c623237add126633aedb49e39fc8aed6ccd" exitCode=143 Dec 03 13:21:16 crc kubenswrapper[4986]: I1203 13:21:16.768272 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743","Type":"ContainerDied","Data":"352ddaee5de0987e37f56e6a1d5702bba946e7626c86ba1b23aabc138445a07a"} Dec 03 13:21:16 crc kubenswrapper[4986]: I1203 13:21:16.768322 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 03 13:21:16 crc kubenswrapper[4986]: I1203 13:21:16.768352 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743","Type":"ContainerDied","Data":"6a86d74456c0572552c0f9577c658c623237add126633aedb49e39fc8aed6ccd"} Dec 03 13:21:16 crc kubenswrapper[4986]: I1203 13:21:16.768368 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743","Type":"ContainerDied","Data":"2914f06d5f2dedc94cfe55e449cc967fa6ff41d22783bd57d17afe1e695a552e"} Dec 03 13:21:16 crc kubenswrapper[4986]: I1203 13:21:16.768387 4986 scope.go:117] "RemoveContainer" containerID="352ddaee5de0987e37f56e6a1d5702bba946e7626c86ba1b23aabc138445a07a" Dec 03 13:21:16 crc kubenswrapper[4986]: I1203 13:21:16.771309 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"bc0fd902-4092-4036-b7ba-1b6ede68bf04","Type":"ContainerStarted","Data":"06678d0cb269307f5e1a4aed65a2915c0b0bf9259cc756a53c7f9d6415fa5cf8"} Dec 03 13:21:16 crc kubenswrapper[4986]: I1203 13:21:16.774376 4986 generic.go:334] "Generic (PLEG): container finished" podID="0b17a506-eb92-4170-a2cf-0a563e427197" containerID="1140fadaa85eaee0fbdb38e87caea1583d6a3f7bebe8af59868412337b5fa1aa" exitCode=143 Dec 03 13:21:16 crc kubenswrapper[4986]: I1203 13:21:16.774418 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0b17a506-eb92-4170-a2cf-0a563e427197","Type":"ContainerDied","Data":"1140fadaa85eaee0fbdb38e87caea1583d6a3f7bebe8af59868412337b5fa1aa"} Dec 03 13:21:16 crc kubenswrapper[4986]: I1203 13:21:16.782943 4986 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 13:21:16 crc kubenswrapper[4986]: I1203 13:21:16.782976 4986 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 03 13:21:16 crc kubenswrapper[4986]: I1203 13:21:16.782985 4986 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 13:21:16 crc kubenswrapper[4986]: I1203 13:21:16.782993 4986 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 03 13:21:16 crc kubenswrapper[4986]: I1203 13:21:16.783001 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgcft\" (UniqueName: 
\"kubernetes.io/projected/1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743-kube-api-access-pgcft\") on node \"crc\" DevicePath \"\"" Dec 03 13:21:16 crc kubenswrapper[4986]: I1203 13:21:16.818240 4986 scope.go:117] "RemoveContainer" containerID="6a86d74456c0572552c0f9577c658c623237add126633aedb49e39fc8aed6ccd" Dec 03 13:21:16 crc kubenswrapper[4986]: I1203 13:21:16.818719 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 03 13:21:16 crc kubenswrapper[4986]: I1203 13:21:16.839838 4986 scope.go:117] "RemoveContainer" containerID="352ddaee5de0987e37f56e6a1d5702bba946e7626c86ba1b23aabc138445a07a" Dec 03 13:21:16 crc kubenswrapper[4986]: E1203 13:21:16.840305 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"352ddaee5de0987e37f56e6a1d5702bba946e7626c86ba1b23aabc138445a07a\": container with ID starting with 352ddaee5de0987e37f56e6a1d5702bba946e7626c86ba1b23aabc138445a07a not found: ID does not exist" containerID="352ddaee5de0987e37f56e6a1d5702bba946e7626c86ba1b23aabc138445a07a" Dec 03 13:21:16 crc kubenswrapper[4986]: I1203 13:21:16.840350 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"352ddaee5de0987e37f56e6a1d5702bba946e7626c86ba1b23aabc138445a07a"} err="failed to get container status \"352ddaee5de0987e37f56e6a1d5702bba946e7626c86ba1b23aabc138445a07a\": rpc error: code = NotFound desc = could not find container \"352ddaee5de0987e37f56e6a1d5702bba946e7626c86ba1b23aabc138445a07a\": container with ID starting with 352ddaee5de0987e37f56e6a1d5702bba946e7626c86ba1b23aabc138445a07a not found: ID does not exist" Dec 03 13:21:16 crc kubenswrapper[4986]: I1203 13:21:16.840382 4986 scope.go:117] "RemoveContainer" containerID="6a86d74456c0572552c0f9577c658c623237add126633aedb49e39fc8aed6ccd" Dec 03 13:21:16 crc kubenswrapper[4986]: E1203 13:21:16.840682 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"6a86d74456c0572552c0f9577c658c623237add126633aedb49e39fc8aed6ccd\": container with ID starting with 6a86d74456c0572552c0f9577c658c623237add126633aedb49e39fc8aed6ccd not found: ID does not exist" containerID="6a86d74456c0572552c0f9577c658c623237add126633aedb49e39fc8aed6ccd" Dec 03 13:21:16 crc kubenswrapper[4986]: I1203 13:21:16.840733 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a86d74456c0572552c0f9577c658c623237add126633aedb49e39fc8aed6ccd"} err="failed to get container status \"6a86d74456c0572552c0f9577c658c623237add126633aedb49e39fc8aed6ccd\": rpc error: code = NotFound desc = could not find container \"6a86d74456c0572552c0f9577c658c623237add126633aedb49e39fc8aed6ccd\": container with ID starting with 6a86d74456c0572552c0f9577c658c623237add126633aedb49e39fc8aed6ccd not found: ID does not exist" Dec 03 13:21:16 crc kubenswrapper[4986]: I1203 13:21:16.840758 4986 scope.go:117] "RemoveContainer" containerID="352ddaee5de0987e37f56e6a1d5702bba946e7626c86ba1b23aabc138445a07a" Dec 03 13:21:16 crc kubenswrapper[4986]: I1203 13:21:16.841089 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"352ddaee5de0987e37f56e6a1d5702bba946e7626c86ba1b23aabc138445a07a"} err="failed to get container status \"352ddaee5de0987e37f56e6a1d5702bba946e7626c86ba1b23aabc138445a07a\": rpc error: code = NotFound desc = could not find container \"352ddaee5de0987e37f56e6a1d5702bba946e7626c86ba1b23aabc138445a07a\": container with ID starting with 352ddaee5de0987e37f56e6a1d5702bba946e7626c86ba1b23aabc138445a07a not found: ID does not exist" Dec 03 13:21:16 crc kubenswrapper[4986]: I1203 13:21:16.841111 4986 scope.go:117] "RemoveContainer" containerID="6a86d74456c0572552c0f9577c658c623237add126633aedb49e39fc8aed6ccd" Dec 03 13:21:16 crc kubenswrapper[4986]: I1203 13:21:16.841894 4986 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"6a86d74456c0572552c0f9577c658c623237add126633aedb49e39fc8aed6ccd"} err="failed to get container status \"6a86d74456c0572552c0f9577c658c623237add126633aedb49e39fc8aed6ccd\": rpc error: code = NotFound desc = could not find container \"6a86d74456c0572552c0f9577c658c623237add126633aedb49e39fc8aed6ccd\": container with ID starting with 6a86d74456c0572552c0f9577c658c623237add126633aedb49e39fc8aed6ccd not found: ID does not exist" Dec 03 13:21:16 crc kubenswrapper[4986]: I1203 13:21:16.841949 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 03 13:21:16 crc kubenswrapper[4986]: I1203 13:21:16.849541 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 03 13:21:16 crc kubenswrapper[4986]: E1203 13:21:16.849984 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743" containerName="nova-api-api" Dec 03 13:21:16 crc kubenswrapper[4986]: I1203 13:21:16.850001 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743" containerName="nova-api-api" Dec 03 13:21:16 crc kubenswrapper[4986]: E1203 13:21:16.850011 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743" containerName="nova-api-log" Dec 03 13:21:16 crc kubenswrapper[4986]: I1203 13:21:16.850018 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743" containerName="nova-api-log" Dec 03 13:21:16 crc kubenswrapper[4986]: I1203 13:21:16.850186 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743" containerName="nova-api-log" Dec 03 13:21:16 crc kubenswrapper[4986]: I1203 13:21:16.850211 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743" containerName="nova-api-api" Dec 03 13:21:16 crc kubenswrapper[4986]: I1203 
13:21:16.851147 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 03 13:21:16 crc kubenswrapper[4986]: I1203 13:21:16.854249 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 03 13:21:16 crc kubenswrapper[4986]: I1203 13:21:16.854530 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 03 13:21:16 crc kubenswrapper[4986]: I1203 13:21:16.854714 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 03 13:21:16 crc kubenswrapper[4986]: I1203 13:21:16.873120 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 03 13:21:16 crc kubenswrapper[4986]: I1203 13:21:16.959386 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743" path="/var/lib/kubelet/pods/1da3bd0a-5103-4fe6-b3ec-2f2b8c07a743/volumes" Dec 03 13:21:16 crc kubenswrapper[4986]: I1203 13:21:16.960199 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="582543a7-c25a-41d6-a8c0-efaa205d5e4d" path="/var/lib/kubelet/pods/582543a7-c25a-41d6-a8c0-efaa205d5e4d/volumes" Dec 03 13:21:16 crc kubenswrapper[4986]: I1203 13:21:16.986183 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/39eb69a6-11cd-41da-956d-a9697ef88d67-internal-tls-certs\") pod \"nova-api-0\" (UID: \"39eb69a6-11cd-41da-956d-a9697ef88d67\") " pod="openstack/nova-api-0" Dec 03 13:21:16 crc kubenswrapper[4986]: I1203 13:21:16.986312 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39eb69a6-11cd-41da-956d-a9697ef88d67-logs\") pod \"nova-api-0\" (UID: \"39eb69a6-11cd-41da-956d-a9697ef88d67\") " pod="openstack/nova-api-0" Dec 03 
13:21:16 crc kubenswrapper[4986]: I1203 13:21:16.986397 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39eb69a6-11cd-41da-956d-a9697ef88d67-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"39eb69a6-11cd-41da-956d-a9697ef88d67\") " pod="openstack/nova-api-0" Dec 03 13:21:16 crc kubenswrapper[4986]: I1203 13:21:16.986958 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78q5n\" (UniqueName: \"kubernetes.io/projected/39eb69a6-11cd-41da-956d-a9697ef88d67-kube-api-access-78q5n\") pod \"nova-api-0\" (UID: \"39eb69a6-11cd-41da-956d-a9697ef88d67\") " pod="openstack/nova-api-0" Dec 03 13:21:16 crc kubenswrapper[4986]: I1203 13:21:16.987675 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39eb69a6-11cd-41da-956d-a9697ef88d67-config-data\") pod \"nova-api-0\" (UID: \"39eb69a6-11cd-41da-956d-a9697ef88d67\") " pod="openstack/nova-api-0" Dec 03 13:21:16 crc kubenswrapper[4986]: I1203 13:21:16.987702 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/39eb69a6-11cd-41da-956d-a9697ef88d67-public-tls-certs\") pod \"nova-api-0\" (UID: \"39eb69a6-11cd-41da-956d-a9697ef88d67\") " pod="openstack/nova-api-0" Dec 03 13:21:17 crc kubenswrapper[4986]: I1203 13:21:17.089868 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/39eb69a6-11cd-41da-956d-a9697ef88d67-internal-tls-certs\") pod \"nova-api-0\" (UID: \"39eb69a6-11cd-41da-956d-a9697ef88d67\") " pod="openstack/nova-api-0" Dec 03 13:21:17 crc kubenswrapper[4986]: I1203 13:21:17.089926 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/39eb69a6-11cd-41da-956d-a9697ef88d67-logs\") pod \"nova-api-0\" (UID: \"39eb69a6-11cd-41da-956d-a9697ef88d67\") " pod="openstack/nova-api-0" Dec 03 13:21:17 crc kubenswrapper[4986]: I1203 13:21:17.089988 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39eb69a6-11cd-41da-956d-a9697ef88d67-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"39eb69a6-11cd-41da-956d-a9697ef88d67\") " pod="openstack/nova-api-0" Dec 03 13:21:17 crc kubenswrapper[4986]: I1203 13:21:17.090015 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78q5n\" (UniqueName: \"kubernetes.io/projected/39eb69a6-11cd-41da-956d-a9697ef88d67-kube-api-access-78q5n\") pod \"nova-api-0\" (UID: \"39eb69a6-11cd-41da-956d-a9697ef88d67\") " pod="openstack/nova-api-0" Dec 03 13:21:17 crc kubenswrapper[4986]: I1203 13:21:17.090081 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/39eb69a6-11cd-41da-956d-a9697ef88d67-public-tls-certs\") pod \"nova-api-0\" (UID: \"39eb69a6-11cd-41da-956d-a9697ef88d67\") " pod="openstack/nova-api-0" Dec 03 13:21:17 crc kubenswrapper[4986]: I1203 13:21:17.090101 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39eb69a6-11cd-41da-956d-a9697ef88d67-config-data\") pod \"nova-api-0\" (UID: \"39eb69a6-11cd-41da-956d-a9697ef88d67\") " pod="openstack/nova-api-0" Dec 03 13:21:17 crc kubenswrapper[4986]: I1203 13:21:17.091227 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39eb69a6-11cd-41da-956d-a9697ef88d67-logs\") pod \"nova-api-0\" (UID: \"39eb69a6-11cd-41da-956d-a9697ef88d67\") " pod="openstack/nova-api-0" Dec 03 13:21:17 crc kubenswrapper[4986]: I1203 13:21:17.093702 
4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39eb69a6-11cd-41da-956d-a9697ef88d67-config-data\") pod \"nova-api-0\" (UID: \"39eb69a6-11cd-41da-956d-a9697ef88d67\") " pod="openstack/nova-api-0" Dec 03 13:21:17 crc kubenswrapper[4986]: I1203 13:21:17.095069 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/39eb69a6-11cd-41da-956d-a9697ef88d67-internal-tls-certs\") pod \"nova-api-0\" (UID: \"39eb69a6-11cd-41da-956d-a9697ef88d67\") " pod="openstack/nova-api-0" Dec 03 13:21:17 crc kubenswrapper[4986]: I1203 13:21:17.095950 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39eb69a6-11cd-41da-956d-a9697ef88d67-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"39eb69a6-11cd-41da-956d-a9697ef88d67\") " pod="openstack/nova-api-0" Dec 03 13:21:17 crc kubenswrapper[4986]: I1203 13:21:17.096412 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/39eb69a6-11cd-41da-956d-a9697ef88d67-public-tls-certs\") pod \"nova-api-0\" (UID: \"39eb69a6-11cd-41da-956d-a9697ef88d67\") " pod="openstack/nova-api-0" Dec 03 13:21:17 crc kubenswrapper[4986]: I1203 13:21:17.110452 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78q5n\" (UniqueName: \"kubernetes.io/projected/39eb69a6-11cd-41da-956d-a9697ef88d67-kube-api-access-78q5n\") pod \"nova-api-0\" (UID: \"39eb69a6-11cd-41da-956d-a9697ef88d67\") " pod="openstack/nova-api-0" Dec 03 13:21:17 crc kubenswrapper[4986]: I1203 13:21:17.169131 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 03 13:21:17 crc kubenswrapper[4986]: W1203 13:21:17.636154 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod39eb69a6_11cd_41da_956d_a9697ef88d67.slice/crio-db7901b28ee7a4b23af13c0971db6656d2b66f0c666ed8b59a733912c0c15bdc WatchSource:0}: Error finding container db7901b28ee7a4b23af13c0971db6656d2b66f0c666ed8b59a733912c0c15bdc: Status 404 returned error can't find the container with id db7901b28ee7a4b23af13c0971db6656d2b66f0c666ed8b59a733912c0c15bdc Dec 03 13:21:17 crc kubenswrapper[4986]: I1203 13:21:17.637384 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 03 13:21:17 crc kubenswrapper[4986]: I1203 13:21:17.784343 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"39eb69a6-11cd-41da-956d-a9697ef88d67","Type":"ContainerStarted","Data":"db7901b28ee7a4b23af13c0971db6656d2b66f0c666ed8b59a733912c0c15bdc"} Dec 03 13:21:17 crc kubenswrapper[4986]: I1203 13:21:17.789310 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc0fd902-4092-4036-b7ba-1b6ede68bf04","Type":"ContainerStarted","Data":"3e6d234f1d0edf428a3eef3ac5910c3a11c96c76e15f4fb9f6fd717eddad70a4"} Dec 03 13:21:17 crc kubenswrapper[4986]: I1203 13:21:17.789363 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc0fd902-4092-4036-b7ba-1b6ede68bf04","Type":"ContainerStarted","Data":"2d945a88d002a6780c7636b05e6e53638544477fd765b8260b577079aebed40b"} Dec 03 13:21:18 crc kubenswrapper[4986]: I1203 13:21:18.049258 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 03 13:21:18 crc kubenswrapper[4986]: I1203 13:21:18.108978 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vqz5\" (UniqueName: \"kubernetes.io/projected/96ddaceb-69ae-45af-92b2-511ccc92f2c2-kube-api-access-5vqz5\") pod \"96ddaceb-69ae-45af-92b2-511ccc92f2c2\" (UID: \"96ddaceb-69ae-45af-92b2-511ccc92f2c2\") " Dec 03 13:21:18 crc kubenswrapper[4986]: I1203 13:21:18.109054 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96ddaceb-69ae-45af-92b2-511ccc92f2c2-config-data\") pod \"96ddaceb-69ae-45af-92b2-511ccc92f2c2\" (UID: \"96ddaceb-69ae-45af-92b2-511ccc92f2c2\") " Dec 03 13:21:18 crc kubenswrapper[4986]: I1203 13:21:18.109199 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96ddaceb-69ae-45af-92b2-511ccc92f2c2-combined-ca-bundle\") pod \"96ddaceb-69ae-45af-92b2-511ccc92f2c2\" (UID: \"96ddaceb-69ae-45af-92b2-511ccc92f2c2\") " Dec 03 13:21:18 crc kubenswrapper[4986]: I1203 13:21:18.121186 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96ddaceb-69ae-45af-92b2-511ccc92f2c2-kube-api-access-5vqz5" (OuterVolumeSpecName: "kube-api-access-5vqz5") pod "96ddaceb-69ae-45af-92b2-511ccc92f2c2" (UID: "96ddaceb-69ae-45af-92b2-511ccc92f2c2"). InnerVolumeSpecName "kube-api-access-5vqz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:21:18 crc kubenswrapper[4986]: I1203 13:21:18.144380 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96ddaceb-69ae-45af-92b2-511ccc92f2c2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "96ddaceb-69ae-45af-92b2-511ccc92f2c2" (UID: "96ddaceb-69ae-45af-92b2-511ccc92f2c2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:21:18 crc kubenswrapper[4986]: I1203 13:21:18.150753 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96ddaceb-69ae-45af-92b2-511ccc92f2c2-config-data" (OuterVolumeSpecName: "config-data") pod "96ddaceb-69ae-45af-92b2-511ccc92f2c2" (UID: "96ddaceb-69ae-45af-92b2-511ccc92f2c2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:21:18 crc kubenswrapper[4986]: I1203 13:21:18.211335 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vqz5\" (UniqueName: \"kubernetes.io/projected/96ddaceb-69ae-45af-92b2-511ccc92f2c2-kube-api-access-5vqz5\") on node \"crc\" DevicePath \"\"" Dec 03 13:21:18 crc kubenswrapper[4986]: I1203 13:21:18.211367 4986 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96ddaceb-69ae-45af-92b2-511ccc92f2c2-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 13:21:18 crc kubenswrapper[4986]: I1203 13:21:18.211376 4986 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96ddaceb-69ae-45af-92b2-511ccc92f2c2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 13:21:18 crc kubenswrapper[4986]: I1203 13:21:18.799993 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"39eb69a6-11cd-41da-956d-a9697ef88d67","Type":"ContainerStarted","Data":"a2b08a46c6673db6aed53d2ffd639afebcab7b1312b27bab4ca04af297b1f802"} Dec 03 13:21:18 crc kubenswrapper[4986]: I1203 13:21:18.800040 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"39eb69a6-11cd-41da-956d-a9697ef88d67","Type":"ContainerStarted","Data":"57e15e39ab88d08aa48b22700a68a5479bbaf7d71ecf2ba0894ba5919f997eb3"} Dec 03 13:21:18 crc kubenswrapper[4986]: I1203 13:21:18.801799 4986 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/ceilometer-0" event={"ID":"bc0fd902-4092-4036-b7ba-1b6ede68bf04","Type":"ContainerStarted","Data":"01a9f94c7855bcd828f0a47761d133697d6314c7cd627bd5d72c62b8626d5d1b"} Dec 03 13:21:18 crc kubenswrapper[4986]: I1203 13:21:18.803826 4986 generic.go:334] "Generic (PLEG): container finished" podID="96ddaceb-69ae-45af-92b2-511ccc92f2c2" containerID="3bd9833a0c6c9a11353bfaf94bb93ec4574d2fa5b6f4fb0282fa315c48f53387" exitCode=0 Dec 03 13:21:18 crc kubenswrapper[4986]: I1203 13:21:18.803859 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"96ddaceb-69ae-45af-92b2-511ccc92f2c2","Type":"ContainerDied","Data":"3bd9833a0c6c9a11353bfaf94bb93ec4574d2fa5b6f4fb0282fa315c48f53387"} Dec 03 13:21:18 crc kubenswrapper[4986]: I1203 13:21:18.803880 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"96ddaceb-69ae-45af-92b2-511ccc92f2c2","Type":"ContainerDied","Data":"c1c4c6b75559c3bdbfaeac153c4fb19b0b84b00e072a611b4f0f5071b9fdb64a"} Dec 03 13:21:18 crc kubenswrapper[4986]: I1203 13:21:18.803896 4986 scope.go:117] "RemoveContainer" containerID="3bd9833a0c6c9a11353bfaf94bb93ec4574d2fa5b6f4fb0282fa315c48f53387" Dec 03 13:21:18 crc kubenswrapper[4986]: I1203 13:21:18.804006 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 03 13:21:18 crc kubenswrapper[4986]: I1203 13:21:18.833729 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.833712539 podStartE2EDuration="2.833712539s" podCreationTimestamp="2025-12-03 13:21:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:21:18.828436418 +0000 UTC m=+1538.294867639" watchObservedRunningTime="2025-12-03 13:21:18.833712539 +0000 UTC m=+1538.300143720" Dec 03 13:21:18 crc kubenswrapper[4986]: I1203 13:21:18.836306 4986 scope.go:117] "RemoveContainer" containerID="3bd9833a0c6c9a11353bfaf94bb93ec4574d2fa5b6f4fb0282fa315c48f53387" Dec 03 13:21:18 crc kubenswrapper[4986]: E1203 13:21:18.836900 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bd9833a0c6c9a11353bfaf94bb93ec4574d2fa5b6f4fb0282fa315c48f53387\": container with ID starting with 3bd9833a0c6c9a11353bfaf94bb93ec4574d2fa5b6f4fb0282fa315c48f53387 not found: ID does not exist" containerID="3bd9833a0c6c9a11353bfaf94bb93ec4574d2fa5b6f4fb0282fa315c48f53387" Dec 03 13:21:18 crc kubenswrapper[4986]: I1203 13:21:18.836956 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bd9833a0c6c9a11353bfaf94bb93ec4574d2fa5b6f4fb0282fa315c48f53387"} err="failed to get container status \"3bd9833a0c6c9a11353bfaf94bb93ec4574d2fa5b6f4fb0282fa315c48f53387\": rpc error: code = NotFound desc = could not find container \"3bd9833a0c6c9a11353bfaf94bb93ec4574d2fa5b6f4fb0282fa315c48f53387\": container with ID starting with 3bd9833a0c6c9a11353bfaf94bb93ec4574d2fa5b6f4fb0282fa315c48f53387 not found: ID does not exist" Dec 03 13:21:18 crc kubenswrapper[4986]: I1203 13:21:18.859785 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] 
Dec 03 13:21:18 crc kubenswrapper[4986]: I1203 13:21:18.889234 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 13:21:18 crc kubenswrapper[4986]: I1203 13:21:18.889304 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 13:21:18 crc kubenswrapper[4986]: E1203 13:21:18.889604 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96ddaceb-69ae-45af-92b2-511ccc92f2c2" containerName="nova-scheduler-scheduler" Dec 03 13:21:18 crc kubenswrapper[4986]: I1203 13:21:18.889615 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="96ddaceb-69ae-45af-92b2-511ccc92f2c2" containerName="nova-scheduler-scheduler" Dec 03 13:21:18 crc kubenswrapper[4986]: I1203 13:21:18.889819 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="96ddaceb-69ae-45af-92b2-511ccc92f2c2" containerName="nova-scheduler-scheduler" Dec 03 13:21:18 crc kubenswrapper[4986]: I1203 13:21:18.890340 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 13:21:18 crc kubenswrapper[4986]: I1203 13:21:18.890407 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 03 13:21:18 crc kubenswrapper[4986]: I1203 13:21:18.911175 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 03 13:21:18 crc kubenswrapper[4986]: I1203 13:21:18.929359 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74dfd7ae-e1e8-4fe0-9563-0b6a8aaa60f4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"74dfd7ae-e1e8-4fe0-9563-0b6a8aaa60f4\") " pod="openstack/nova-scheduler-0" Dec 03 13:21:18 crc kubenswrapper[4986]: I1203 13:21:18.929409 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4tvq\" (UniqueName: \"kubernetes.io/projected/74dfd7ae-e1e8-4fe0-9563-0b6a8aaa60f4-kube-api-access-c4tvq\") pod \"nova-scheduler-0\" (UID: \"74dfd7ae-e1e8-4fe0-9563-0b6a8aaa60f4\") " pod="openstack/nova-scheduler-0" Dec 03 13:21:18 crc kubenswrapper[4986]: I1203 13:21:18.929509 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74dfd7ae-e1e8-4fe0-9563-0b6a8aaa60f4-config-data\") pod \"nova-scheduler-0\" (UID: \"74dfd7ae-e1e8-4fe0-9563-0b6a8aaa60f4\") " pod="openstack/nova-scheduler-0" Dec 03 13:21:18 crc kubenswrapper[4986]: I1203 13:21:18.952887 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96ddaceb-69ae-45af-92b2-511ccc92f2c2" path="/var/lib/kubelet/pods/96ddaceb-69ae-45af-92b2-511ccc92f2c2/volumes" Dec 03 13:21:19 crc kubenswrapper[4986]: I1203 13:21:19.031550 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4tvq\" (UniqueName: \"kubernetes.io/projected/74dfd7ae-e1e8-4fe0-9563-0b6a8aaa60f4-kube-api-access-c4tvq\") pod \"nova-scheduler-0\" (UID: \"74dfd7ae-e1e8-4fe0-9563-0b6a8aaa60f4\") " 
pod="openstack/nova-scheduler-0" Dec 03 13:21:19 crc kubenswrapper[4986]: I1203 13:21:19.031920 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74dfd7ae-e1e8-4fe0-9563-0b6a8aaa60f4-config-data\") pod \"nova-scheduler-0\" (UID: \"74dfd7ae-e1e8-4fe0-9563-0b6a8aaa60f4\") " pod="openstack/nova-scheduler-0" Dec 03 13:21:19 crc kubenswrapper[4986]: I1203 13:21:19.033514 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74dfd7ae-e1e8-4fe0-9563-0b6a8aaa60f4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"74dfd7ae-e1e8-4fe0-9563-0b6a8aaa60f4\") " pod="openstack/nova-scheduler-0" Dec 03 13:21:19 crc kubenswrapper[4986]: I1203 13:21:19.036551 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74dfd7ae-e1e8-4fe0-9563-0b6a8aaa60f4-config-data\") pod \"nova-scheduler-0\" (UID: \"74dfd7ae-e1e8-4fe0-9563-0b6a8aaa60f4\") " pod="openstack/nova-scheduler-0" Dec 03 13:21:19 crc kubenswrapper[4986]: I1203 13:21:19.038902 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74dfd7ae-e1e8-4fe0-9563-0b6a8aaa60f4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"74dfd7ae-e1e8-4fe0-9563-0b6a8aaa60f4\") " pod="openstack/nova-scheduler-0" Dec 03 13:21:19 crc kubenswrapper[4986]: I1203 13:21:19.047496 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4tvq\" (UniqueName: \"kubernetes.io/projected/74dfd7ae-e1e8-4fe0-9563-0b6a8aaa60f4-kube-api-access-c4tvq\") pod \"nova-scheduler-0\" (UID: \"74dfd7ae-e1e8-4fe0-9563-0b6a8aaa60f4\") " pod="openstack/nova-scheduler-0" Dec 03 13:21:19 crc kubenswrapper[4986]: I1203 13:21:19.240795 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 03 13:21:19 crc kubenswrapper[4986]: I1203 13:21:19.585876 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 13:21:19 crc kubenswrapper[4986]: I1203 13:21:19.644492 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b17a506-eb92-4170-a2cf-0a563e427197-config-data\") pod \"0b17a506-eb92-4170-a2cf-0a563e427197\" (UID: \"0b17a506-eb92-4170-a2cf-0a563e427197\") " Dec 03 13:21:19 crc kubenswrapper[4986]: I1203 13:21:19.644560 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b17a506-eb92-4170-a2cf-0a563e427197-nova-metadata-tls-certs\") pod \"0b17a506-eb92-4170-a2cf-0a563e427197\" (UID: \"0b17a506-eb92-4170-a2cf-0a563e427197\") " Dec 03 13:21:19 crc kubenswrapper[4986]: I1203 13:21:19.644623 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b17a506-eb92-4170-a2cf-0a563e427197-logs\") pod \"0b17a506-eb92-4170-a2cf-0a563e427197\" (UID: \"0b17a506-eb92-4170-a2cf-0a563e427197\") " Dec 03 13:21:19 crc kubenswrapper[4986]: I1203 13:21:19.644665 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndp92\" (UniqueName: \"kubernetes.io/projected/0b17a506-eb92-4170-a2cf-0a563e427197-kube-api-access-ndp92\") pod \"0b17a506-eb92-4170-a2cf-0a563e427197\" (UID: \"0b17a506-eb92-4170-a2cf-0a563e427197\") " Dec 03 13:21:19 crc kubenswrapper[4986]: I1203 13:21:19.644701 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b17a506-eb92-4170-a2cf-0a563e427197-combined-ca-bundle\") pod \"0b17a506-eb92-4170-a2cf-0a563e427197\" (UID: 
\"0b17a506-eb92-4170-a2cf-0a563e427197\") " Dec 03 13:21:19 crc kubenswrapper[4986]: I1203 13:21:19.647140 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b17a506-eb92-4170-a2cf-0a563e427197-logs" (OuterVolumeSpecName: "logs") pod "0b17a506-eb92-4170-a2cf-0a563e427197" (UID: "0b17a506-eb92-4170-a2cf-0a563e427197"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:21:19 crc kubenswrapper[4986]: I1203 13:21:19.664679 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b17a506-eb92-4170-a2cf-0a563e427197-kube-api-access-ndp92" (OuterVolumeSpecName: "kube-api-access-ndp92") pod "0b17a506-eb92-4170-a2cf-0a563e427197" (UID: "0b17a506-eb92-4170-a2cf-0a563e427197"). InnerVolumeSpecName "kube-api-access-ndp92". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:21:19 crc kubenswrapper[4986]: I1203 13:21:19.685941 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b17a506-eb92-4170-a2cf-0a563e427197-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b17a506-eb92-4170-a2cf-0a563e427197" (UID: "0b17a506-eb92-4170-a2cf-0a563e427197"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:21:19 crc kubenswrapper[4986]: I1203 13:21:19.702152 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b17a506-eb92-4170-a2cf-0a563e427197-config-data" (OuterVolumeSpecName: "config-data") pod "0b17a506-eb92-4170-a2cf-0a563e427197" (UID: "0b17a506-eb92-4170-a2cf-0a563e427197"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:21:19 crc kubenswrapper[4986]: I1203 13:21:19.736118 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 13:21:19 crc kubenswrapper[4986]: I1203 13:21:19.750085 4986 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b17a506-eb92-4170-a2cf-0a563e427197-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 13:21:19 crc kubenswrapper[4986]: I1203 13:21:19.750120 4986 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b17a506-eb92-4170-a2cf-0a563e427197-logs\") on node \"crc\" DevicePath \"\"" Dec 03 13:21:19 crc kubenswrapper[4986]: I1203 13:21:19.750134 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndp92\" (UniqueName: \"kubernetes.io/projected/0b17a506-eb92-4170-a2cf-0a563e427197-kube-api-access-ndp92\") on node \"crc\" DevicePath \"\"" Dec 03 13:21:19 crc kubenswrapper[4986]: I1203 13:21:19.750147 4986 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b17a506-eb92-4170-a2cf-0a563e427197-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 13:21:19 crc kubenswrapper[4986]: I1203 13:21:19.776782 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b17a506-eb92-4170-a2cf-0a563e427197-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "0b17a506-eb92-4170-a2cf-0a563e427197" (UID: "0b17a506-eb92-4170-a2cf-0a563e427197"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:21:19 crc kubenswrapper[4986]: I1203 13:21:19.821636 4986 generic.go:334] "Generic (PLEG): container finished" podID="0b17a506-eb92-4170-a2cf-0a563e427197" containerID="d913854847519427e500aca4536d98af73991ac62f6d1f3106d55f98a92ad271" exitCode=0 Dec 03 13:21:19 crc kubenswrapper[4986]: I1203 13:21:19.821756 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0b17a506-eb92-4170-a2cf-0a563e427197","Type":"ContainerDied","Data":"d913854847519427e500aca4536d98af73991ac62f6d1f3106d55f98a92ad271"} Dec 03 13:21:19 crc kubenswrapper[4986]: I1203 13:21:19.821788 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0b17a506-eb92-4170-a2cf-0a563e427197","Type":"ContainerDied","Data":"e2abd383b3cefbc7f5006a716517e19f4972c34e48ae5b6904c66d35773208d6"} Dec 03 13:21:19 crc kubenswrapper[4986]: I1203 13:21:19.821829 4986 scope.go:117] "RemoveContainer" containerID="d913854847519427e500aca4536d98af73991ac62f6d1f3106d55f98a92ad271" Dec 03 13:21:19 crc kubenswrapper[4986]: I1203 13:21:19.821926 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 13:21:19 crc kubenswrapper[4986]: I1203 13:21:19.826547 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"74dfd7ae-e1e8-4fe0-9563-0b6a8aaa60f4","Type":"ContainerStarted","Data":"998759de73c8b74e7c53e90c5290889a6824e4fee3b5ff3b977cf96c2cbc5c34"} Dec 03 13:21:19 crc kubenswrapper[4986]: I1203 13:21:19.851265 4986 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b17a506-eb92-4170-a2cf-0a563e427197-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 03 13:21:19 crc kubenswrapper[4986]: I1203 13:21:19.866533 4986 scope.go:117] "RemoveContainer" containerID="1140fadaa85eaee0fbdb38e87caea1583d6a3f7bebe8af59868412337b5fa1aa" Dec 03 13:21:19 crc kubenswrapper[4986]: I1203 13:21:19.883400 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 13:21:19 crc kubenswrapper[4986]: I1203 13:21:19.894264 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 13:21:19 crc kubenswrapper[4986]: I1203 13:21:19.926575 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 03 13:21:19 crc kubenswrapper[4986]: E1203 13:21:19.927037 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b17a506-eb92-4170-a2cf-0a563e427197" containerName="nova-metadata-metadata" Dec 03 13:21:19 crc kubenswrapper[4986]: I1203 13:21:19.927053 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b17a506-eb92-4170-a2cf-0a563e427197" containerName="nova-metadata-metadata" Dec 03 13:21:19 crc kubenswrapper[4986]: E1203 13:21:19.927077 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b17a506-eb92-4170-a2cf-0a563e427197" containerName="nova-metadata-log" Dec 03 13:21:19 crc kubenswrapper[4986]: I1203 13:21:19.927083 4986 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="0b17a506-eb92-4170-a2cf-0a563e427197" containerName="nova-metadata-log" Dec 03 13:21:19 crc kubenswrapper[4986]: I1203 13:21:19.927239 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b17a506-eb92-4170-a2cf-0a563e427197" containerName="nova-metadata-log" Dec 03 13:21:19 crc kubenswrapper[4986]: I1203 13:21:19.927270 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b17a506-eb92-4170-a2cf-0a563e427197" containerName="nova-metadata-metadata" Dec 03 13:21:19 crc kubenswrapper[4986]: I1203 13:21:19.928184 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 13:21:19 crc kubenswrapper[4986]: I1203 13:21:19.935196 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 13:21:19 crc kubenswrapper[4986]: I1203 13:21:19.953589 4986 scope.go:117] "RemoveContainer" containerID="d913854847519427e500aca4536d98af73991ac62f6d1f3106d55f98a92ad271" Dec 03 13:21:19 crc kubenswrapper[4986]: I1203 13:21:19.954052 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 03 13:21:19 crc kubenswrapper[4986]: I1203 13:21:19.954267 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 03 13:21:19 crc kubenswrapper[4986]: E1203 13:21:19.976719 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d913854847519427e500aca4536d98af73991ac62f6d1f3106d55f98a92ad271\": container with ID starting with d913854847519427e500aca4536d98af73991ac62f6d1f3106d55f98a92ad271 not found: ID does not exist" containerID="d913854847519427e500aca4536d98af73991ac62f6d1f3106d55f98a92ad271" Dec 03 13:21:19 crc kubenswrapper[4986]: I1203 13:21:19.976772 4986 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d913854847519427e500aca4536d98af73991ac62f6d1f3106d55f98a92ad271"} err="failed to get container status \"d913854847519427e500aca4536d98af73991ac62f6d1f3106d55f98a92ad271\": rpc error: code = NotFound desc = could not find container \"d913854847519427e500aca4536d98af73991ac62f6d1f3106d55f98a92ad271\": container with ID starting with d913854847519427e500aca4536d98af73991ac62f6d1f3106d55f98a92ad271 not found: ID does not exist" Dec 03 13:21:19 crc kubenswrapper[4986]: I1203 13:21:19.976799 4986 scope.go:117] "RemoveContainer" containerID="1140fadaa85eaee0fbdb38e87caea1583d6a3f7bebe8af59868412337b5fa1aa" Dec 03 13:21:19 crc kubenswrapper[4986]: E1203 13:21:19.981360 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1140fadaa85eaee0fbdb38e87caea1583d6a3f7bebe8af59868412337b5fa1aa\": container with ID starting with 1140fadaa85eaee0fbdb38e87caea1583d6a3f7bebe8af59868412337b5fa1aa not found: ID does not exist" containerID="1140fadaa85eaee0fbdb38e87caea1583d6a3f7bebe8af59868412337b5fa1aa" Dec 03 13:21:19 crc kubenswrapper[4986]: I1203 13:21:19.981428 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1140fadaa85eaee0fbdb38e87caea1583d6a3f7bebe8af59868412337b5fa1aa"} err="failed to get container status \"1140fadaa85eaee0fbdb38e87caea1583d6a3f7bebe8af59868412337b5fa1aa\": rpc error: code = NotFound desc = could not find container \"1140fadaa85eaee0fbdb38e87caea1583d6a3f7bebe8af59868412337b5fa1aa\": container with ID starting with 1140fadaa85eaee0fbdb38e87caea1583d6a3f7bebe8af59868412337b5fa1aa not found: ID does not exist" Dec 03 13:21:20 crc kubenswrapper[4986]: I1203 13:21:20.055734 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bb61071-904d-46d6-8594-2312383a8a06-logs\") pod \"nova-metadata-0\" (UID: 
\"3bb61071-904d-46d6-8594-2312383a8a06\") " pod="openstack/nova-metadata-0" Dec 03 13:21:20 crc kubenswrapper[4986]: I1203 13:21:20.055882 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bb61071-904d-46d6-8594-2312383a8a06-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3bb61071-904d-46d6-8594-2312383a8a06\") " pod="openstack/nova-metadata-0" Dec 03 13:21:20 crc kubenswrapper[4986]: I1203 13:21:20.056040 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7qq5\" (UniqueName: \"kubernetes.io/projected/3bb61071-904d-46d6-8594-2312383a8a06-kube-api-access-l7qq5\") pod \"nova-metadata-0\" (UID: \"3bb61071-904d-46d6-8594-2312383a8a06\") " pod="openstack/nova-metadata-0" Dec 03 13:21:20 crc kubenswrapper[4986]: I1203 13:21:20.056129 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bb61071-904d-46d6-8594-2312383a8a06-config-data\") pod \"nova-metadata-0\" (UID: \"3bb61071-904d-46d6-8594-2312383a8a06\") " pod="openstack/nova-metadata-0" Dec 03 13:21:20 crc kubenswrapper[4986]: I1203 13:21:20.056262 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bb61071-904d-46d6-8594-2312383a8a06-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3bb61071-904d-46d6-8594-2312383a8a06\") " pod="openstack/nova-metadata-0" Dec 03 13:21:20 crc kubenswrapper[4986]: I1203 13:21:20.158216 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7qq5\" (UniqueName: \"kubernetes.io/projected/3bb61071-904d-46d6-8594-2312383a8a06-kube-api-access-l7qq5\") pod \"nova-metadata-0\" (UID: \"3bb61071-904d-46d6-8594-2312383a8a06\") " 
pod="openstack/nova-metadata-0"
Dec 03 13:21:20 crc kubenswrapper[4986]: I1203 13:21:20.158598 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bb61071-904d-46d6-8594-2312383a8a06-config-data\") pod \"nova-metadata-0\" (UID: \"3bb61071-904d-46d6-8594-2312383a8a06\") " pod="openstack/nova-metadata-0"
Dec 03 13:21:20 crc kubenswrapper[4986]: I1203 13:21:20.158643 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bb61071-904d-46d6-8594-2312383a8a06-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3bb61071-904d-46d6-8594-2312383a8a06\") " pod="openstack/nova-metadata-0"
Dec 03 13:21:20 crc kubenswrapper[4986]: I1203 13:21:20.158759 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bb61071-904d-46d6-8594-2312383a8a06-logs\") pod \"nova-metadata-0\" (UID: \"3bb61071-904d-46d6-8594-2312383a8a06\") " pod="openstack/nova-metadata-0"
Dec 03 13:21:20 crc kubenswrapper[4986]: I1203 13:21:20.158799 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bb61071-904d-46d6-8594-2312383a8a06-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3bb61071-904d-46d6-8594-2312383a8a06\") " pod="openstack/nova-metadata-0"
Dec 03 13:21:20 crc kubenswrapper[4986]: I1203 13:21:20.159280 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bb61071-904d-46d6-8594-2312383a8a06-logs\") pod \"nova-metadata-0\" (UID: \"3bb61071-904d-46d6-8594-2312383a8a06\") " pod="openstack/nova-metadata-0"
Dec 03 13:21:20 crc kubenswrapper[4986]: I1203 13:21:20.164914 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bb61071-904d-46d6-8594-2312383a8a06-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3bb61071-904d-46d6-8594-2312383a8a06\") " pod="openstack/nova-metadata-0"
Dec 03 13:21:20 crc kubenswrapper[4986]: I1203 13:21:20.166797 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bb61071-904d-46d6-8594-2312383a8a06-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3bb61071-904d-46d6-8594-2312383a8a06\") " pod="openstack/nova-metadata-0"
Dec 03 13:21:20 crc kubenswrapper[4986]: I1203 13:21:20.167974 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bb61071-904d-46d6-8594-2312383a8a06-config-data\") pod \"nova-metadata-0\" (UID: \"3bb61071-904d-46d6-8594-2312383a8a06\") " pod="openstack/nova-metadata-0"
Dec 03 13:21:20 crc kubenswrapper[4986]: I1203 13:21:20.176192 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7qq5\" (UniqueName: \"kubernetes.io/projected/3bb61071-904d-46d6-8594-2312383a8a06-kube-api-access-l7qq5\") pod \"nova-metadata-0\" (UID: \"3bb61071-904d-46d6-8594-2312383a8a06\") " pod="openstack/nova-metadata-0"
Dec 03 13:21:20 crc kubenswrapper[4986]: I1203 13:21:20.314736 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 03 13:21:20 crc kubenswrapper[4986]: I1203 13:21:20.764063 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Dec 03 13:21:20 crc kubenswrapper[4986]: I1203 13:21:20.841053 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc0fd902-4092-4036-b7ba-1b6ede68bf04","Type":"ContainerStarted","Data":"315d99b6a196bc45e0014efc2c488dd03c5ddfe7dca5f9ef912be8097a84e8b8"}
Dec 03 13:21:20 crc kubenswrapper[4986]: I1203 13:21:20.842321 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Dec 03 13:21:20 crc kubenswrapper[4986]: I1203 13:21:20.846254 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"74dfd7ae-e1e8-4fe0-9563-0b6a8aaa60f4","Type":"ContainerStarted","Data":"9d2c07e67ad028fb32413bca5533209eafc708b72e00ae5e5225efe7eb5a5ec4"}
Dec 03 13:21:20 crc kubenswrapper[4986]: I1203 13:21:20.849961 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3bb61071-904d-46d6-8594-2312383a8a06","Type":"ContainerStarted","Data":"9e676a3f45be687182f788a491c64e440e9d5db3475bd149128e6b1b552f9c25"}
Dec 03 13:21:20 crc kubenswrapper[4986]: I1203 13:21:20.868372 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.074553532 podStartE2EDuration="5.868352202s" podCreationTimestamp="2025-12-03 13:21:15 +0000 UTC" firstStartedPulling="2025-12-03 13:21:15.909115104 +0000 UTC m=+1535.375546295" lastFinishedPulling="2025-12-03 13:21:19.702913774 +0000 UTC m=+1539.169344965" observedRunningTime="2025-12-03 13:21:20.862874665 +0000 UTC m=+1540.329305876" watchObservedRunningTime="2025-12-03 13:21:20.868352202 +0000 UTC m=+1540.334783393"
Dec 03 13:21:20 crc kubenswrapper[4986]: I1203 13:21:20.884437 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.884416265 podStartE2EDuration="2.884416265s" podCreationTimestamp="2025-12-03 13:21:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:21:20.878462325 +0000 UTC m=+1540.344893516" watchObservedRunningTime="2025-12-03 13:21:20.884416265 +0000 UTC m=+1540.350847456"
Dec 03 13:21:20 crc kubenswrapper[4986]: I1203 13:21:20.954937 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b17a506-eb92-4170-a2cf-0a563e427197" path="/var/lib/kubelet/pods/0b17a506-eb92-4170-a2cf-0a563e427197/volumes"
Dec 03 13:21:21 crc kubenswrapper[4986]: I1203 13:21:21.871222 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3bb61071-904d-46d6-8594-2312383a8a06","Type":"ContainerStarted","Data":"ebcd2838cac83a0d9afbe04382bed57804efcffeb5ddb5a68a45bff1b3e80f18"}
Dec 03 13:21:21 crc kubenswrapper[4986]: I1203 13:21:21.871572 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3bb61071-904d-46d6-8594-2312383a8a06","Type":"ContainerStarted","Data":"56ee4e4712c4fee2d4552abb29aba3e76d0bcfe9e12bf3695c8e8b0372ba0370"}
Dec 03 13:21:21 crc kubenswrapper[4986]: I1203 13:21:21.901503 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.9014831599999997 podStartE2EDuration="2.90148316s" podCreationTimestamp="2025-12-03 13:21:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:21:21.894992995 +0000 UTC m=+1541.361424196" watchObservedRunningTime="2025-12-03 13:21:21.90148316 +0000 UTC m=+1541.367914361"
Dec 03 13:21:24 crc kubenswrapper[4986]: I1203 13:21:24.242930 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Dec 03 13:21:24 crc kubenswrapper[4986]: I1203 13:21:24.553944 4986 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="0b17a506-eb92-4170-a2cf-0a563e427197" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.191:8775/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 03 13:21:24 crc kubenswrapper[4986]: I1203 13:21:24.554395 4986 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="0b17a506-eb92-4170-a2cf-0a563e427197" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.191:8775/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 03 13:21:25 crc kubenswrapper[4986]: I1203 13:21:25.315840 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Dec 03 13:21:25 crc kubenswrapper[4986]: I1203 13:21:25.315913 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Dec 03 13:21:27 crc kubenswrapper[4986]: I1203 13:21:27.170738 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Dec 03 13:21:27 crc kubenswrapper[4986]: I1203 13:21:27.171636 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Dec 03 13:21:28 crc kubenswrapper[4986]: I1203 13:21:28.190515 4986 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="39eb69a6-11cd-41da-956d-a9697ef88d67" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.201:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 03 13:21:28 crc kubenswrapper[4986]: I1203 13:21:28.190540 4986 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="39eb69a6-11cd-41da-956d-a9697ef88d67" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.201:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 03 13:21:29 crc kubenswrapper[4986]: I1203 13:21:29.241783 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Dec 03 13:21:29 crc kubenswrapper[4986]: I1203 13:21:29.269892 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Dec 03 13:21:29 crc kubenswrapper[4986]: I1203 13:21:29.980802 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Dec 03 13:21:30 crc kubenswrapper[4986]: I1203 13:21:30.316203 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Dec 03 13:21:30 crc kubenswrapper[4986]: I1203 13:21:30.316257 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Dec 03 13:21:31 crc kubenswrapper[4986]: I1203 13:21:31.336547 4986 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3bb61071-904d-46d6-8594-2312383a8a06" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.203:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 03 13:21:31 crc kubenswrapper[4986]: I1203 13:21:31.336675 4986 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3bb61071-904d-46d6-8594-2312383a8a06" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.203:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 03 13:21:31 crc kubenswrapper[4986]: I1203 13:21:31.441189 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6frnw"]
Dec 03 13:21:31 crc kubenswrapper[4986]: I1203 13:21:31.443629 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6frnw"
Dec 03 13:21:31 crc kubenswrapper[4986]: I1203 13:21:31.457041 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6frnw"]
Dec 03 13:21:31 crc kubenswrapper[4986]: I1203 13:21:31.489197 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5baf423-355d-4b11-8579-e07766dc4939-utilities\") pod \"community-operators-6frnw\" (UID: \"a5baf423-355d-4b11-8579-e07766dc4939\") " pod="openshift-marketplace/community-operators-6frnw"
Dec 03 13:21:31 crc kubenswrapper[4986]: I1203 13:21:31.489259 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gghzq\" (UniqueName: \"kubernetes.io/projected/a5baf423-355d-4b11-8579-e07766dc4939-kube-api-access-gghzq\") pod \"community-operators-6frnw\" (UID: \"a5baf423-355d-4b11-8579-e07766dc4939\") " pod="openshift-marketplace/community-operators-6frnw"
Dec 03 13:21:31 crc kubenswrapper[4986]: I1203 13:21:31.489362 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5baf423-355d-4b11-8579-e07766dc4939-catalog-content\") pod \"community-operators-6frnw\" (UID: \"a5baf423-355d-4b11-8579-e07766dc4939\") " pod="openshift-marketplace/community-operators-6frnw"
Dec 03 13:21:31 crc kubenswrapper[4986]: I1203 13:21:31.591302 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5baf423-355d-4b11-8579-e07766dc4939-utilities\") pod \"community-operators-6frnw\" (UID: \"a5baf423-355d-4b11-8579-e07766dc4939\") " pod="openshift-marketplace/community-operators-6frnw"
Dec 03 13:21:31 crc kubenswrapper[4986]: I1203 13:21:31.591374 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gghzq\" (UniqueName: \"kubernetes.io/projected/a5baf423-355d-4b11-8579-e07766dc4939-kube-api-access-gghzq\") pod \"community-operators-6frnw\" (UID: \"a5baf423-355d-4b11-8579-e07766dc4939\") " pod="openshift-marketplace/community-operators-6frnw"
Dec 03 13:21:31 crc kubenswrapper[4986]: I1203 13:21:31.591416 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5baf423-355d-4b11-8579-e07766dc4939-catalog-content\") pod \"community-operators-6frnw\" (UID: \"a5baf423-355d-4b11-8579-e07766dc4939\") " pod="openshift-marketplace/community-operators-6frnw"
Dec 03 13:21:31 crc kubenswrapper[4986]: I1203 13:21:31.591925 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5baf423-355d-4b11-8579-e07766dc4939-utilities\") pod \"community-operators-6frnw\" (UID: \"a5baf423-355d-4b11-8579-e07766dc4939\") " pod="openshift-marketplace/community-operators-6frnw"
Dec 03 13:21:31 crc kubenswrapper[4986]: I1203 13:21:31.591938 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5baf423-355d-4b11-8579-e07766dc4939-catalog-content\") pod \"community-operators-6frnw\" (UID: \"a5baf423-355d-4b11-8579-e07766dc4939\") " pod="openshift-marketplace/community-operators-6frnw"
Dec 03 13:21:31 crc kubenswrapper[4986]: I1203 13:21:31.614265 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gghzq\" (UniqueName: \"kubernetes.io/projected/a5baf423-355d-4b11-8579-e07766dc4939-kube-api-access-gghzq\") pod \"community-operators-6frnw\" (UID: \"a5baf423-355d-4b11-8579-e07766dc4939\") " pod="openshift-marketplace/community-operators-6frnw"
Dec 03 13:21:31 crc kubenswrapper[4986]: I1203 13:21:31.789356 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6frnw"
Dec 03 13:21:32 crc kubenswrapper[4986]: I1203 13:21:32.345246 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6frnw"]
Dec 03 13:21:32 crc kubenswrapper[4986]: I1203 13:21:32.990068 4986 generic.go:334] "Generic (PLEG): container finished" podID="a5baf423-355d-4b11-8579-e07766dc4939" containerID="57908dbe40b95e1e64e52428210e57965622a9a03e18f863f878e0ffeaae571d" exitCode=0
Dec 03 13:21:32 crc kubenswrapper[4986]: I1203 13:21:32.990193 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6frnw" event={"ID":"a5baf423-355d-4b11-8579-e07766dc4939","Type":"ContainerDied","Data":"57908dbe40b95e1e64e52428210e57965622a9a03e18f863f878e0ffeaae571d"}
Dec 03 13:21:32 crc kubenswrapper[4986]: I1203 13:21:32.990670 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6frnw" event={"ID":"a5baf423-355d-4b11-8579-e07766dc4939","Type":"ContainerStarted","Data":"bfa7afeb8958aabe8d7e8470dcf54cbf920beaf7d8e8073fed6f3589cfe64bd6"}
Dec 03 13:21:33 crc kubenswrapper[4986]: I1203 13:21:33.491693 4986 patch_prober.go:28] interesting pod/machine-config-daemon-xggpj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 03 13:21:33 crc kubenswrapper[4986]: I1203 13:21:33.491765 4986 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 03 13:21:33 crc kubenswrapper[4986]: I1203 13:21:33.491826 4986 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xggpj"
Dec 03 13:21:33 crc kubenswrapper[4986]: I1203 13:21:33.492728 4986 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3ba053774b99510def9eb1096d5851d891d9961deee4a7966a8cfbf9b5fbf159"} pod="openshift-machine-config-operator/machine-config-daemon-xggpj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 03 13:21:33 crc kubenswrapper[4986]: I1203 13:21:33.492789 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" containerID="cri-o://3ba053774b99510def9eb1096d5851d891d9961deee4a7966a8cfbf9b5fbf159" gracePeriod=600
Dec 03 13:21:33 crc kubenswrapper[4986]: E1203 13:21:33.616122 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c"
Dec 03 13:21:34 crc kubenswrapper[4986]: I1203 13:21:34.003949 4986 generic.go:334] "Generic (PLEG): container finished" podID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerID="3ba053774b99510def9eb1096d5851d891d9961deee4a7966a8cfbf9b5fbf159" exitCode=0
Dec 03 13:21:34 crc kubenswrapper[4986]: I1203 13:21:34.004008 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" event={"ID":"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c","Type":"ContainerDied","Data":"3ba053774b99510def9eb1096d5851d891d9961deee4a7966a8cfbf9b5fbf159"}
Dec 03 13:21:34 crc kubenswrapper[4986]: I1203 13:21:34.004060 4986 scope.go:117] "RemoveContainer" containerID="c5f89c3f3b886e65cc9822e8ac97f262b26f801c62b7d2e9ded96fcd903af71d"
Dec 03 13:21:34 crc kubenswrapper[4986]: I1203 13:21:34.004439 4986 scope.go:117] "RemoveContainer" containerID="3ba053774b99510def9eb1096d5851d891d9961deee4a7966a8cfbf9b5fbf159"
Dec 03 13:21:34 crc kubenswrapper[4986]: E1203 13:21:34.004675 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c"
Dec 03 13:21:35 crc kubenswrapper[4986]: I1203 13:21:35.020182 4986 generic.go:334] "Generic (PLEG): container finished" podID="a5baf423-355d-4b11-8579-e07766dc4939" containerID="23f54419f6df370539021fe02f37bfd4c39179e9a7ad0c1885bc1837eb83e0d7" exitCode=0
Dec 03 13:21:35 crc kubenswrapper[4986]: I1203 13:21:35.020317 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6frnw" event={"ID":"a5baf423-355d-4b11-8579-e07766dc4939","Type":"ContainerDied","Data":"23f54419f6df370539021fe02f37bfd4c39179e9a7ad0c1885bc1837eb83e0d7"}
Dec 03 13:21:37 crc kubenswrapper[4986]: I1203 13:21:37.049381 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6frnw" event={"ID":"a5baf423-355d-4b11-8579-e07766dc4939","Type":"ContainerStarted","Data":"9f056c8bd55f4982569888c34ffb6aa1bd6f228d29285508faa25799d7ededd1"}
Dec 03 13:21:37 crc kubenswrapper[4986]: I1203 13:21:37.087237 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6frnw" podStartSLOduration=3.212139865 podStartE2EDuration="6.087189975s" podCreationTimestamp="2025-12-03 13:21:31 +0000 UTC" firstStartedPulling="2025-12-03 13:21:32.99303381 +0000 UTC m=+1552.459465011" lastFinishedPulling="2025-12-03 13:21:35.86808393 +0000 UTC m=+1555.334515121" observedRunningTime="2025-12-03 13:21:37.072016747 +0000 UTC m=+1556.538447948" watchObservedRunningTime="2025-12-03 13:21:37.087189975 +0000 UTC m=+1556.553621186"
Dec 03 13:21:37 crc kubenswrapper[4986]: I1203 13:21:37.178433 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Dec 03 13:21:37 crc kubenswrapper[4986]: I1203 13:21:37.178645 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Dec 03 13:21:37 crc kubenswrapper[4986]: I1203 13:21:37.178983 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Dec 03 13:21:37 crc kubenswrapper[4986]: I1203 13:21:37.197210 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Dec 03 13:21:38 crc kubenswrapper[4986]: I1203 13:21:38.058537 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Dec 03 13:21:38 crc kubenswrapper[4986]: I1203 13:21:38.075050 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Dec 03 13:21:40 crc kubenswrapper[4986]: I1203 13:21:40.323406 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Dec 03 13:21:40 crc kubenswrapper[4986]: I1203 13:21:40.324047 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Dec 03 13:21:40 crc kubenswrapper[4986]: I1203 13:21:40.330831 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Dec 03 13:21:40 crc kubenswrapper[4986]: I1203 13:21:40.331160 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Dec 03 13:21:41 crc kubenswrapper[4986]: I1203 13:21:41.789684 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6frnw"
Dec 03 13:21:41 crc kubenswrapper[4986]: I1203 13:21:41.790411 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6frnw"
Dec 03 13:21:41 crc kubenswrapper[4986]: I1203 13:21:41.845519 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6frnw"
Dec 03 13:21:42 crc kubenswrapper[4986]: I1203 13:21:42.133267 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6frnw"
Dec 03 13:21:42 crc kubenswrapper[4986]: I1203 13:21:42.190438 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6frnw"]
Dec 03 13:21:44 crc kubenswrapper[4986]: I1203 13:21:44.108057 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6frnw" podUID="a5baf423-355d-4b11-8579-e07766dc4939" containerName="registry-server" containerID="cri-o://9f056c8bd55f4982569888c34ffb6aa1bd6f228d29285508faa25799d7ededd1" gracePeriod=2
Dec 03 13:21:44 crc kubenswrapper[4986]: I1203 13:21:44.583360 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6frnw"
Dec 03 13:21:44 crc kubenswrapper[4986]: I1203 13:21:44.645839 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5baf423-355d-4b11-8579-e07766dc4939-catalog-content\") pod \"a5baf423-355d-4b11-8579-e07766dc4939\" (UID: \"a5baf423-355d-4b11-8579-e07766dc4939\") "
Dec 03 13:21:44 crc kubenswrapper[4986]: I1203 13:21:44.646016 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5baf423-355d-4b11-8579-e07766dc4939-utilities\") pod \"a5baf423-355d-4b11-8579-e07766dc4939\" (UID: \"a5baf423-355d-4b11-8579-e07766dc4939\") "
Dec 03 13:21:44 crc kubenswrapper[4986]: I1203 13:21:44.646044 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gghzq\" (UniqueName: \"kubernetes.io/projected/a5baf423-355d-4b11-8579-e07766dc4939-kube-api-access-gghzq\") pod \"a5baf423-355d-4b11-8579-e07766dc4939\" (UID: \"a5baf423-355d-4b11-8579-e07766dc4939\") "
Dec 03 13:21:44 crc kubenswrapper[4986]: I1203 13:21:44.646946 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5baf423-355d-4b11-8579-e07766dc4939-utilities" (OuterVolumeSpecName: "utilities") pod "a5baf423-355d-4b11-8579-e07766dc4939" (UID: "a5baf423-355d-4b11-8579-e07766dc4939"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 13:21:44 crc kubenswrapper[4986]: I1203 13:21:44.663670 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5baf423-355d-4b11-8579-e07766dc4939-kube-api-access-gghzq" (OuterVolumeSpecName: "kube-api-access-gghzq") pod "a5baf423-355d-4b11-8579-e07766dc4939" (UID: "a5baf423-355d-4b11-8579-e07766dc4939"). InnerVolumeSpecName "kube-api-access-gghzq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 13:21:44 crc kubenswrapper[4986]: I1203 13:21:44.731392 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5baf423-355d-4b11-8579-e07766dc4939-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a5baf423-355d-4b11-8579-e07766dc4939" (UID: "a5baf423-355d-4b11-8579-e07766dc4939"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 13:21:44 crc kubenswrapper[4986]: I1203 13:21:44.748002 4986 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5baf423-355d-4b11-8579-e07766dc4939-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 03 13:21:44 crc kubenswrapper[4986]: I1203 13:21:44.748046 4986 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5baf423-355d-4b11-8579-e07766dc4939-utilities\") on node \"crc\" DevicePath \"\""
Dec 03 13:21:44 crc kubenswrapper[4986]: I1203 13:21:44.748062 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gghzq\" (UniqueName: \"kubernetes.io/projected/a5baf423-355d-4b11-8579-e07766dc4939-kube-api-access-gghzq\") on node \"crc\" DevicePath \"\""
Dec 03 13:21:45 crc kubenswrapper[4986]: I1203 13:21:45.125303 4986 generic.go:334] "Generic (PLEG): container finished" podID="a5baf423-355d-4b11-8579-e07766dc4939" containerID="9f056c8bd55f4982569888c34ffb6aa1bd6f228d29285508faa25799d7ededd1" exitCode=0
Dec 03 13:21:45 crc kubenswrapper[4986]: I1203 13:21:45.125343 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6frnw" event={"ID":"a5baf423-355d-4b11-8579-e07766dc4939","Type":"ContainerDied","Data":"9f056c8bd55f4982569888c34ffb6aa1bd6f228d29285508faa25799d7ededd1"}
Dec 03 13:21:45 crc kubenswrapper[4986]: I1203 13:21:45.125368 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6frnw" event={"ID":"a5baf423-355d-4b11-8579-e07766dc4939","Type":"ContainerDied","Data":"bfa7afeb8958aabe8d7e8470dcf54cbf920beaf7d8e8073fed6f3589cfe64bd6"}
Dec 03 13:21:45 crc kubenswrapper[4986]: I1203 13:21:45.125386 4986 scope.go:117] "RemoveContainer" containerID="9f056c8bd55f4982569888c34ffb6aa1bd6f228d29285508faa25799d7ededd1"
Dec 03 13:21:45 crc kubenswrapper[4986]: I1203 13:21:45.125519 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6frnw"
Dec 03 13:21:45 crc kubenswrapper[4986]: I1203 13:21:45.181683 4986 scope.go:117] "RemoveContainer" containerID="23f54419f6df370539021fe02f37bfd4c39179e9a7ad0c1885bc1837eb83e0d7"
Dec 03 13:21:45 crc kubenswrapper[4986]: I1203 13:21:45.196137 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6frnw"]
Dec 03 13:21:45 crc kubenswrapper[4986]: I1203 13:21:45.209060 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6frnw"]
Dec 03 13:21:45 crc kubenswrapper[4986]: I1203 13:21:45.219298 4986 scope.go:117] "RemoveContainer" containerID="57908dbe40b95e1e64e52428210e57965622a9a03e18f863f878e0ffeaae571d"
Dec 03 13:21:45 crc kubenswrapper[4986]: I1203 13:21:45.258491 4986 scope.go:117] "RemoveContainer" containerID="9f056c8bd55f4982569888c34ffb6aa1bd6f228d29285508faa25799d7ededd1"
Dec 03 13:21:45 crc kubenswrapper[4986]: E1203 13:21:45.259851 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f056c8bd55f4982569888c34ffb6aa1bd6f228d29285508faa25799d7ededd1\": container with ID starting with 9f056c8bd55f4982569888c34ffb6aa1bd6f228d29285508faa25799d7ededd1 not found: ID does not exist" containerID="9f056c8bd55f4982569888c34ffb6aa1bd6f228d29285508faa25799d7ededd1"
Dec 03 13:21:45 crc kubenswrapper[4986]: I1203 13:21:45.259906 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f056c8bd55f4982569888c34ffb6aa1bd6f228d29285508faa25799d7ededd1"} err="failed to get container status \"9f056c8bd55f4982569888c34ffb6aa1bd6f228d29285508faa25799d7ededd1\": rpc error: code = NotFound desc = could not find container \"9f056c8bd55f4982569888c34ffb6aa1bd6f228d29285508faa25799d7ededd1\": container with ID starting with 9f056c8bd55f4982569888c34ffb6aa1bd6f228d29285508faa25799d7ededd1 not found: ID does not exist"
Dec 03 13:21:45 crc kubenswrapper[4986]: I1203 13:21:45.259938 4986 scope.go:117] "RemoveContainer" containerID="23f54419f6df370539021fe02f37bfd4c39179e9a7ad0c1885bc1837eb83e0d7"
Dec 03 13:21:45 crc kubenswrapper[4986]: E1203 13:21:45.260383 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23f54419f6df370539021fe02f37bfd4c39179e9a7ad0c1885bc1837eb83e0d7\": container with ID starting with 23f54419f6df370539021fe02f37bfd4c39179e9a7ad0c1885bc1837eb83e0d7 not found: ID does not exist" containerID="23f54419f6df370539021fe02f37bfd4c39179e9a7ad0c1885bc1837eb83e0d7"
Dec 03 13:21:45 crc kubenswrapper[4986]: I1203 13:21:45.260416 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23f54419f6df370539021fe02f37bfd4c39179e9a7ad0c1885bc1837eb83e0d7"} err="failed to get container status \"23f54419f6df370539021fe02f37bfd4c39179e9a7ad0c1885bc1837eb83e0d7\": rpc error: code = NotFound desc = could not find container \"23f54419f6df370539021fe02f37bfd4c39179e9a7ad0c1885bc1837eb83e0d7\": container with ID starting with 23f54419f6df370539021fe02f37bfd4c39179e9a7ad0c1885bc1837eb83e0d7 not found: ID does not exist"
Dec 03 13:21:45 crc kubenswrapper[4986]: I1203 13:21:45.260437 4986 scope.go:117] "RemoveContainer" containerID="57908dbe40b95e1e64e52428210e57965622a9a03e18f863f878e0ffeaae571d"
Dec 03 13:21:45 crc kubenswrapper[4986]: E1203 13:21:45.260704 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57908dbe40b95e1e64e52428210e57965622a9a03e18f863f878e0ffeaae571d\": container with ID starting with 57908dbe40b95e1e64e52428210e57965622a9a03e18f863f878e0ffeaae571d not found: ID does not exist" containerID="57908dbe40b95e1e64e52428210e57965622a9a03e18f863f878e0ffeaae571d"
Dec 03 13:21:45 crc kubenswrapper[4986]: I1203 13:21:45.260725 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57908dbe40b95e1e64e52428210e57965622a9a03e18f863f878e0ffeaae571d"} err="failed to get container status \"57908dbe40b95e1e64e52428210e57965622a9a03e18f863f878e0ffeaae571d\": rpc error: code = NotFound desc = could not find container \"57908dbe40b95e1e64e52428210e57965622a9a03e18f863f878e0ffeaae571d\": container with ID starting with 57908dbe40b95e1e64e52428210e57965622a9a03e18f863f878e0ffeaae571d not found: ID does not exist"
Dec 03 13:21:45 crc kubenswrapper[4986]: I1203 13:21:45.467983 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Dec 03 13:21:46 crc kubenswrapper[4986]: I1203 13:21:46.955316 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5baf423-355d-4b11-8579-e07766dc4939" path="/var/lib/kubelet/pods/a5baf423-355d-4b11-8579-e07766dc4939/volumes"
Dec 03 13:21:48 crc kubenswrapper[4986]: I1203 13:21:48.943107 4986 scope.go:117] "RemoveContainer" containerID="3ba053774b99510def9eb1096d5851d891d9961deee4a7966a8cfbf9b5fbf159"
Dec 03 13:21:48 crc kubenswrapper[4986]: E1203 13:21:48.943628 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c"
Dec 03 13:21:55 crc kubenswrapper[4986]: I1203 13:21:55.897124 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 03 13:21:57 crc kubenswrapper[4986]: I1203 13:21:57.176475 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Dec 03 13:21:59 crc kubenswrapper[4986]: I1203 13:21:59.911124 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="285bb825-5a17-4d45-87d6-852513d0351b" containerName="rabbitmq" containerID="cri-o://cd77f1dedb74dc55252506af1fb7526b464897f502c56d636536bcccd59e644a" gracePeriod=604796
Dec 03 13:21:59 crc kubenswrapper[4986]: I1203 13:21:59.943610 4986 scope.go:117] "RemoveContainer" containerID="3ba053774b99510def9eb1096d5851d891d9961deee4a7966a8cfbf9b5fbf159"
Dec 03 13:21:59 crc kubenswrapper[4986]: E1203 13:21:59.943934 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c"
Dec 03 13:22:01 crc kubenswrapper[4986]: I1203 13:22:01.314963 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="5e8a62bd-1e92-464d-b905-8eb18cc44646" containerName="rabbitmq" containerID="cri-o://70937ee4d6412671fece4d919a544662e17c8e49205ef4554e63372362219691" gracePeriod=604796
Dec 03 13:22:06 crc kubenswrapper[4986]: I1203 13:22:06.336182 4986 generic.go:334] "Generic (PLEG): container finished" podID="285bb825-5a17-4d45-87d6-852513d0351b" containerID="cd77f1dedb74dc55252506af1fb7526b464897f502c56d636536bcccd59e644a" exitCode=0
Dec 03 13:22:06 crc kubenswrapper[4986]: I1203 13:22:06.336322 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"285bb825-5a17-4d45-87d6-852513d0351b","Type":"ContainerDied","Data":"cd77f1dedb74dc55252506af1fb7526b464897f502c56d636536bcccd59e644a"}
Dec 03 13:22:06 crc kubenswrapper[4986]: I1203 13:22:06.517266 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Dec 03 13:22:06 crc kubenswrapper[4986]: I1203 13:22:06.588359 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/285bb825-5a17-4d45-87d6-852513d0351b-pod-info\") pod \"285bb825-5a17-4d45-87d6-852513d0351b\" (UID: \"285bb825-5a17-4d45-87d6-852513d0351b\") "
Dec 03 13:22:06 crc kubenswrapper[4986]: I1203 13:22:06.588414 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/285bb825-5a17-4d45-87d6-852513d0351b-rabbitmq-tls\") pod \"285bb825-5a17-4d45-87d6-852513d0351b\" (UID: \"285bb825-5a17-4d45-87d6-852513d0351b\") "
Dec 03 13:22:06 crc kubenswrapper[4986]: I1203 13:22:06.588481 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/285bb825-5a17-4d45-87d6-852513d0351b-config-data\") pod \"285bb825-5a17-4d45-87d6-852513d0351b\" (UID: \"285bb825-5a17-4d45-87d6-852513d0351b\") "
Dec 03 13:22:06 crc kubenswrapper[4986]: I1203 13:22:06.588540 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/285bb825-5a17-4d45-87d6-852513d0351b-server-conf\") pod \"285bb825-5a17-4d45-87d6-852513d0351b\" (UID: 
\"285bb825-5a17-4d45-87d6-852513d0351b\") " Dec 03 13:22:06 crc kubenswrapper[4986]: I1203 13:22:06.588697 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/285bb825-5a17-4d45-87d6-852513d0351b-rabbitmq-plugins\") pod \"285bb825-5a17-4d45-87d6-852513d0351b\" (UID: \"285bb825-5a17-4d45-87d6-852513d0351b\") " Dec 03 13:22:06 crc kubenswrapper[4986]: I1203 13:22:06.588764 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"285bb825-5a17-4d45-87d6-852513d0351b\" (UID: \"285bb825-5a17-4d45-87d6-852513d0351b\") " Dec 03 13:22:06 crc kubenswrapper[4986]: I1203 13:22:06.588788 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/285bb825-5a17-4d45-87d6-852513d0351b-plugins-conf\") pod \"285bb825-5a17-4d45-87d6-852513d0351b\" (UID: \"285bb825-5a17-4d45-87d6-852513d0351b\") " Dec 03 13:22:06 crc kubenswrapper[4986]: I1203 13:22:06.588805 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/285bb825-5a17-4d45-87d6-852513d0351b-rabbitmq-erlang-cookie\") pod \"285bb825-5a17-4d45-87d6-852513d0351b\" (UID: \"285bb825-5a17-4d45-87d6-852513d0351b\") " Dec 03 13:22:06 crc kubenswrapper[4986]: I1203 13:22:06.588870 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/285bb825-5a17-4d45-87d6-852513d0351b-erlang-cookie-secret\") pod \"285bb825-5a17-4d45-87d6-852513d0351b\" (UID: \"285bb825-5a17-4d45-87d6-852513d0351b\") " Dec 03 13:22:06 crc kubenswrapper[4986]: I1203 13:22:06.588906 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/285bb825-5a17-4d45-87d6-852513d0351b-rabbitmq-confd\") pod \"285bb825-5a17-4d45-87d6-852513d0351b\" (UID: \"285bb825-5a17-4d45-87d6-852513d0351b\") " Dec 03 13:22:06 crc kubenswrapper[4986]: I1203 13:22:06.588931 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6d2cx\" (UniqueName: \"kubernetes.io/projected/285bb825-5a17-4d45-87d6-852513d0351b-kube-api-access-6d2cx\") pod \"285bb825-5a17-4d45-87d6-852513d0351b\" (UID: \"285bb825-5a17-4d45-87d6-852513d0351b\") " Dec 03 13:22:06 crc kubenswrapper[4986]: I1203 13:22:06.589384 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/285bb825-5a17-4d45-87d6-852513d0351b-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "285bb825-5a17-4d45-87d6-852513d0351b" (UID: "285bb825-5a17-4d45-87d6-852513d0351b"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:22:06 crc kubenswrapper[4986]: I1203 13:22:06.589463 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/285bb825-5a17-4d45-87d6-852513d0351b-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "285bb825-5a17-4d45-87d6-852513d0351b" (UID: "285bb825-5a17-4d45-87d6-852513d0351b"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:22:06 crc kubenswrapper[4986]: I1203 13:22:06.589972 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/285bb825-5a17-4d45-87d6-852513d0351b-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "285bb825-5a17-4d45-87d6-852513d0351b" (UID: "285bb825-5a17-4d45-87d6-852513d0351b"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:22:06 crc kubenswrapper[4986]: I1203 13:22:06.593791 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "285bb825-5a17-4d45-87d6-852513d0351b" (UID: "285bb825-5a17-4d45-87d6-852513d0351b"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 03 13:22:06 crc kubenswrapper[4986]: I1203 13:22:06.595731 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/285bb825-5a17-4d45-87d6-852513d0351b-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "285bb825-5a17-4d45-87d6-852513d0351b" (UID: "285bb825-5a17-4d45-87d6-852513d0351b"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:22:06 crc kubenswrapper[4986]: I1203 13:22:06.595807 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/285bb825-5a17-4d45-87d6-852513d0351b-pod-info" (OuterVolumeSpecName: "pod-info") pod "285bb825-5a17-4d45-87d6-852513d0351b" (UID: "285bb825-5a17-4d45-87d6-852513d0351b"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 03 13:22:06 crc kubenswrapper[4986]: I1203 13:22:06.603797 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/285bb825-5a17-4d45-87d6-852513d0351b-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "285bb825-5a17-4d45-87d6-852513d0351b" (UID: "285bb825-5a17-4d45-87d6-852513d0351b"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:22:06 crc kubenswrapper[4986]: I1203 13:22:06.606383 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/285bb825-5a17-4d45-87d6-852513d0351b-kube-api-access-6d2cx" (OuterVolumeSpecName: "kube-api-access-6d2cx") pod "285bb825-5a17-4d45-87d6-852513d0351b" (UID: "285bb825-5a17-4d45-87d6-852513d0351b"). InnerVolumeSpecName "kube-api-access-6d2cx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:22:06 crc kubenswrapper[4986]: I1203 13:22:06.645538 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/285bb825-5a17-4d45-87d6-852513d0351b-config-data" (OuterVolumeSpecName: "config-data") pod "285bb825-5a17-4d45-87d6-852513d0351b" (UID: "285bb825-5a17-4d45-87d6-852513d0351b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:22:06 crc kubenswrapper[4986]: I1203 13:22:06.671252 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/285bb825-5a17-4d45-87d6-852513d0351b-server-conf" (OuterVolumeSpecName: "server-conf") pod "285bb825-5a17-4d45-87d6-852513d0351b" (UID: "285bb825-5a17-4d45-87d6-852513d0351b"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:22:06 crc kubenswrapper[4986]: I1203 13:22:06.690709 4986 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/285bb825-5a17-4d45-87d6-852513d0351b-server-conf\") on node \"crc\" DevicePath \"\"" Dec 03 13:22:06 crc kubenswrapper[4986]: I1203 13:22:06.690736 4986 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/285bb825-5a17-4d45-87d6-852513d0351b-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 03 13:22:06 crc kubenswrapper[4986]: I1203 13:22:06.690765 4986 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Dec 03 13:22:06 crc kubenswrapper[4986]: I1203 13:22:06.690774 4986 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/285bb825-5a17-4d45-87d6-852513d0351b-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 03 13:22:06 crc kubenswrapper[4986]: I1203 13:22:06.690785 4986 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/285bb825-5a17-4d45-87d6-852513d0351b-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 03 13:22:06 crc kubenswrapper[4986]: I1203 13:22:06.690796 4986 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/285bb825-5a17-4d45-87d6-852513d0351b-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 03 13:22:06 crc kubenswrapper[4986]: I1203 13:22:06.690805 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6d2cx\" (UniqueName: \"kubernetes.io/projected/285bb825-5a17-4d45-87d6-852513d0351b-kube-api-access-6d2cx\") on node \"crc\" DevicePath \"\"" Dec 03 13:22:06 crc kubenswrapper[4986]: I1203 
13:22:06.690813 4986 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/285bb825-5a17-4d45-87d6-852513d0351b-pod-info\") on node \"crc\" DevicePath \"\"" Dec 03 13:22:06 crc kubenswrapper[4986]: I1203 13:22:06.690820 4986 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/285bb825-5a17-4d45-87d6-852513d0351b-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 03 13:22:06 crc kubenswrapper[4986]: I1203 13:22:06.690828 4986 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/285bb825-5a17-4d45-87d6-852513d0351b-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 13:22:06 crc kubenswrapper[4986]: I1203 13:22:06.721058 4986 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Dec 03 13:22:06 crc kubenswrapper[4986]: I1203 13:22:06.747854 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/285bb825-5a17-4d45-87d6-852513d0351b-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "285bb825-5a17-4d45-87d6-852513d0351b" (UID: "285bb825-5a17-4d45-87d6-852513d0351b"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:22:06 crc kubenswrapper[4986]: I1203 13:22:06.792856 4986 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Dec 03 13:22:06 crc kubenswrapper[4986]: I1203 13:22:06.792896 4986 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/285bb825-5a17-4d45-87d6-852513d0351b-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.347188 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.347141 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"285bb825-5a17-4d45-87d6-852513d0351b","Type":"ContainerDied","Data":"c8375457046ecdbee80a482cf08fbc1e34dcaecc5d8c276ad3770f3e9a91ff2e"} Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.347636 4986 scope.go:117] "RemoveContainer" containerID="cd77f1dedb74dc55252506af1fb7526b464897f502c56d636536bcccd59e644a" Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.387429 4986 scope.go:117] "RemoveContainer" containerID="42ee92ce1ede62523add70392662fba98598b4a99ffb08d37172d9fc355676d5" Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.389101 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.407540 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.419468 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 03 13:22:07 crc kubenswrapper[4986]: E1203 13:22:07.419822 4986 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a5baf423-355d-4b11-8579-e07766dc4939" containerName="extract-content" Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.419837 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5baf423-355d-4b11-8579-e07766dc4939" containerName="extract-content" Dec 03 13:22:07 crc kubenswrapper[4986]: E1203 13:22:07.419850 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="285bb825-5a17-4d45-87d6-852513d0351b" containerName="setup-container" Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.419856 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="285bb825-5a17-4d45-87d6-852513d0351b" containerName="setup-container" Dec 03 13:22:07 crc kubenswrapper[4986]: E1203 13:22:07.419871 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="285bb825-5a17-4d45-87d6-852513d0351b" containerName="rabbitmq" Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.419880 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="285bb825-5a17-4d45-87d6-852513d0351b" containerName="rabbitmq" Dec 03 13:22:07 crc kubenswrapper[4986]: E1203 13:22:07.419890 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5baf423-355d-4b11-8579-e07766dc4939" containerName="registry-server" Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.419896 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5baf423-355d-4b11-8579-e07766dc4939" containerName="registry-server" Dec 03 13:22:07 crc kubenswrapper[4986]: E1203 13:22:07.419908 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5baf423-355d-4b11-8579-e07766dc4939" containerName="extract-utilities" Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.419914 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5baf423-355d-4b11-8579-e07766dc4939" containerName="extract-utilities" Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.420093 4986 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="285bb825-5a17-4d45-87d6-852513d0351b" containerName="rabbitmq" Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.420119 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5baf423-355d-4b11-8579-e07766dc4939" containerName="registry-server" Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.421029 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.430681 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-xhl6q" Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.430856 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.431225 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.431542 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.431612 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.431548 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.431700 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.441756 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.505981 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/67da7713-f27f-48cb-a2f1-4ebea4d2f939-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"67da7713-f27f-48cb-a2f1-4ebea4d2f939\") " pod="openstack/rabbitmq-server-0" Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.506057 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/67da7713-f27f-48cb-a2f1-4ebea4d2f939-pod-info\") pod \"rabbitmq-server-0\" (UID: \"67da7713-f27f-48cb-a2f1-4ebea4d2f939\") " pod="openstack/rabbitmq-server-0" Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.506146 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"67da7713-f27f-48cb-a2f1-4ebea4d2f939\") " pod="openstack/rabbitmq-server-0" Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.506171 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/67da7713-f27f-48cb-a2f1-4ebea4d2f939-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"67da7713-f27f-48cb-a2f1-4ebea4d2f939\") " pod="openstack/rabbitmq-server-0" Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.506401 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6tmn\" (UniqueName: \"kubernetes.io/projected/67da7713-f27f-48cb-a2f1-4ebea4d2f939-kube-api-access-k6tmn\") pod \"rabbitmq-server-0\" (UID: \"67da7713-f27f-48cb-a2f1-4ebea4d2f939\") " pod="openstack/rabbitmq-server-0" Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.506501 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/67da7713-f27f-48cb-a2f1-4ebea4d2f939-rabbitmq-plugins\") pod 
\"rabbitmq-server-0\" (UID: \"67da7713-f27f-48cb-a2f1-4ebea4d2f939\") " pod="openstack/rabbitmq-server-0" Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.506538 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/67da7713-f27f-48cb-a2f1-4ebea4d2f939-config-data\") pod \"rabbitmq-server-0\" (UID: \"67da7713-f27f-48cb-a2f1-4ebea4d2f939\") " pod="openstack/rabbitmq-server-0" Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.506640 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/67da7713-f27f-48cb-a2f1-4ebea4d2f939-server-conf\") pod \"rabbitmq-server-0\" (UID: \"67da7713-f27f-48cb-a2f1-4ebea4d2f939\") " pod="openstack/rabbitmq-server-0" Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.506728 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/67da7713-f27f-48cb-a2f1-4ebea4d2f939-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"67da7713-f27f-48cb-a2f1-4ebea4d2f939\") " pod="openstack/rabbitmq-server-0" Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.506815 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/67da7713-f27f-48cb-a2f1-4ebea4d2f939-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"67da7713-f27f-48cb-a2f1-4ebea4d2f939\") " pod="openstack/rabbitmq-server-0" Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.506839 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/67da7713-f27f-48cb-a2f1-4ebea4d2f939-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: 
\"67da7713-f27f-48cb-a2f1-4ebea4d2f939\") " pod="openstack/rabbitmq-server-0" Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.608982 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/67da7713-f27f-48cb-a2f1-4ebea4d2f939-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"67da7713-f27f-48cb-a2f1-4ebea4d2f939\") " pod="openstack/rabbitmq-server-0" Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.609037 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/67da7713-f27f-48cb-a2f1-4ebea4d2f939-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"67da7713-f27f-48cb-a2f1-4ebea4d2f939\") " pod="openstack/rabbitmq-server-0" Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.609108 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/67da7713-f27f-48cb-a2f1-4ebea4d2f939-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"67da7713-f27f-48cb-a2f1-4ebea4d2f939\") " pod="openstack/rabbitmq-server-0" Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.609143 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/67da7713-f27f-48cb-a2f1-4ebea4d2f939-pod-info\") pod \"rabbitmq-server-0\" (UID: \"67da7713-f27f-48cb-a2f1-4ebea4d2f939\") " pod="openstack/rabbitmq-server-0" Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.609177 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"67da7713-f27f-48cb-a2f1-4ebea4d2f939\") " pod="openstack/rabbitmq-server-0" Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.609205 4986 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/67da7713-f27f-48cb-a2f1-4ebea4d2f939-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"67da7713-f27f-48cb-a2f1-4ebea4d2f939\") " pod="openstack/rabbitmq-server-0" Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.609263 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6tmn\" (UniqueName: \"kubernetes.io/projected/67da7713-f27f-48cb-a2f1-4ebea4d2f939-kube-api-access-k6tmn\") pod \"rabbitmq-server-0\" (UID: \"67da7713-f27f-48cb-a2f1-4ebea4d2f939\") " pod="openstack/rabbitmq-server-0" Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.609320 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/67da7713-f27f-48cb-a2f1-4ebea4d2f939-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"67da7713-f27f-48cb-a2f1-4ebea4d2f939\") " pod="openstack/rabbitmq-server-0" Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.609345 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/67da7713-f27f-48cb-a2f1-4ebea4d2f939-config-data\") pod \"rabbitmq-server-0\" (UID: \"67da7713-f27f-48cb-a2f1-4ebea4d2f939\") " pod="openstack/rabbitmq-server-0" Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.609395 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/67da7713-f27f-48cb-a2f1-4ebea4d2f939-server-conf\") pod \"rabbitmq-server-0\" (UID: \"67da7713-f27f-48cb-a2f1-4ebea4d2f939\") " pod="openstack/rabbitmq-server-0" Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.609432 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/67da7713-f27f-48cb-a2f1-4ebea4d2f939-plugins-conf\") 
pod \"rabbitmq-server-0\" (UID: \"67da7713-f27f-48cb-a2f1-4ebea4d2f939\") " pod="openstack/rabbitmq-server-0" Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.610099 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/67da7713-f27f-48cb-a2f1-4ebea4d2f939-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"67da7713-f27f-48cb-a2f1-4ebea4d2f939\") " pod="openstack/rabbitmq-server-0" Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.610699 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/67da7713-f27f-48cb-a2f1-4ebea4d2f939-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"67da7713-f27f-48cb-a2f1-4ebea4d2f939\") " pod="openstack/rabbitmq-server-0" Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.610894 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/67da7713-f27f-48cb-a2f1-4ebea4d2f939-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"67da7713-f27f-48cb-a2f1-4ebea4d2f939\") " pod="openstack/rabbitmq-server-0" Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.611117 4986 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"67da7713-f27f-48cb-a2f1-4ebea4d2f939\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0" Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.611679 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/67da7713-f27f-48cb-a2f1-4ebea4d2f939-config-data\") pod \"rabbitmq-server-0\" (UID: \"67da7713-f27f-48cb-a2f1-4ebea4d2f939\") " pod="openstack/rabbitmq-server-0" Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 
13:22:07.611842 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/67da7713-f27f-48cb-a2f1-4ebea4d2f939-server-conf\") pod \"rabbitmq-server-0\" (UID: \"67da7713-f27f-48cb-a2f1-4ebea4d2f939\") " pod="openstack/rabbitmq-server-0"
Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.614597 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/67da7713-f27f-48cb-a2f1-4ebea4d2f939-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"67da7713-f27f-48cb-a2f1-4ebea4d2f939\") " pod="openstack/rabbitmq-server-0"
Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.615429 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/67da7713-f27f-48cb-a2f1-4ebea4d2f939-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"67da7713-f27f-48cb-a2f1-4ebea4d2f939\") " pod="openstack/rabbitmq-server-0"
Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.615633 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/67da7713-f27f-48cb-a2f1-4ebea4d2f939-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"67da7713-f27f-48cb-a2f1-4ebea4d2f939\") " pod="openstack/rabbitmq-server-0"
Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.620896 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/67da7713-f27f-48cb-a2f1-4ebea4d2f939-pod-info\") pod \"rabbitmq-server-0\" (UID: \"67da7713-f27f-48cb-a2f1-4ebea4d2f939\") " pod="openstack/rabbitmq-server-0"
Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.640711 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6tmn\" (UniqueName: \"kubernetes.io/projected/67da7713-f27f-48cb-a2f1-4ebea4d2f939-kube-api-access-k6tmn\") pod \"rabbitmq-server-0\" (UID: \"67da7713-f27f-48cb-a2f1-4ebea4d2f939\") " pod="openstack/rabbitmq-server-0"
Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.652413 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"67da7713-f27f-48cb-a2f1-4ebea4d2f939\") " pod="openstack/rabbitmq-server-0"
Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.868942 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.880489 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.920503 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"5e8a62bd-1e92-464d-b905-8eb18cc44646\" (UID: \"5e8a62bd-1e92-464d-b905-8eb18cc44646\") "
Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.920567 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5e8a62bd-1e92-464d-b905-8eb18cc44646-rabbitmq-confd\") pod \"5e8a62bd-1e92-464d-b905-8eb18cc44646\" (UID: \"5e8a62bd-1e92-464d-b905-8eb18cc44646\") "
Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.920637 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5e8a62bd-1e92-464d-b905-8eb18cc44646-server-conf\") pod \"5e8a62bd-1e92-464d-b905-8eb18cc44646\" (UID: \"5e8a62bd-1e92-464d-b905-8eb18cc44646\") "
Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.920689 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5e8a62bd-1e92-464d-b905-8eb18cc44646-rabbitmq-tls\") pod \"5e8a62bd-1e92-464d-b905-8eb18cc44646\" (UID: \"5e8a62bd-1e92-464d-b905-8eb18cc44646\") "
Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.920760 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5e8a62bd-1e92-464d-b905-8eb18cc44646-plugins-conf\") pod \"5e8a62bd-1e92-464d-b905-8eb18cc44646\" (UID: \"5e8a62bd-1e92-464d-b905-8eb18cc44646\") "
Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.920806 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5e8a62bd-1e92-464d-b905-8eb18cc44646-pod-info\") pod \"5e8a62bd-1e92-464d-b905-8eb18cc44646\" (UID: \"5e8a62bd-1e92-464d-b905-8eb18cc44646\") "
Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.920871 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5e8a62bd-1e92-464d-b905-8eb18cc44646-config-data\") pod \"5e8a62bd-1e92-464d-b905-8eb18cc44646\" (UID: \"5e8a62bd-1e92-464d-b905-8eb18cc44646\") "
Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.920920 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5e8a62bd-1e92-464d-b905-8eb18cc44646-rabbitmq-erlang-cookie\") pod \"5e8a62bd-1e92-464d-b905-8eb18cc44646\" (UID: \"5e8a62bd-1e92-464d-b905-8eb18cc44646\") "
Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.920980 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdgs8\" (UniqueName: \"kubernetes.io/projected/5e8a62bd-1e92-464d-b905-8eb18cc44646-kube-api-access-pdgs8\") pod \"5e8a62bd-1e92-464d-b905-8eb18cc44646\" (UID: \"5e8a62bd-1e92-464d-b905-8eb18cc44646\") "
Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.921020 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5e8a62bd-1e92-464d-b905-8eb18cc44646-erlang-cookie-secret\") pod \"5e8a62bd-1e92-464d-b905-8eb18cc44646\" (UID: \"5e8a62bd-1e92-464d-b905-8eb18cc44646\") "
Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.921048 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5e8a62bd-1e92-464d-b905-8eb18cc44646-rabbitmq-plugins\") pod \"5e8a62bd-1e92-464d-b905-8eb18cc44646\" (UID: \"5e8a62bd-1e92-464d-b905-8eb18cc44646\") "
Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.922669 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e8a62bd-1e92-464d-b905-8eb18cc44646-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "5e8a62bd-1e92-464d-b905-8eb18cc44646" (UID: "5e8a62bd-1e92-464d-b905-8eb18cc44646"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.928180 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e8a62bd-1e92-464d-b905-8eb18cc44646-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "5e8a62bd-1e92-464d-b905-8eb18cc44646" (UID: "5e8a62bd-1e92-464d-b905-8eb18cc44646"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.933002 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e8a62bd-1e92-464d-b905-8eb18cc44646-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "5e8a62bd-1e92-464d-b905-8eb18cc44646" (UID: "5e8a62bd-1e92-464d-b905-8eb18cc44646"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.935135 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e8a62bd-1e92-464d-b905-8eb18cc44646-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "5e8a62bd-1e92-464d-b905-8eb18cc44646" (UID: "5e8a62bd-1e92-464d-b905-8eb18cc44646"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.937818 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e8a62bd-1e92-464d-b905-8eb18cc44646-kube-api-access-pdgs8" (OuterVolumeSpecName: "kube-api-access-pdgs8") pod "5e8a62bd-1e92-464d-b905-8eb18cc44646" (UID: "5e8a62bd-1e92-464d-b905-8eb18cc44646"). InnerVolumeSpecName "kube-api-access-pdgs8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.940107 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e8a62bd-1e92-464d-b905-8eb18cc44646-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "5e8a62bd-1e92-464d-b905-8eb18cc44646" (UID: "5e8a62bd-1e92-464d-b905-8eb18cc44646"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.941611 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "persistence") pod "5e8a62bd-1e92-464d-b905-8eb18cc44646" (UID: "5e8a62bd-1e92-464d-b905-8eb18cc44646"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.956463 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/5e8a62bd-1e92-464d-b905-8eb18cc44646-pod-info" (OuterVolumeSpecName: "pod-info") pod "5e8a62bd-1e92-464d-b905-8eb18cc44646" (UID: "5e8a62bd-1e92-464d-b905-8eb18cc44646"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Dec 03 13:22:07 crc kubenswrapper[4986]: I1203 13:22:07.964432 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e8a62bd-1e92-464d-b905-8eb18cc44646-config-data" (OuterVolumeSpecName: "config-data") pod "5e8a62bd-1e92-464d-b905-8eb18cc44646" (UID: "5e8a62bd-1e92-464d-b905-8eb18cc44646"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.024446 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e8a62bd-1e92-464d-b905-8eb18cc44646-server-conf" (OuterVolumeSpecName: "server-conf") pod "5e8a62bd-1e92-464d-b905-8eb18cc44646" (UID: "5e8a62bd-1e92-464d-b905-8eb18cc44646"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.027778 4986 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" "
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.027812 4986 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5e8a62bd-1e92-464d-b905-8eb18cc44646-server-conf\") on node \"crc\" DevicePath \"\""
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.027825 4986 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5e8a62bd-1e92-464d-b905-8eb18cc44646-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.027845 4986 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5e8a62bd-1e92-464d-b905-8eb18cc44646-plugins-conf\") on node \"crc\" DevicePath \"\""
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.027856 4986 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5e8a62bd-1e92-464d-b905-8eb18cc44646-pod-info\") on node \"crc\" DevicePath \"\""
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.027867 4986 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5e8a62bd-1e92-464d-b905-8eb18cc44646-config-data\") on node \"crc\" DevicePath \"\""
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.027878 4986 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5e8a62bd-1e92-464d-b905-8eb18cc44646-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.027890 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdgs8\" (UniqueName: \"kubernetes.io/projected/5e8a62bd-1e92-464d-b905-8eb18cc44646-kube-api-access-pdgs8\") on node \"crc\" DevicePath \"\""
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.027901 4986 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5e8a62bd-1e92-464d-b905-8eb18cc44646-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.027911 4986 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5e8a62bd-1e92-464d-b905-8eb18cc44646-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.066191 4986 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc"
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.092246 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e8a62bd-1e92-464d-b905-8eb18cc44646-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "5e8a62bd-1e92-464d-b905-8eb18cc44646" (UID: "5e8a62bd-1e92-464d-b905-8eb18cc44646"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.141370 4986 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\""
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.141404 4986 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5e8a62bd-1e92-464d-b905-8eb18cc44646-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.359870 4986 generic.go:334] "Generic (PLEG): container finished" podID="5e8a62bd-1e92-464d-b905-8eb18cc44646" containerID="70937ee4d6412671fece4d919a544662e17c8e49205ef4554e63372362219691" exitCode=0
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.359940 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5e8a62bd-1e92-464d-b905-8eb18cc44646","Type":"ContainerDied","Data":"70937ee4d6412671fece4d919a544662e17c8e49205ef4554e63372362219691"}
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.360239 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5e8a62bd-1e92-464d-b905-8eb18cc44646","Type":"ContainerDied","Data":"b1ebe45443ee46ac60425599ab9f28dcb0701675bbd250c8a50ac52f0f0c5dcd"}
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.360270 4986 scope.go:117] "RemoveContainer" containerID="70937ee4d6412671fece4d919a544662e17c8e49205ef4554e63372362219691"
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.360013 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.387326 4986 scope.go:117] "RemoveContainer" containerID="920c3ace47a148c8d0728716a7fd4dea74b6788fc8e1d4cbff7078089dbef14c"
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.402369 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.416102 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.433480 4986 scope.go:117] "RemoveContainer" containerID="70937ee4d6412671fece4d919a544662e17c8e49205ef4554e63372362219691"
Dec 03 13:22:08 crc kubenswrapper[4986]: E1203 13:22:08.436255 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70937ee4d6412671fece4d919a544662e17c8e49205ef4554e63372362219691\": container with ID starting with 70937ee4d6412671fece4d919a544662e17c8e49205ef4554e63372362219691 not found: ID does not exist" containerID="70937ee4d6412671fece4d919a544662e17c8e49205ef4554e63372362219691"
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.436313 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70937ee4d6412671fece4d919a544662e17c8e49205ef4554e63372362219691"} err="failed to get container status \"70937ee4d6412671fece4d919a544662e17c8e49205ef4554e63372362219691\": rpc error: code = NotFound desc = could not find container \"70937ee4d6412671fece4d919a544662e17c8e49205ef4554e63372362219691\": container with ID starting with 70937ee4d6412671fece4d919a544662e17c8e49205ef4554e63372362219691 not found: ID does not exist"
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.436338 4986 scope.go:117] "RemoveContainer" containerID="920c3ace47a148c8d0728716a7fd4dea74b6788fc8e1d4cbff7078089dbef14c"
Dec 03 13:22:08 crc kubenswrapper[4986]: E1203 13:22:08.436774 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"920c3ace47a148c8d0728716a7fd4dea74b6788fc8e1d4cbff7078089dbef14c\": container with ID starting with 920c3ace47a148c8d0728716a7fd4dea74b6788fc8e1d4cbff7078089dbef14c not found: ID does not exist" containerID="920c3ace47a148c8d0728716a7fd4dea74b6788fc8e1d4cbff7078089dbef14c"
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.436816 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"920c3ace47a148c8d0728716a7fd4dea74b6788fc8e1d4cbff7078089dbef14c"} err="failed to get container status \"920c3ace47a148c8d0728716a7fd4dea74b6788fc8e1d4cbff7078089dbef14c\": rpc error: code = NotFound desc = could not find container \"920c3ace47a148c8d0728716a7fd4dea74b6788fc8e1d4cbff7078089dbef14c\": container with ID starting with 920c3ace47a148c8d0728716a7fd4dea74b6788fc8e1d4cbff7078089dbef14c not found: ID does not exist"
Dec 03 13:22:08 crc kubenswrapper[4986]: W1203 13:22:08.441888 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67da7713_f27f_48cb_a2f1_4ebea4d2f939.slice/crio-0e2ec39470c80e330377acf6ea4332544d818d7b157fe5f48bee07b07fed9c2e WatchSource:0}: Error finding container 0e2ec39470c80e330377acf6ea4332544d818d7b157fe5f48bee07b07fed9c2e: Status 404 returned error can't find the container with id 0e2ec39470c80e330377acf6ea4332544d818d7b157fe5f48bee07b07fed9c2e
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.441946 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.454012 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Dec 03 13:22:08 crc kubenswrapper[4986]: E1203 13:22:08.454438 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e8a62bd-1e92-464d-b905-8eb18cc44646" containerName="rabbitmq"
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.454461 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e8a62bd-1e92-464d-b905-8eb18cc44646" containerName="rabbitmq"
Dec 03 13:22:08 crc kubenswrapper[4986]: E1203 13:22:08.454481 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e8a62bd-1e92-464d-b905-8eb18cc44646" containerName="setup-container"
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.454488 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e8a62bd-1e92-464d-b905-8eb18cc44646" containerName="setup-container"
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.454694 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e8a62bd-1e92-464d-b905-8eb18cc44646" containerName="rabbitmq"
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.455847 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.463681 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.463925 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.464031 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-qvv8d"
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.464225 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.464305 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.464422 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.464741 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.482112 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.654961 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x84qs\" (UniqueName: \"kubernetes.io/projected/e06b4596-d4ac-4524-a521-ae6edfc239be-kube-api-access-x84qs\") pod \"rabbitmq-cell1-server-0\" (UID: \"e06b4596-d4ac-4524-a521-ae6edfc239be\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.655028 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e06b4596-d4ac-4524-a521-ae6edfc239be-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e06b4596-d4ac-4524-a521-ae6edfc239be\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.655105 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e06b4596-d4ac-4524-a521-ae6edfc239be-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e06b4596-d4ac-4524-a521-ae6edfc239be\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.655158 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e06b4596-d4ac-4524-a521-ae6edfc239be-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e06b4596-d4ac-4524-a521-ae6edfc239be\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.655187 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e06b4596-d4ac-4524-a521-ae6edfc239be-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e06b4596-d4ac-4524-a521-ae6edfc239be\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.655225 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e06b4596-d4ac-4524-a521-ae6edfc239be-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e06b4596-d4ac-4524-a521-ae6edfc239be\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.655250 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e06b4596-d4ac-4524-a521-ae6edfc239be-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e06b4596-d4ac-4524-a521-ae6edfc239be\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.655718 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e06b4596-d4ac-4524-a521-ae6edfc239be-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e06b4596-d4ac-4524-a521-ae6edfc239be\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.655761 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e06b4596-d4ac-4524-a521-ae6edfc239be-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e06b4596-d4ac-4524-a521-ae6edfc239be\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.655855 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e06b4596-d4ac-4524-a521-ae6edfc239be-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e06b4596-d4ac-4524-a521-ae6edfc239be\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.655942 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e06b4596-d4ac-4524-a521-ae6edfc239be\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.757623 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e06b4596-d4ac-4524-a521-ae6edfc239be\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.757710 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x84qs\" (UniqueName: \"kubernetes.io/projected/e06b4596-d4ac-4524-a521-ae6edfc239be-kube-api-access-x84qs\") pod \"rabbitmq-cell1-server-0\" (UID: \"e06b4596-d4ac-4524-a521-ae6edfc239be\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.757750 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e06b4596-d4ac-4524-a521-ae6edfc239be-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e06b4596-d4ac-4524-a521-ae6edfc239be\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.757780 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e06b4596-d4ac-4524-a521-ae6edfc239be-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e06b4596-d4ac-4524-a521-ae6edfc239be\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.757832 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e06b4596-d4ac-4524-a521-ae6edfc239be-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e06b4596-d4ac-4524-a521-ae6edfc239be\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.757862 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e06b4596-d4ac-4524-a521-ae6edfc239be-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e06b4596-d4ac-4524-a521-ae6edfc239be\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.758067 4986 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e06b4596-d4ac-4524-a521-ae6edfc239be\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-cell1-server-0"
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.761100 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e06b4596-d4ac-4524-a521-ae6edfc239be-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e06b4596-d4ac-4524-a521-ae6edfc239be\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.761164 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e06b4596-d4ac-4524-a521-ae6edfc239be-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e06b4596-d4ac-4524-a521-ae6edfc239be\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.761308 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e06b4596-d4ac-4524-a521-ae6edfc239be-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e06b4596-d4ac-4524-a521-ae6edfc239be\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.761342 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e06b4596-d4ac-4524-a521-ae6edfc239be-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e06b4596-d4ac-4524-a521-ae6edfc239be\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.761412 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e06b4596-d4ac-4524-a521-ae6edfc239be-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e06b4596-d4ac-4524-a521-ae6edfc239be\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.763636 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e06b4596-d4ac-4524-a521-ae6edfc239be-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e06b4596-d4ac-4524-a521-ae6edfc239be\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.764220 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e06b4596-d4ac-4524-a521-ae6edfc239be-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e06b4596-d4ac-4524-a521-ae6edfc239be\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.764838 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e06b4596-d4ac-4524-a521-ae6edfc239be-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e06b4596-d4ac-4524-a521-ae6edfc239be\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.766621 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e06b4596-d4ac-4524-a521-ae6edfc239be-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e06b4596-d4ac-4524-a521-ae6edfc239be\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.768299 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e06b4596-d4ac-4524-a521-ae6edfc239be-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e06b4596-d4ac-4524-a521-ae6edfc239be\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.768459 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e06b4596-d4ac-4524-a521-ae6edfc239be-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e06b4596-d4ac-4524-a521-ae6edfc239be\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.768956 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e06b4596-d4ac-4524-a521-ae6edfc239be-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e06b4596-d4ac-4524-a521-ae6edfc239be\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.768968 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e06b4596-d4ac-4524-a521-ae6edfc239be-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e06b4596-d4ac-4524-a521-ae6edfc239be\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.772154 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e06b4596-d4ac-4524-a521-ae6edfc239be-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e06b4596-d4ac-4524-a521-ae6edfc239be\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.786256 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x84qs\" (UniqueName: \"kubernetes.io/projected/e06b4596-d4ac-4524-a521-ae6edfc239be-kube-api-access-x84qs\") pod \"rabbitmq-cell1-server-0\" (UID: \"e06b4596-d4ac-4524-a521-ae6edfc239be\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.790400 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e06b4596-d4ac-4524-a521-ae6edfc239be\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.904540 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.957651 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="285bb825-5a17-4d45-87d6-852513d0351b" path="/var/lib/kubelet/pods/285bb825-5a17-4d45-87d6-852513d0351b/volumes"
Dec 03 13:22:08 crc kubenswrapper[4986]: I1203 13:22:08.958473 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e8a62bd-1e92-464d-b905-8eb18cc44646" path="/var/lib/kubelet/pods/5e8a62bd-1e92-464d-b905-8eb18cc44646/volumes"
Dec 03 13:22:09 crc kubenswrapper[4986]: I1203 13:22:09.331383 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Dec 03 13:22:09 crc kubenswrapper[4986]: W1203 13:22:09.336348 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode06b4596_d4ac_4524_a521_ae6edfc239be.slice/crio-3cf9f52247de1c15034e7df03f07d8c8d8101b7f2b2dde3bfa0a22bf3d706699 WatchSource:0}: Error finding container 3cf9f52247de1c15034e7df03f07d8c8d8101b7f2b2dde3bfa0a22bf3d706699: Status 404 returned error can't find the container with id 3cf9f52247de1c15034e7df03f07d8c8d8101b7f2b2dde3bfa0a22bf3d706699
Dec 03 13:22:09 crc kubenswrapper[4986]: I1203 13:22:09.376276 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e06b4596-d4ac-4524-a521-ae6edfc239be","Type":"ContainerStarted","Data":"3cf9f52247de1c15034e7df03f07d8c8d8101b7f2b2dde3bfa0a22bf3d706699"}
Dec 03 13:22:09 crc kubenswrapper[4986]: I1203 13:22:09.377739 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"67da7713-f27f-48cb-a2f1-4ebea4d2f939","Type":"ContainerStarted","Data":"0e2ec39470c80e330377acf6ea4332544d818d7b157fe5f48bee07b07fed9c2e"}
Dec 03 13:22:09 crc kubenswrapper[4986]: I1203 13:22:09.573593 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-xfqvx"]
Dec 03 13:22:09 crc kubenswrapper[4986]: I1203 13:22:09.575047 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-xfqvx"
Dec 03 13:22:09 crc kubenswrapper[4986]: I1203 13:22:09.578716 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam"
Dec 03 13:22:09 crc kubenswrapper[4986]: I1203 13:22:09.594957 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-xfqvx"]
Dec 03 13:22:09 crc kubenswrapper[4986]: I1203 13:22:09.678106 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8c2265e3-e015-4d42-87c0-39947c96de4c-openstack-edpm-ipam\") pod \"dnsmasq-dns-67b789f86c-xfqvx\" (UID: \"8c2265e3-e015-4d42-87c0-39947c96de4c\") " pod="openstack/dnsmasq-dns-67b789f86c-xfqvx"
Dec 03 13:22:09 crc kubenswrapper[4986]: I1203 13:22:09.678151 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c2265e3-e015-4d42-87c0-39947c96de4c-config\") pod \"dnsmasq-dns-67b789f86c-xfqvx\" (UID: \"8c2265e3-e015-4d42-87c0-39947c96de4c\") " pod="openstack/dnsmasq-dns-67b789f86c-xfqvx"
Dec 03 13:22:09 crc kubenswrapper[4986]: I1203 13:22:09.678188 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgrx7\" (UniqueName: \"kubernetes.io/projected/8c2265e3-e015-4d42-87c0-39947c96de4c-kube-api-access-rgrx7\") pod \"dnsmasq-dns-67b789f86c-xfqvx\" (UID: \"8c2265e3-e015-4d42-87c0-39947c96de4c\") " pod="openstack/dnsmasq-dns-67b789f86c-xfqvx"
Dec 03 13:22:09 crc kubenswrapper[4986]: I1203 13:22:09.678245 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c2265e3-e015-4d42-87c0-39947c96de4c-dns-swift-storage-0\") pod \"dnsmasq-dns-67b789f86c-xfqvx\" (UID: \"8c2265e3-e015-4d42-87c0-39947c96de4c\") " pod="openstack/dnsmasq-dns-67b789f86c-xfqvx"
Dec 03 13:22:09 crc kubenswrapper[4986]: I1203 13:22:09.678264 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c2265e3-e015-4d42-87c0-39947c96de4c-dns-svc\") pod \"dnsmasq-dns-67b789f86c-xfqvx\" (UID: \"8c2265e3-e015-4d42-87c0-39947c96de4c\") " pod="openstack/dnsmasq-dns-67b789f86c-xfqvx"
Dec 03 13:22:09 crc kubenswrapper[4986]: I1203 13:22:09.678306 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c2265e3-e015-4d42-87c0-39947c96de4c-ovsdbserver-sb\") pod \"dnsmasq-dns-67b789f86c-xfqvx\" (UID: \"8c2265e3-e015-4d42-87c0-39947c96de4c\") " pod="openstack/dnsmasq-dns-67b789f86c-xfqvx"
Dec 03 13:22:09 crc kubenswrapper[4986]: I1203 13:22:09.678379 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c2265e3-e015-4d42-87c0-39947c96de4c-ovsdbserver-nb\") pod \"dnsmasq-dns-67b789f86c-xfqvx\" (UID: \"8c2265e3-e015-4d42-87c0-39947c96de4c\") " pod="openstack/dnsmasq-dns-67b789f86c-xfqvx"
Dec 03 13:22:09 crc kubenswrapper[4986]: I1203 13:22:09.779721 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c2265e3-e015-4d42-87c0-39947c96de4c-dns-swift-storage-0\") pod \"dnsmasq-dns-67b789f86c-xfqvx\" (UID: \"8c2265e3-e015-4d42-87c0-39947c96de4c\") " pod="openstack/dnsmasq-dns-67b789f86c-xfqvx"
Dec 03 13:22:09 crc kubenswrapper[4986]: I1203 13:22:09.779767 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName:
\"kubernetes.io/configmap/8c2265e3-e015-4d42-87c0-39947c96de4c-dns-svc\") pod \"dnsmasq-dns-67b789f86c-xfqvx\" (UID: \"8c2265e3-e015-4d42-87c0-39947c96de4c\") " pod="openstack/dnsmasq-dns-67b789f86c-xfqvx" Dec 03 13:22:09 crc kubenswrapper[4986]: I1203 13:22:09.779790 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c2265e3-e015-4d42-87c0-39947c96de4c-ovsdbserver-sb\") pod \"dnsmasq-dns-67b789f86c-xfqvx\" (UID: \"8c2265e3-e015-4d42-87c0-39947c96de4c\") " pod="openstack/dnsmasq-dns-67b789f86c-xfqvx" Dec 03 13:22:09 crc kubenswrapper[4986]: I1203 13:22:09.779886 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c2265e3-e015-4d42-87c0-39947c96de4c-ovsdbserver-nb\") pod \"dnsmasq-dns-67b789f86c-xfqvx\" (UID: \"8c2265e3-e015-4d42-87c0-39947c96de4c\") " pod="openstack/dnsmasq-dns-67b789f86c-xfqvx" Dec 03 13:22:09 crc kubenswrapper[4986]: I1203 13:22:09.779942 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8c2265e3-e015-4d42-87c0-39947c96de4c-openstack-edpm-ipam\") pod \"dnsmasq-dns-67b789f86c-xfqvx\" (UID: \"8c2265e3-e015-4d42-87c0-39947c96de4c\") " pod="openstack/dnsmasq-dns-67b789f86c-xfqvx" Dec 03 13:22:09 crc kubenswrapper[4986]: I1203 13:22:09.779979 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c2265e3-e015-4d42-87c0-39947c96de4c-config\") pod \"dnsmasq-dns-67b789f86c-xfqvx\" (UID: \"8c2265e3-e015-4d42-87c0-39947c96de4c\") " pod="openstack/dnsmasq-dns-67b789f86c-xfqvx" Dec 03 13:22:09 crc kubenswrapper[4986]: I1203 13:22:09.780019 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgrx7\" (UniqueName: 
\"kubernetes.io/projected/8c2265e3-e015-4d42-87c0-39947c96de4c-kube-api-access-rgrx7\") pod \"dnsmasq-dns-67b789f86c-xfqvx\" (UID: \"8c2265e3-e015-4d42-87c0-39947c96de4c\") " pod="openstack/dnsmasq-dns-67b789f86c-xfqvx" Dec 03 13:22:09 crc kubenswrapper[4986]: I1203 13:22:09.780644 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c2265e3-e015-4d42-87c0-39947c96de4c-dns-swift-storage-0\") pod \"dnsmasq-dns-67b789f86c-xfqvx\" (UID: \"8c2265e3-e015-4d42-87c0-39947c96de4c\") " pod="openstack/dnsmasq-dns-67b789f86c-xfqvx" Dec 03 13:22:09 crc kubenswrapper[4986]: I1203 13:22:09.780672 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c2265e3-e015-4d42-87c0-39947c96de4c-dns-svc\") pod \"dnsmasq-dns-67b789f86c-xfqvx\" (UID: \"8c2265e3-e015-4d42-87c0-39947c96de4c\") " pod="openstack/dnsmasq-dns-67b789f86c-xfqvx" Dec 03 13:22:09 crc kubenswrapper[4986]: I1203 13:22:09.780953 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c2265e3-e015-4d42-87c0-39947c96de4c-ovsdbserver-sb\") pod \"dnsmasq-dns-67b789f86c-xfqvx\" (UID: \"8c2265e3-e015-4d42-87c0-39947c96de4c\") " pod="openstack/dnsmasq-dns-67b789f86c-xfqvx" Dec 03 13:22:09 crc kubenswrapper[4986]: I1203 13:22:09.781468 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c2265e3-e015-4d42-87c0-39947c96de4c-ovsdbserver-nb\") pod \"dnsmasq-dns-67b789f86c-xfqvx\" (UID: \"8c2265e3-e015-4d42-87c0-39947c96de4c\") " pod="openstack/dnsmasq-dns-67b789f86c-xfqvx" Dec 03 13:22:09 crc kubenswrapper[4986]: I1203 13:22:09.782035 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c2265e3-e015-4d42-87c0-39947c96de4c-config\") pod 
\"dnsmasq-dns-67b789f86c-xfqvx\" (UID: \"8c2265e3-e015-4d42-87c0-39947c96de4c\") " pod="openstack/dnsmasq-dns-67b789f86c-xfqvx" Dec 03 13:22:09 crc kubenswrapper[4986]: I1203 13:22:09.782094 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8c2265e3-e015-4d42-87c0-39947c96de4c-openstack-edpm-ipam\") pod \"dnsmasq-dns-67b789f86c-xfqvx\" (UID: \"8c2265e3-e015-4d42-87c0-39947c96de4c\") " pod="openstack/dnsmasq-dns-67b789f86c-xfqvx" Dec 03 13:22:09 crc kubenswrapper[4986]: I1203 13:22:09.835949 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgrx7\" (UniqueName: \"kubernetes.io/projected/8c2265e3-e015-4d42-87c0-39947c96de4c-kube-api-access-rgrx7\") pod \"dnsmasq-dns-67b789f86c-xfqvx\" (UID: \"8c2265e3-e015-4d42-87c0-39947c96de4c\") " pod="openstack/dnsmasq-dns-67b789f86c-xfqvx" Dec 03 13:22:10 crc kubenswrapper[4986]: I1203 13:22:10.004755 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-xfqvx" Dec 03 13:22:10 crc kubenswrapper[4986]: I1203 13:22:10.389077 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"67da7713-f27f-48cb-a2f1-4ebea4d2f939","Type":"ContainerStarted","Data":"87177d16200ea147231735c829b55d78efc758ad7e635bb2344722fc5c59befe"} Dec 03 13:22:10 crc kubenswrapper[4986]: I1203 13:22:10.453236 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-xfqvx"] Dec 03 13:22:11 crc kubenswrapper[4986]: I1203 13:22:11.401344 4986 generic.go:334] "Generic (PLEG): container finished" podID="8c2265e3-e015-4d42-87c0-39947c96de4c" containerID="816ce9ed52374029cf38ef93db4efc855101d87b4ed30fe377db296a5b107945" exitCode=0 Dec 03 13:22:11 crc kubenswrapper[4986]: I1203 13:22:11.401417 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-xfqvx" event={"ID":"8c2265e3-e015-4d42-87c0-39947c96de4c","Type":"ContainerDied","Data":"816ce9ed52374029cf38ef93db4efc855101d87b4ed30fe377db296a5b107945"} Dec 03 13:22:11 crc kubenswrapper[4986]: I1203 13:22:11.401445 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-xfqvx" event={"ID":"8c2265e3-e015-4d42-87c0-39947c96de4c","Type":"ContainerStarted","Data":"da140050a05f7f78579a3963d1d95335f9bdfb94cfab522642d2f78d1fa53529"} Dec 03 13:22:11 crc kubenswrapper[4986]: I1203 13:22:11.405877 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e06b4596-d4ac-4524-a521-ae6edfc239be","Type":"ContainerStarted","Data":"59e7e865fbeb6230f0e51da56a52c9631c407ec482955660c911ca06de3f87c8"} Dec 03 13:22:12 crc kubenswrapper[4986]: I1203 13:22:12.415756 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-xfqvx" 
event={"ID":"8c2265e3-e015-4d42-87c0-39947c96de4c","Type":"ContainerStarted","Data":"2065b312350f84b7ad745d483cd6f9266f4f739bd396273c0f924caf2a343828"} Dec 03 13:22:12 crc kubenswrapper[4986]: I1203 13:22:12.435820 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67b789f86c-xfqvx" podStartSLOduration=3.4357894939999998 podStartE2EDuration="3.435789494s" podCreationTimestamp="2025-12-03 13:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:22:12.434234532 +0000 UTC m=+1591.900665733" watchObservedRunningTime="2025-12-03 13:22:12.435789494 +0000 UTC m=+1591.902220675" Dec 03 13:22:13 crc kubenswrapper[4986]: I1203 13:22:13.424296 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67b789f86c-xfqvx" Dec 03 13:22:13 crc kubenswrapper[4986]: I1203 13:22:13.943857 4986 scope.go:117] "RemoveContainer" containerID="3ba053774b99510def9eb1096d5851d891d9961deee4a7966a8cfbf9b5fbf159" Dec 03 13:22:13 crc kubenswrapper[4986]: E1203 13:22:13.944198 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:22:20 crc kubenswrapper[4986]: I1203 13:22:20.037515 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67b789f86c-xfqvx" Dec 03 13:22:20 crc kubenswrapper[4986]: I1203 13:22:20.124951 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-gkvxp"] Dec 03 13:22:20 crc kubenswrapper[4986]: I1203 13:22:20.125204 4986 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-59cf4bdb65-gkvxp" podUID="23dca844-dc9b-4e62-b6e0-1e4da1adc56c" containerName="dnsmasq-dns" containerID="cri-o://2e70e48617359050c096cc8404d18b9a95d775bbc8e20b2a463af72bad203f5f" gracePeriod=10 Dec 03 13:22:20 crc kubenswrapper[4986]: I1203 13:22:20.150652 4986 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-59cf4bdb65-gkvxp" podUID="23dca844-dc9b-4e62-b6e0-1e4da1adc56c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.196:5353: connect: connection refused" Dec 03 13:22:20 crc kubenswrapper[4986]: I1203 13:22:20.261665 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cb6ffcf87-pklsb"] Dec 03 13:22:20 crc kubenswrapper[4986]: I1203 13:22:20.263515 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cb6ffcf87-pklsb" Dec 03 13:22:20 crc kubenswrapper[4986]: I1203 13:22:20.297365 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cb6ffcf87-pklsb"] Dec 03 13:22:20 crc kubenswrapper[4986]: I1203 13:22:20.386210 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33180d69-fad9-4b8f-877a-f68644b85da8-ovsdbserver-sb\") pod \"dnsmasq-dns-cb6ffcf87-pklsb\" (UID: \"33180d69-fad9-4b8f-877a-f68644b85da8\") " pod="openstack/dnsmasq-dns-cb6ffcf87-pklsb" Dec 03 13:22:20 crc kubenswrapper[4986]: I1203 13:22:20.386326 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33180d69-fad9-4b8f-877a-f68644b85da8-dns-svc\") pod \"dnsmasq-dns-cb6ffcf87-pklsb\" (UID: \"33180d69-fad9-4b8f-877a-f68644b85da8\") " pod="openstack/dnsmasq-dns-cb6ffcf87-pklsb" Dec 03 13:22:20 crc kubenswrapper[4986]: I1203 13:22:20.386352 4986 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f24nm\" (UniqueName: \"kubernetes.io/projected/33180d69-fad9-4b8f-877a-f68644b85da8-kube-api-access-f24nm\") pod \"dnsmasq-dns-cb6ffcf87-pklsb\" (UID: \"33180d69-fad9-4b8f-877a-f68644b85da8\") " pod="openstack/dnsmasq-dns-cb6ffcf87-pklsb" Dec 03 13:22:20 crc kubenswrapper[4986]: I1203 13:22:20.386370 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33180d69-fad9-4b8f-877a-f68644b85da8-config\") pod \"dnsmasq-dns-cb6ffcf87-pklsb\" (UID: \"33180d69-fad9-4b8f-877a-f68644b85da8\") " pod="openstack/dnsmasq-dns-cb6ffcf87-pklsb" Dec 03 13:22:20 crc kubenswrapper[4986]: I1203 13:22:20.386392 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/33180d69-fad9-4b8f-877a-f68644b85da8-dns-swift-storage-0\") pod \"dnsmasq-dns-cb6ffcf87-pklsb\" (UID: \"33180d69-fad9-4b8f-877a-f68644b85da8\") " pod="openstack/dnsmasq-dns-cb6ffcf87-pklsb" Dec 03 13:22:20 crc kubenswrapper[4986]: I1203 13:22:20.387793 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/33180d69-fad9-4b8f-877a-f68644b85da8-openstack-edpm-ipam\") pod \"dnsmasq-dns-cb6ffcf87-pklsb\" (UID: \"33180d69-fad9-4b8f-877a-f68644b85da8\") " pod="openstack/dnsmasq-dns-cb6ffcf87-pklsb" Dec 03 13:22:20 crc kubenswrapper[4986]: I1203 13:22:20.387831 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33180d69-fad9-4b8f-877a-f68644b85da8-ovsdbserver-nb\") pod \"dnsmasq-dns-cb6ffcf87-pklsb\" (UID: \"33180d69-fad9-4b8f-877a-f68644b85da8\") " pod="openstack/dnsmasq-dns-cb6ffcf87-pklsb" Dec 03 13:22:20 crc 
kubenswrapper[4986]: I1203 13:22:20.490153 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/33180d69-fad9-4b8f-877a-f68644b85da8-dns-swift-storage-0\") pod \"dnsmasq-dns-cb6ffcf87-pklsb\" (UID: \"33180d69-fad9-4b8f-877a-f68644b85da8\") " pod="openstack/dnsmasq-dns-cb6ffcf87-pklsb" Dec 03 13:22:20 crc kubenswrapper[4986]: I1203 13:22:20.490262 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/33180d69-fad9-4b8f-877a-f68644b85da8-openstack-edpm-ipam\") pod \"dnsmasq-dns-cb6ffcf87-pklsb\" (UID: \"33180d69-fad9-4b8f-877a-f68644b85da8\") " pod="openstack/dnsmasq-dns-cb6ffcf87-pklsb" Dec 03 13:22:20 crc kubenswrapper[4986]: I1203 13:22:20.490308 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33180d69-fad9-4b8f-877a-f68644b85da8-ovsdbserver-nb\") pod \"dnsmasq-dns-cb6ffcf87-pklsb\" (UID: \"33180d69-fad9-4b8f-877a-f68644b85da8\") " pod="openstack/dnsmasq-dns-cb6ffcf87-pklsb" Dec 03 13:22:20 crc kubenswrapper[4986]: I1203 13:22:20.490365 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33180d69-fad9-4b8f-877a-f68644b85da8-ovsdbserver-sb\") pod \"dnsmasq-dns-cb6ffcf87-pklsb\" (UID: \"33180d69-fad9-4b8f-877a-f68644b85da8\") " pod="openstack/dnsmasq-dns-cb6ffcf87-pklsb" Dec 03 13:22:20 crc kubenswrapper[4986]: I1203 13:22:20.490445 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33180d69-fad9-4b8f-877a-f68644b85da8-dns-svc\") pod \"dnsmasq-dns-cb6ffcf87-pklsb\" (UID: \"33180d69-fad9-4b8f-877a-f68644b85da8\") " pod="openstack/dnsmasq-dns-cb6ffcf87-pklsb" Dec 03 13:22:20 crc kubenswrapper[4986]: I1203 13:22:20.490481 4986 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f24nm\" (UniqueName: \"kubernetes.io/projected/33180d69-fad9-4b8f-877a-f68644b85da8-kube-api-access-f24nm\") pod \"dnsmasq-dns-cb6ffcf87-pklsb\" (UID: \"33180d69-fad9-4b8f-877a-f68644b85da8\") " pod="openstack/dnsmasq-dns-cb6ffcf87-pklsb" Dec 03 13:22:20 crc kubenswrapper[4986]: I1203 13:22:20.490499 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33180d69-fad9-4b8f-877a-f68644b85da8-config\") pod \"dnsmasq-dns-cb6ffcf87-pklsb\" (UID: \"33180d69-fad9-4b8f-877a-f68644b85da8\") " pod="openstack/dnsmasq-dns-cb6ffcf87-pklsb" Dec 03 13:22:20 crc kubenswrapper[4986]: I1203 13:22:20.491517 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/33180d69-fad9-4b8f-877a-f68644b85da8-openstack-edpm-ipam\") pod \"dnsmasq-dns-cb6ffcf87-pklsb\" (UID: \"33180d69-fad9-4b8f-877a-f68644b85da8\") " pod="openstack/dnsmasq-dns-cb6ffcf87-pklsb" Dec 03 13:22:20 crc kubenswrapper[4986]: I1203 13:22:20.491600 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33180d69-fad9-4b8f-877a-f68644b85da8-dns-svc\") pod \"dnsmasq-dns-cb6ffcf87-pklsb\" (UID: \"33180d69-fad9-4b8f-877a-f68644b85da8\") " pod="openstack/dnsmasq-dns-cb6ffcf87-pklsb" Dec 03 13:22:20 crc kubenswrapper[4986]: I1203 13:22:20.491967 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/33180d69-fad9-4b8f-877a-f68644b85da8-dns-swift-storage-0\") pod \"dnsmasq-dns-cb6ffcf87-pklsb\" (UID: \"33180d69-fad9-4b8f-877a-f68644b85da8\") " pod="openstack/dnsmasq-dns-cb6ffcf87-pklsb" Dec 03 13:22:20 crc kubenswrapper[4986]: I1203 13:22:20.492404 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33180d69-fad9-4b8f-877a-f68644b85da8-ovsdbserver-sb\") pod \"dnsmasq-dns-cb6ffcf87-pklsb\" (UID: \"33180d69-fad9-4b8f-877a-f68644b85da8\") " pod="openstack/dnsmasq-dns-cb6ffcf87-pklsb" Dec 03 13:22:20 crc kubenswrapper[4986]: I1203 13:22:20.492529 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33180d69-fad9-4b8f-877a-f68644b85da8-ovsdbserver-nb\") pod \"dnsmasq-dns-cb6ffcf87-pklsb\" (UID: \"33180d69-fad9-4b8f-877a-f68644b85da8\") " pod="openstack/dnsmasq-dns-cb6ffcf87-pklsb" Dec 03 13:22:20 crc kubenswrapper[4986]: I1203 13:22:20.492636 4986 generic.go:334] "Generic (PLEG): container finished" podID="23dca844-dc9b-4e62-b6e0-1e4da1adc56c" containerID="2e70e48617359050c096cc8404d18b9a95d775bbc8e20b2a463af72bad203f5f" exitCode=0 Dec 03 13:22:20 crc kubenswrapper[4986]: I1203 13:22:20.492671 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-gkvxp" event={"ID":"23dca844-dc9b-4e62-b6e0-1e4da1adc56c","Type":"ContainerDied","Data":"2e70e48617359050c096cc8404d18b9a95d775bbc8e20b2a463af72bad203f5f"} Dec 03 13:22:20 crc kubenswrapper[4986]: I1203 13:22:20.492729 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33180d69-fad9-4b8f-877a-f68644b85da8-config\") pod \"dnsmasq-dns-cb6ffcf87-pklsb\" (UID: \"33180d69-fad9-4b8f-877a-f68644b85da8\") " pod="openstack/dnsmasq-dns-cb6ffcf87-pklsb" Dec 03 13:22:20 crc kubenswrapper[4986]: I1203 13:22:20.523764 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f24nm\" (UniqueName: \"kubernetes.io/projected/33180d69-fad9-4b8f-877a-f68644b85da8-kube-api-access-f24nm\") pod \"dnsmasq-dns-cb6ffcf87-pklsb\" (UID: \"33180d69-fad9-4b8f-877a-f68644b85da8\") " pod="openstack/dnsmasq-dns-cb6ffcf87-pklsb" Dec 03 13:22:20 crc kubenswrapper[4986]: I1203 
13:22:20.614222 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cb6ffcf87-pklsb" Dec 03 13:22:20 crc kubenswrapper[4986]: I1203 13:22:20.751606 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-gkvxp" Dec 03 13:22:20 crc kubenswrapper[4986]: I1203 13:22:20.899861 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjl4g\" (UniqueName: \"kubernetes.io/projected/23dca844-dc9b-4e62-b6e0-1e4da1adc56c-kube-api-access-gjl4g\") pod \"23dca844-dc9b-4e62-b6e0-1e4da1adc56c\" (UID: \"23dca844-dc9b-4e62-b6e0-1e4da1adc56c\") " Dec 03 13:22:20 crc kubenswrapper[4986]: I1203 13:22:20.899918 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23dca844-dc9b-4e62-b6e0-1e4da1adc56c-dns-svc\") pod \"23dca844-dc9b-4e62-b6e0-1e4da1adc56c\" (UID: \"23dca844-dc9b-4e62-b6e0-1e4da1adc56c\") " Dec 03 13:22:20 crc kubenswrapper[4986]: I1203 13:22:20.899993 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23dca844-dc9b-4e62-b6e0-1e4da1adc56c-ovsdbserver-sb\") pod \"23dca844-dc9b-4e62-b6e0-1e4da1adc56c\" (UID: \"23dca844-dc9b-4e62-b6e0-1e4da1adc56c\") " Dec 03 13:22:20 crc kubenswrapper[4986]: I1203 13:22:20.900048 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23dca844-dc9b-4e62-b6e0-1e4da1adc56c-ovsdbserver-nb\") pod \"23dca844-dc9b-4e62-b6e0-1e4da1adc56c\" (UID: \"23dca844-dc9b-4e62-b6e0-1e4da1adc56c\") " Dec 03 13:22:20 crc kubenswrapper[4986]: I1203 13:22:20.900071 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23dca844-dc9b-4e62-b6e0-1e4da1adc56c-config\") pod 
\"23dca844-dc9b-4e62-b6e0-1e4da1adc56c\" (UID: \"23dca844-dc9b-4e62-b6e0-1e4da1adc56c\") " Dec 03 13:22:20 crc kubenswrapper[4986]: I1203 13:22:20.900208 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23dca844-dc9b-4e62-b6e0-1e4da1adc56c-dns-swift-storage-0\") pod \"23dca844-dc9b-4e62-b6e0-1e4da1adc56c\" (UID: \"23dca844-dc9b-4e62-b6e0-1e4da1adc56c\") " Dec 03 13:22:20 crc kubenswrapper[4986]: I1203 13:22:20.904121 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23dca844-dc9b-4e62-b6e0-1e4da1adc56c-kube-api-access-gjl4g" (OuterVolumeSpecName: "kube-api-access-gjl4g") pod "23dca844-dc9b-4e62-b6e0-1e4da1adc56c" (UID: "23dca844-dc9b-4e62-b6e0-1e4da1adc56c"). InnerVolumeSpecName "kube-api-access-gjl4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:22:20 crc kubenswrapper[4986]: I1203 13:22:20.950107 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23dca844-dc9b-4e62-b6e0-1e4da1adc56c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "23dca844-dc9b-4e62-b6e0-1e4da1adc56c" (UID: "23dca844-dc9b-4e62-b6e0-1e4da1adc56c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:22:20 crc kubenswrapper[4986]: I1203 13:22:20.956764 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23dca844-dc9b-4e62-b6e0-1e4da1adc56c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "23dca844-dc9b-4e62-b6e0-1e4da1adc56c" (UID: "23dca844-dc9b-4e62-b6e0-1e4da1adc56c"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:22:20 crc kubenswrapper[4986]: I1203 13:22:20.963729 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23dca844-dc9b-4e62-b6e0-1e4da1adc56c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "23dca844-dc9b-4e62-b6e0-1e4da1adc56c" (UID: "23dca844-dc9b-4e62-b6e0-1e4da1adc56c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:22:20 crc kubenswrapper[4986]: I1203 13:22:20.963831 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23dca844-dc9b-4e62-b6e0-1e4da1adc56c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "23dca844-dc9b-4e62-b6e0-1e4da1adc56c" (UID: "23dca844-dc9b-4e62-b6e0-1e4da1adc56c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:22:20 crc kubenswrapper[4986]: I1203 13:22:20.966808 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23dca844-dc9b-4e62-b6e0-1e4da1adc56c-config" (OuterVolumeSpecName: "config") pod "23dca844-dc9b-4e62-b6e0-1e4da1adc56c" (UID: "23dca844-dc9b-4e62-b6e0-1e4da1adc56c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:22:21 crc kubenswrapper[4986]: I1203 13:22:21.003664 4986 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23dca844-dc9b-4e62-b6e0-1e4da1adc56c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 03 13:22:21 crc kubenswrapper[4986]: I1203 13:22:21.004254 4986 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23dca844-dc9b-4e62-b6e0-1e4da1adc56c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 03 13:22:21 crc kubenswrapper[4986]: I1203 13:22:21.004269 4986 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23dca844-dc9b-4e62-b6e0-1e4da1adc56c-config\") on node \"crc\" DevicePath \"\"" Dec 03 13:22:21 crc kubenswrapper[4986]: I1203 13:22:21.004295 4986 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23dca844-dc9b-4e62-b6e0-1e4da1adc56c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 03 13:22:21 crc kubenswrapper[4986]: I1203 13:22:21.004308 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjl4g\" (UniqueName: \"kubernetes.io/projected/23dca844-dc9b-4e62-b6e0-1e4da1adc56c-kube-api-access-gjl4g\") on node \"crc\" DevicePath \"\"" Dec 03 13:22:21 crc kubenswrapper[4986]: I1203 13:22:21.004320 4986 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23dca844-dc9b-4e62-b6e0-1e4da1adc56c-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 13:22:21 crc kubenswrapper[4986]: I1203 13:22:21.119719 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cb6ffcf87-pklsb"] Dec 03 13:22:21 crc kubenswrapper[4986]: W1203 13:22:21.121188 4986 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33180d69_fad9_4b8f_877a_f68644b85da8.slice/crio-9292b6e0ea19e24e5d068b1cfa1acb67967889e370ae097d82cadbd67bd20001 WatchSource:0}: Error finding container 9292b6e0ea19e24e5d068b1cfa1acb67967889e370ae097d82cadbd67bd20001: Status 404 returned error can't find the container with id 9292b6e0ea19e24e5d068b1cfa1acb67967889e370ae097d82cadbd67bd20001 Dec 03 13:22:21 crc kubenswrapper[4986]: I1203 13:22:21.506417 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-gkvxp" event={"ID":"23dca844-dc9b-4e62-b6e0-1e4da1adc56c","Type":"ContainerDied","Data":"d69dbd973f2d6efe90d6f800e7e49b6791df67a59c29bec6a850a71169980047"} Dec 03 13:22:21 crc kubenswrapper[4986]: I1203 13:22:21.506482 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-gkvxp" Dec 03 13:22:21 crc kubenswrapper[4986]: I1203 13:22:21.506722 4986 scope.go:117] "RemoveContainer" containerID="2e70e48617359050c096cc8404d18b9a95d775bbc8e20b2a463af72bad203f5f" Dec 03 13:22:21 crc kubenswrapper[4986]: I1203 13:22:21.510399 4986 generic.go:334] "Generic (PLEG): container finished" podID="33180d69-fad9-4b8f-877a-f68644b85da8" containerID="e9f468862804b23f9be75bb0797c712779aa9354415f019fe960f24e576ccfa6" exitCode=0 Dec 03 13:22:21 crc kubenswrapper[4986]: I1203 13:22:21.510446 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb6ffcf87-pklsb" event={"ID":"33180d69-fad9-4b8f-877a-f68644b85da8","Type":"ContainerDied","Data":"e9f468862804b23f9be75bb0797c712779aa9354415f019fe960f24e576ccfa6"} Dec 03 13:22:21 crc kubenswrapper[4986]: I1203 13:22:21.510483 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb6ffcf87-pklsb" event={"ID":"33180d69-fad9-4b8f-877a-f68644b85da8","Type":"ContainerStarted","Data":"9292b6e0ea19e24e5d068b1cfa1acb67967889e370ae097d82cadbd67bd20001"} Dec 03 13:22:21 crc 
kubenswrapper[4986]: I1203 13:22:21.528966 4986 scope.go:117] "RemoveContainer" containerID="a8892e2c480f999002948bba340f5f58de1d1a642642ff0e3cc8d0b088850f34" Dec 03 13:22:21 crc kubenswrapper[4986]: I1203 13:22:21.728385 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-gkvxp"] Dec 03 13:22:21 crc kubenswrapper[4986]: I1203 13:22:21.735883 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-gkvxp"] Dec 03 13:22:22 crc kubenswrapper[4986]: I1203 13:22:22.540188 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb6ffcf87-pklsb" event={"ID":"33180d69-fad9-4b8f-877a-f68644b85da8","Type":"ContainerStarted","Data":"a8335e385f2e2807c21d6c95c3fee962d3d3a5ffc1cc7dc4dc1c0ab844329632"} Dec 03 13:22:22 crc kubenswrapper[4986]: I1203 13:22:22.540481 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cb6ffcf87-pklsb" Dec 03 13:22:22 crc kubenswrapper[4986]: I1203 13:22:22.955967 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23dca844-dc9b-4e62-b6e0-1e4da1adc56c" path="/var/lib/kubelet/pods/23dca844-dc9b-4e62-b6e0-1e4da1adc56c/volumes" Dec 03 13:22:28 crc kubenswrapper[4986]: I1203 13:22:28.944560 4986 scope.go:117] "RemoveContainer" containerID="3ba053774b99510def9eb1096d5851d891d9961deee4a7966a8cfbf9b5fbf159" Dec 03 13:22:28 crc kubenswrapper[4986]: E1203 13:22:28.945845 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:22:30 crc kubenswrapper[4986]: I1203 13:22:30.615534 4986 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cb6ffcf87-pklsb" Dec 03 13:22:30 crc kubenswrapper[4986]: I1203 13:22:30.643604 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cb6ffcf87-pklsb" podStartSLOduration=10.643584707 podStartE2EDuration="10.643584707s" podCreationTimestamp="2025-12-03 13:22:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:22:22.565665555 +0000 UTC m=+1602.032096746" watchObservedRunningTime="2025-12-03 13:22:30.643584707 +0000 UTC m=+1610.110015898" Dec 03 13:22:30 crc kubenswrapper[4986]: I1203 13:22:30.697169 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-xfqvx"] Dec 03 13:22:30 crc kubenswrapper[4986]: I1203 13:22:30.697519 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67b789f86c-xfqvx" podUID="8c2265e3-e015-4d42-87c0-39947c96de4c" containerName="dnsmasq-dns" containerID="cri-o://2065b312350f84b7ad745d483cd6f9266f4f739bd396273c0f924caf2a343828" gracePeriod=10 Dec 03 13:22:31 crc kubenswrapper[4986]: I1203 13:22:31.265036 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-xfqvx" Dec 03 13:22:31 crc kubenswrapper[4986]: I1203 13:22:31.411192 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c2265e3-e015-4d42-87c0-39947c96de4c-config\") pod \"8c2265e3-e015-4d42-87c0-39947c96de4c\" (UID: \"8c2265e3-e015-4d42-87c0-39947c96de4c\") " Dec 03 13:22:31 crc kubenswrapper[4986]: I1203 13:22:31.411547 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c2265e3-e015-4d42-87c0-39947c96de4c-dns-svc\") pod \"8c2265e3-e015-4d42-87c0-39947c96de4c\" (UID: \"8c2265e3-e015-4d42-87c0-39947c96de4c\") " Dec 03 13:22:31 crc kubenswrapper[4986]: I1203 13:22:31.411681 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c2265e3-e015-4d42-87c0-39947c96de4c-dns-swift-storage-0\") pod \"8c2265e3-e015-4d42-87c0-39947c96de4c\" (UID: \"8c2265e3-e015-4d42-87c0-39947c96de4c\") " Dec 03 13:22:31 crc kubenswrapper[4986]: I1203 13:22:31.411715 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgrx7\" (UniqueName: \"kubernetes.io/projected/8c2265e3-e015-4d42-87c0-39947c96de4c-kube-api-access-rgrx7\") pod \"8c2265e3-e015-4d42-87c0-39947c96de4c\" (UID: \"8c2265e3-e015-4d42-87c0-39947c96de4c\") " Dec 03 13:22:31 crc kubenswrapper[4986]: I1203 13:22:31.411768 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8c2265e3-e015-4d42-87c0-39947c96de4c-openstack-edpm-ipam\") pod \"8c2265e3-e015-4d42-87c0-39947c96de4c\" (UID: \"8c2265e3-e015-4d42-87c0-39947c96de4c\") " Dec 03 13:22:31 crc kubenswrapper[4986]: I1203 13:22:31.411843 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c2265e3-e015-4d42-87c0-39947c96de4c-ovsdbserver-sb\") pod \"8c2265e3-e015-4d42-87c0-39947c96de4c\" (UID: \"8c2265e3-e015-4d42-87c0-39947c96de4c\") " Dec 03 13:22:31 crc kubenswrapper[4986]: I1203 13:22:31.411886 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c2265e3-e015-4d42-87c0-39947c96de4c-ovsdbserver-nb\") pod \"8c2265e3-e015-4d42-87c0-39947c96de4c\" (UID: \"8c2265e3-e015-4d42-87c0-39947c96de4c\") " Dec 03 13:22:31 crc kubenswrapper[4986]: I1203 13:22:31.422425 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c2265e3-e015-4d42-87c0-39947c96de4c-kube-api-access-rgrx7" (OuterVolumeSpecName: "kube-api-access-rgrx7") pod "8c2265e3-e015-4d42-87c0-39947c96de4c" (UID: "8c2265e3-e015-4d42-87c0-39947c96de4c"). InnerVolumeSpecName "kube-api-access-rgrx7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:22:31 crc kubenswrapper[4986]: I1203 13:22:31.466746 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c2265e3-e015-4d42-87c0-39947c96de4c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8c2265e3-e015-4d42-87c0-39947c96de4c" (UID: "8c2265e3-e015-4d42-87c0-39947c96de4c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:22:31 crc kubenswrapper[4986]: I1203 13:22:31.479435 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c2265e3-e015-4d42-87c0-39947c96de4c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8c2265e3-e015-4d42-87c0-39947c96de4c" (UID: "8c2265e3-e015-4d42-87c0-39947c96de4c"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:22:31 crc kubenswrapper[4986]: I1203 13:22:31.483719 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c2265e3-e015-4d42-87c0-39947c96de4c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8c2265e3-e015-4d42-87c0-39947c96de4c" (UID: "8c2265e3-e015-4d42-87c0-39947c96de4c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:22:31 crc kubenswrapper[4986]: I1203 13:22:31.495486 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c2265e3-e015-4d42-87c0-39947c96de4c-config" (OuterVolumeSpecName: "config") pod "8c2265e3-e015-4d42-87c0-39947c96de4c" (UID: "8c2265e3-e015-4d42-87c0-39947c96de4c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:22:31 crc kubenswrapper[4986]: I1203 13:22:31.496309 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c2265e3-e015-4d42-87c0-39947c96de4c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8c2265e3-e015-4d42-87c0-39947c96de4c" (UID: "8c2265e3-e015-4d42-87c0-39947c96de4c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:22:31 crc kubenswrapper[4986]: I1203 13:22:31.513746 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c2265e3-e015-4d42-87c0-39947c96de4c-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "8c2265e3-e015-4d42-87c0-39947c96de4c" (UID: "8c2265e3-e015-4d42-87c0-39947c96de4c"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:22:31 crc kubenswrapper[4986]: I1203 13:22:31.514184 4986 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c2265e3-e015-4d42-87c0-39947c96de4c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 03 13:22:31 crc kubenswrapper[4986]: I1203 13:22:31.514211 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgrx7\" (UniqueName: \"kubernetes.io/projected/8c2265e3-e015-4d42-87c0-39947c96de4c-kube-api-access-rgrx7\") on node \"crc\" DevicePath \"\"" Dec 03 13:22:31 crc kubenswrapper[4986]: I1203 13:22:31.514225 4986 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8c2265e3-e015-4d42-87c0-39947c96de4c-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 03 13:22:31 crc kubenswrapper[4986]: I1203 13:22:31.514236 4986 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c2265e3-e015-4d42-87c0-39947c96de4c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 03 13:22:31 crc kubenswrapper[4986]: I1203 13:22:31.514246 4986 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c2265e3-e015-4d42-87c0-39947c96de4c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 03 13:22:31 crc kubenswrapper[4986]: I1203 13:22:31.514258 4986 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c2265e3-e015-4d42-87c0-39947c96de4c-config\") on node \"crc\" DevicePath \"\"" Dec 03 13:22:31 crc kubenswrapper[4986]: I1203 13:22:31.514268 4986 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c2265e3-e015-4d42-87c0-39947c96de4c-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 13:22:31 crc kubenswrapper[4986]: I1203 13:22:31.632037 
4986 generic.go:334] "Generic (PLEG): container finished" podID="8c2265e3-e015-4d42-87c0-39947c96de4c" containerID="2065b312350f84b7ad745d483cd6f9266f4f739bd396273c0f924caf2a343828" exitCode=0 Dec 03 13:22:31 crc kubenswrapper[4986]: I1203 13:22:31.632081 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-xfqvx" event={"ID":"8c2265e3-e015-4d42-87c0-39947c96de4c","Type":"ContainerDied","Data":"2065b312350f84b7ad745d483cd6f9266f4f739bd396273c0f924caf2a343828"} Dec 03 13:22:31 crc kubenswrapper[4986]: I1203 13:22:31.632110 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-xfqvx" event={"ID":"8c2265e3-e015-4d42-87c0-39947c96de4c","Type":"ContainerDied","Data":"da140050a05f7f78579a3963d1d95335f9bdfb94cfab522642d2f78d1fa53529"} Dec 03 13:22:31 crc kubenswrapper[4986]: I1203 13:22:31.632127 4986 scope.go:117] "RemoveContainer" containerID="2065b312350f84b7ad745d483cd6f9266f4f739bd396273c0f924caf2a343828" Dec 03 13:22:31 crc kubenswrapper[4986]: I1203 13:22:31.632239 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-xfqvx" Dec 03 13:22:31 crc kubenswrapper[4986]: I1203 13:22:31.659424 4986 scope.go:117] "RemoveContainer" containerID="816ce9ed52374029cf38ef93db4efc855101d87b4ed30fe377db296a5b107945" Dec 03 13:22:31 crc kubenswrapper[4986]: I1203 13:22:31.662351 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-xfqvx"] Dec 03 13:22:31 crc kubenswrapper[4986]: I1203 13:22:31.671228 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-xfqvx"] Dec 03 13:22:31 crc kubenswrapper[4986]: I1203 13:22:31.692575 4986 scope.go:117] "RemoveContainer" containerID="2065b312350f84b7ad745d483cd6f9266f4f739bd396273c0f924caf2a343828" Dec 03 13:22:31 crc kubenswrapper[4986]: E1203 13:22:31.693335 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2065b312350f84b7ad745d483cd6f9266f4f739bd396273c0f924caf2a343828\": container with ID starting with 2065b312350f84b7ad745d483cd6f9266f4f739bd396273c0f924caf2a343828 not found: ID does not exist" containerID="2065b312350f84b7ad745d483cd6f9266f4f739bd396273c0f924caf2a343828" Dec 03 13:22:31 crc kubenswrapper[4986]: I1203 13:22:31.693390 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2065b312350f84b7ad745d483cd6f9266f4f739bd396273c0f924caf2a343828"} err="failed to get container status \"2065b312350f84b7ad745d483cd6f9266f4f739bd396273c0f924caf2a343828\": rpc error: code = NotFound desc = could not find container \"2065b312350f84b7ad745d483cd6f9266f4f739bd396273c0f924caf2a343828\": container with ID starting with 2065b312350f84b7ad745d483cd6f9266f4f739bd396273c0f924caf2a343828 not found: ID does not exist" Dec 03 13:22:31 crc kubenswrapper[4986]: I1203 13:22:31.693447 4986 scope.go:117] "RemoveContainer" containerID="816ce9ed52374029cf38ef93db4efc855101d87b4ed30fe377db296a5b107945" Dec 03 
13:22:31 crc kubenswrapper[4986]: E1203 13:22:31.693849 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"816ce9ed52374029cf38ef93db4efc855101d87b4ed30fe377db296a5b107945\": container with ID starting with 816ce9ed52374029cf38ef93db4efc855101d87b4ed30fe377db296a5b107945 not found: ID does not exist" containerID="816ce9ed52374029cf38ef93db4efc855101d87b4ed30fe377db296a5b107945" Dec 03 13:22:31 crc kubenswrapper[4986]: I1203 13:22:31.693880 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"816ce9ed52374029cf38ef93db4efc855101d87b4ed30fe377db296a5b107945"} err="failed to get container status \"816ce9ed52374029cf38ef93db4efc855101d87b4ed30fe377db296a5b107945\": rpc error: code = NotFound desc = could not find container \"816ce9ed52374029cf38ef93db4efc855101d87b4ed30fe377db296a5b107945\": container with ID starting with 816ce9ed52374029cf38ef93db4efc855101d87b4ed30fe377db296a5b107945 not found: ID does not exist" Dec 03 13:22:32 crc kubenswrapper[4986]: I1203 13:22:32.955218 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c2265e3-e015-4d42-87c0-39947c96de4c" path="/var/lib/kubelet/pods/8c2265e3-e015-4d42-87c0-39947c96de4c/volumes" Dec 03 13:22:33 crc kubenswrapper[4986]: I1203 13:22:33.979501 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9x84l"] Dec 03 13:22:33 crc kubenswrapper[4986]: E1203 13:22:33.979937 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c2265e3-e015-4d42-87c0-39947c96de4c" containerName="dnsmasq-dns" Dec 03 13:22:33 crc kubenswrapper[4986]: I1203 13:22:33.979950 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c2265e3-e015-4d42-87c0-39947c96de4c" containerName="dnsmasq-dns" Dec 03 13:22:33 crc kubenswrapper[4986]: E1203 13:22:33.979966 4986 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="23dca844-dc9b-4e62-b6e0-1e4da1adc56c" containerName="dnsmasq-dns" Dec 03 13:22:33 crc kubenswrapper[4986]: I1203 13:22:33.979971 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="23dca844-dc9b-4e62-b6e0-1e4da1adc56c" containerName="dnsmasq-dns" Dec 03 13:22:33 crc kubenswrapper[4986]: E1203 13:22:33.979985 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23dca844-dc9b-4e62-b6e0-1e4da1adc56c" containerName="init" Dec 03 13:22:33 crc kubenswrapper[4986]: I1203 13:22:33.979991 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="23dca844-dc9b-4e62-b6e0-1e4da1adc56c" containerName="init" Dec 03 13:22:33 crc kubenswrapper[4986]: E1203 13:22:33.980006 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c2265e3-e015-4d42-87c0-39947c96de4c" containerName="init" Dec 03 13:22:33 crc kubenswrapper[4986]: I1203 13:22:33.980011 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c2265e3-e015-4d42-87c0-39947c96de4c" containerName="init" Dec 03 13:22:33 crc kubenswrapper[4986]: I1203 13:22:33.980188 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="23dca844-dc9b-4e62-b6e0-1e4da1adc56c" containerName="dnsmasq-dns" Dec 03 13:22:33 crc kubenswrapper[4986]: I1203 13:22:33.980215 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c2265e3-e015-4d42-87c0-39947c96de4c" containerName="dnsmasq-dns" Dec 03 13:22:33 crc kubenswrapper[4986]: I1203 13:22:33.981489 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9x84l" Dec 03 13:22:33 crc kubenswrapper[4986]: I1203 13:22:33.987772 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9x84l"] Dec 03 13:22:34 crc kubenswrapper[4986]: I1203 13:22:34.062017 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ms8v\" (UniqueName: \"kubernetes.io/projected/0dcb510a-6807-41ce-b6b5-4a781ea0c48b-kube-api-access-2ms8v\") pod \"redhat-operators-9x84l\" (UID: \"0dcb510a-6807-41ce-b6b5-4a781ea0c48b\") " pod="openshift-marketplace/redhat-operators-9x84l" Dec 03 13:22:34 crc kubenswrapper[4986]: I1203 13:22:34.062266 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dcb510a-6807-41ce-b6b5-4a781ea0c48b-utilities\") pod \"redhat-operators-9x84l\" (UID: \"0dcb510a-6807-41ce-b6b5-4a781ea0c48b\") " pod="openshift-marketplace/redhat-operators-9x84l" Dec 03 13:22:34 crc kubenswrapper[4986]: I1203 13:22:34.062333 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dcb510a-6807-41ce-b6b5-4a781ea0c48b-catalog-content\") pod \"redhat-operators-9x84l\" (UID: \"0dcb510a-6807-41ce-b6b5-4a781ea0c48b\") " pod="openshift-marketplace/redhat-operators-9x84l" Dec 03 13:22:34 crc kubenswrapper[4986]: I1203 13:22:34.164255 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dcb510a-6807-41ce-b6b5-4a781ea0c48b-utilities\") pod \"redhat-operators-9x84l\" (UID: \"0dcb510a-6807-41ce-b6b5-4a781ea0c48b\") " pod="openshift-marketplace/redhat-operators-9x84l" Dec 03 13:22:34 crc kubenswrapper[4986]: I1203 13:22:34.164337 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dcb510a-6807-41ce-b6b5-4a781ea0c48b-catalog-content\") pod \"redhat-operators-9x84l\" (UID: \"0dcb510a-6807-41ce-b6b5-4a781ea0c48b\") " pod="openshift-marketplace/redhat-operators-9x84l" Dec 03 13:22:34 crc kubenswrapper[4986]: I1203 13:22:34.164416 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ms8v\" (UniqueName: \"kubernetes.io/projected/0dcb510a-6807-41ce-b6b5-4a781ea0c48b-kube-api-access-2ms8v\") pod \"redhat-operators-9x84l\" (UID: \"0dcb510a-6807-41ce-b6b5-4a781ea0c48b\") " pod="openshift-marketplace/redhat-operators-9x84l" Dec 03 13:22:34 crc kubenswrapper[4986]: I1203 13:22:34.164878 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dcb510a-6807-41ce-b6b5-4a781ea0c48b-utilities\") pod \"redhat-operators-9x84l\" (UID: \"0dcb510a-6807-41ce-b6b5-4a781ea0c48b\") " pod="openshift-marketplace/redhat-operators-9x84l" Dec 03 13:22:34 crc kubenswrapper[4986]: I1203 13:22:34.165190 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dcb510a-6807-41ce-b6b5-4a781ea0c48b-catalog-content\") pod \"redhat-operators-9x84l\" (UID: \"0dcb510a-6807-41ce-b6b5-4a781ea0c48b\") " pod="openshift-marketplace/redhat-operators-9x84l" Dec 03 13:22:34 crc kubenswrapper[4986]: I1203 13:22:34.184037 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ms8v\" (UniqueName: \"kubernetes.io/projected/0dcb510a-6807-41ce-b6b5-4a781ea0c48b-kube-api-access-2ms8v\") pod \"redhat-operators-9x84l\" (UID: \"0dcb510a-6807-41ce-b6b5-4a781ea0c48b\") " pod="openshift-marketplace/redhat-operators-9x84l" Dec 03 13:22:34 crc kubenswrapper[4986]: I1203 13:22:34.310348 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9x84l" Dec 03 13:22:34 crc kubenswrapper[4986]: I1203 13:22:34.776599 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9x84l"] Dec 03 13:22:35 crc kubenswrapper[4986]: I1203 13:22:35.673511 4986 generic.go:334] "Generic (PLEG): container finished" podID="0dcb510a-6807-41ce-b6b5-4a781ea0c48b" containerID="48b883b63c018c8679fb08fe0b8de76020d43f15dd1846f053e29b4d619d3b88" exitCode=0 Dec 03 13:22:35 crc kubenswrapper[4986]: I1203 13:22:35.673801 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9x84l" event={"ID":"0dcb510a-6807-41ce-b6b5-4a781ea0c48b","Type":"ContainerDied","Data":"48b883b63c018c8679fb08fe0b8de76020d43f15dd1846f053e29b4d619d3b88"} Dec 03 13:22:35 crc kubenswrapper[4986]: I1203 13:22:35.673862 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9x84l" event={"ID":"0dcb510a-6807-41ce-b6b5-4a781ea0c48b","Type":"ContainerStarted","Data":"4b85a304db244f4152fe79c2f66f9c93c72d5b43ef5203ece99d527a8560a39c"} Dec 03 13:22:36 crc kubenswrapper[4986]: I1203 13:22:36.685176 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9x84l" event={"ID":"0dcb510a-6807-41ce-b6b5-4a781ea0c48b","Type":"ContainerStarted","Data":"cfbce7872f1ac69b1e073946243041df1429373d89185eaa4112d37b8ef085b1"} Dec 03 13:22:37 crc kubenswrapper[4986]: I1203 13:22:37.696396 4986 generic.go:334] "Generic (PLEG): container finished" podID="0dcb510a-6807-41ce-b6b5-4a781ea0c48b" containerID="cfbce7872f1ac69b1e073946243041df1429373d89185eaa4112d37b8ef085b1" exitCode=0 Dec 03 13:22:37 crc kubenswrapper[4986]: I1203 13:22:37.696506 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9x84l" 
event={"ID":"0dcb510a-6807-41ce-b6b5-4a781ea0c48b","Type":"ContainerDied","Data":"cfbce7872f1ac69b1e073946243041df1429373d89185eaa4112d37b8ef085b1"} Dec 03 13:22:38 crc kubenswrapper[4986]: I1203 13:22:38.708573 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9x84l" event={"ID":"0dcb510a-6807-41ce-b6b5-4a781ea0c48b","Type":"ContainerStarted","Data":"481be32139949b31a488ad9374a2a505a90c4add8c0c52e78f8eb33b27d6f8ca"} Dec 03 13:22:38 crc kubenswrapper[4986]: I1203 13:22:38.744125 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9x84l" podStartSLOduration=3.192357885 podStartE2EDuration="5.744101929s" podCreationTimestamp="2025-12-03 13:22:33 +0000 UTC" firstStartedPulling="2025-12-03 13:22:35.675475527 +0000 UTC m=+1615.141906718" lastFinishedPulling="2025-12-03 13:22:38.227219571 +0000 UTC m=+1617.693650762" observedRunningTime="2025-12-03 13:22:38.724946592 +0000 UTC m=+1618.191377783" watchObservedRunningTime="2025-12-03 13:22:38.744101929 +0000 UTC m=+1618.210533120" Dec 03 13:22:39 crc kubenswrapper[4986]: I1203 13:22:39.943116 4986 scope.go:117] "RemoveContainer" containerID="3ba053774b99510def9eb1096d5851d891d9961deee4a7966a8cfbf9b5fbf159" Dec 03 13:22:39 crc kubenswrapper[4986]: E1203 13:22:39.943658 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:22:43 crc kubenswrapper[4986]: I1203 13:22:43.754706 4986 generic.go:334] "Generic (PLEG): container finished" podID="67da7713-f27f-48cb-a2f1-4ebea4d2f939" 
containerID="87177d16200ea147231735c829b55d78efc758ad7e635bb2344722fc5c59befe" exitCode=0 Dec 03 13:22:43 crc kubenswrapper[4986]: I1203 13:22:43.754787 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"67da7713-f27f-48cb-a2f1-4ebea4d2f939","Type":"ContainerDied","Data":"87177d16200ea147231735c829b55d78efc758ad7e635bb2344722fc5c59befe"} Dec 03 13:22:43 crc kubenswrapper[4986]: I1203 13:22:43.758971 4986 generic.go:334] "Generic (PLEG): container finished" podID="e06b4596-d4ac-4524-a521-ae6edfc239be" containerID="59e7e865fbeb6230f0e51da56a52c9631c407ec482955660c911ca06de3f87c8" exitCode=0 Dec 03 13:22:43 crc kubenswrapper[4986]: I1203 13:22:43.759013 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e06b4596-d4ac-4524-a521-ae6edfc239be","Type":"ContainerDied","Data":"59e7e865fbeb6230f0e51da56a52c9631c407ec482955660c911ca06de3f87c8"} Dec 03 13:22:43 crc kubenswrapper[4986]: I1203 13:22:43.819109 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qpl5d"] Dec 03 13:22:43 crc kubenswrapper[4986]: I1203 13:22:43.820927 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qpl5d" Dec 03 13:22:43 crc kubenswrapper[4986]: I1203 13:22:43.823383 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jr75v" Dec 03 13:22:43 crc kubenswrapper[4986]: I1203 13:22:43.823616 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 13:22:43 crc kubenswrapper[4986]: I1203 13:22:43.823740 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 13:22:43 crc kubenswrapper[4986]: I1203 13:22:43.824098 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 13:22:43 crc kubenswrapper[4986]: I1203 13:22:43.832709 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qpl5d"] Dec 03 13:22:43 crc kubenswrapper[4986]: I1203 13:22:43.957706 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrmt5\" (UniqueName: \"kubernetes.io/projected/d9886573-7ee1-4a0b-a6d2-f8621fdabf83-kube-api-access-hrmt5\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qpl5d\" (UID: \"d9886573-7ee1-4a0b-a6d2-f8621fdabf83\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qpl5d" Dec 03 13:22:43 crc kubenswrapper[4986]: I1203 13:22:43.958393 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d9886573-7ee1-4a0b-a6d2-f8621fdabf83-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qpl5d\" (UID: \"d9886573-7ee1-4a0b-a6d2-f8621fdabf83\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qpl5d" Dec 03 13:22:43 crc kubenswrapper[4986]: I1203 13:22:43.958570 4986 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9886573-7ee1-4a0b-a6d2-f8621fdabf83-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qpl5d\" (UID: \"d9886573-7ee1-4a0b-a6d2-f8621fdabf83\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qpl5d" Dec 03 13:22:43 crc kubenswrapper[4986]: I1203 13:22:43.958671 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9886573-7ee1-4a0b-a6d2-f8621fdabf83-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qpl5d\" (UID: \"d9886573-7ee1-4a0b-a6d2-f8621fdabf83\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qpl5d" Dec 03 13:22:44 crc kubenswrapper[4986]: I1203 13:22:44.060838 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d9886573-7ee1-4a0b-a6d2-f8621fdabf83-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qpl5d\" (UID: \"d9886573-7ee1-4a0b-a6d2-f8621fdabf83\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qpl5d" Dec 03 13:22:44 crc kubenswrapper[4986]: I1203 13:22:44.060911 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9886573-7ee1-4a0b-a6d2-f8621fdabf83-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qpl5d\" (UID: \"d9886573-7ee1-4a0b-a6d2-f8621fdabf83\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qpl5d" Dec 03 13:22:44 crc kubenswrapper[4986]: I1203 13:22:44.060928 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9886573-7ee1-4a0b-a6d2-f8621fdabf83-repo-setup-combined-ca-bundle\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-qpl5d\" (UID: \"d9886573-7ee1-4a0b-a6d2-f8621fdabf83\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qpl5d" Dec 03 13:22:44 crc kubenswrapper[4986]: I1203 13:22:44.060983 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrmt5\" (UniqueName: \"kubernetes.io/projected/d9886573-7ee1-4a0b-a6d2-f8621fdabf83-kube-api-access-hrmt5\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qpl5d\" (UID: \"d9886573-7ee1-4a0b-a6d2-f8621fdabf83\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qpl5d" Dec 03 13:22:44 crc kubenswrapper[4986]: I1203 13:22:44.067784 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9886573-7ee1-4a0b-a6d2-f8621fdabf83-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qpl5d\" (UID: \"d9886573-7ee1-4a0b-a6d2-f8621fdabf83\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qpl5d" Dec 03 13:22:44 crc kubenswrapper[4986]: I1203 13:22:44.068165 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d9886573-7ee1-4a0b-a6d2-f8621fdabf83-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qpl5d\" (UID: \"d9886573-7ee1-4a0b-a6d2-f8621fdabf83\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qpl5d" Dec 03 13:22:44 crc kubenswrapper[4986]: I1203 13:22:44.068829 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9886573-7ee1-4a0b-a6d2-f8621fdabf83-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qpl5d\" (UID: \"d9886573-7ee1-4a0b-a6d2-f8621fdabf83\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qpl5d" Dec 03 13:22:44 crc kubenswrapper[4986]: I1203 13:22:44.084469 4986 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrmt5\" (UniqueName: \"kubernetes.io/projected/d9886573-7ee1-4a0b-a6d2-f8621fdabf83-kube-api-access-hrmt5\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qpl5d\" (UID: \"d9886573-7ee1-4a0b-a6d2-f8621fdabf83\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qpl5d" Dec 03 13:22:44 crc kubenswrapper[4986]: I1203 13:22:44.308444 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qpl5d" Dec 03 13:22:44 crc kubenswrapper[4986]: I1203 13:22:44.310523 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9x84l" Dec 03 13:22:44 crc kubenswrapper[4986]: I1203 13:22:44.310790 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9x84l" Dec 03 13:22:44 crc kubenswrapper[4986]: I1203 13:22:44.361262 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9x84l" Dec 03 13:22:45 crc kubenswrapper[4986]: I1203 13:22:44.772367 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e06b4596-d4ac-4524-a521-ae6edfc239be","Type":"ContainerStarted","Data":"57fa1b30d1949a6ade0cb6274e4035a183b2c00793e49433f0f899b77f1c694b"} Dec 03 13:22:45 crc kubenswrapper[4986]: I1203 13:22:44.772819 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 03 13:22:45 crc kubenswrapper[4986]: I1203 13:22:44.774477 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"67da7713-f27f-48cb-a2f1-4ebea4d2f939","Type":"ContainerStarted","Data":"d25ab0e34403e6b69f1ac0878f3911982bab251cda9e2df748b9cab793cb21e5"} Dec 03 13:22:45 crc kubenswrapper[4986]: I1203 13:22:44.774790 4986 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 03 13:22:45 crc kubenswrapper[4986]: I1203 13:22:44.803146 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.803126424 podStartE2EDuration="36.803126424s" podCreationTimestamp="2025-12-03 13:22:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:22:44.794533182 +0000 UTC m=+1624.260964383" watchObservedRunningTime="2025-12-03 13:22:44.803126424 +0000 UTC m=+1624.269557615" Dec 03 13:22:45 crc kubenswrapper[4986]: I1203 13:22:44.830364 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9x84l" Dec 03 13:22:45 crc kubenswrapper[4986]: I1203 13:22:44.831455 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.831434548 podStartE2EDuration="37.831434548s" podCreationTimestamp="2025-12-03 13:22:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:22:44.815535929 +0000 UTC m=+1624.281967160" watchObservedRunningTime="2025-12-03 13:22:44.831434548 +0000 UTC m=+1624.297865739" Dec 03 13:22:45 crc kubenswrapper[4986]: I1203 13:22:44.875981 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9x84l"] Dec 03 13:22:46 crc kubenswrapper[4986]: I1203 13:22:46.277561 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qpl5d"] Dec 03 13:22:46 crc kubenswrapper[4986]: W1203 13:22:46.282724 4986 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd9886573_7ee1_4a0b_a6d2_f8621fdabf83.slice/crio-fdf6d9e6e7b48849df2bd549eb2b5af24c75fd8f9866254be7f13945f220936e WatchSource:0}: Error finding container fdf6d9e6e7b48849df2bd549eb2b5af24c75fd8f9866254be7f13945f220936e: Status 404 returned error can't find the container with id fdf6d9e6e7b48849df2bd549eb2b5af24c75fd8f9866254be7f13945f220936e Dec 03 13:22:46 crc kubenswrapper[4986]: I1203 13:22:46.792062 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qpl5d" event={"ID":"d9886573-7ee1-4a0b-a6d2-f8621fdabf83","Type":"ContainerStarted","Data":"fdf6d9e6e7b48849df2bd549eb2b5af24c75fd8f9866254be7f13945f220936e"} Dec 03 13:22:46 crc kubenswrapper[4986]: I1203 13:22:46.792188 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9x84l" podUID="0dcb510a-6807-41ce-b6b5-4a781ea0c48b" containerName="registry-server" containerID="cri-o://481be32139949b31a488ad9374a2a505a90c4add8c0c52e78f8eb33b27d6f8ca" gracePeriod=2 Dec 03 13:22:47 crc kubenswrapper[4986]: I1203 13:22:47.267251 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9x84l" Dec 03 13:22:47 crc kubenswrapper[4986]: I1203 13:22:47.323327 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dcb510a-6807-41ce-b6b5-4a781ea0c48b-utilities\") pod \"0dcb510a-6807-41ce-b6b5-4a781ea0c48b\" (UID: \"0dcb510a-6807-41ce-b6b5-4a781ea0c48b\") " Dec 03 13:22:47 crc kubenswrapper[4986]: I1203 13:22:47.323382 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ms8v\" (UniqueName: \"kubernetes.io/projected/0dcb510a-6807-41ce-b6b5-4a781ea0c48b-kube-api-access-2ms8v\") pod \"0dcb510a-6807-41ce-b6b5-4a781ea0c48b\" (UID: \"0dcb510a-6807-41ce-b6b5-4a781ea0c48b\") " Dec 03 13:22:47 crc kubenswrapper[4986]: I1203 13:22:47.323469 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dcb510a-6807-41ce-b6b5-4a781ea0c48b-catalog-content\") pod \"0dcb510a-6807-41ce-b6b5-4a781ea0c48b\" (UID: \"0dcb510a-6807-41ce-b6b5-4a781ea0c48b\") " Dec 03 13:22:47 crc kubenswrapper[4986]: I1203 13:22:47.325711 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0dcb510a-6807-41ce-b6b5-4a781ea0c48b-utilities" (OuterVolumeSpecName: "utilities") pod "0dcb510a-6807-41ce-b6b5-4a781ea0c48b" (UID: "0dcb510a-6807-41ce-b6b5-4a781ea0c48b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:22:47 crc kubenswrapper[4986]: I1203 13:22:47.345323 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dcb510a-6807-41ce-b6b5-4a781ea0c48b-kube-api-access-2ms8v" (OuterVolumeSpecName: "kube-api-access-2ms8v") pod "0dcb510a-6807-41ce-b6b5-4a781ea0c48b" (UID: "0dcb510a-6807-41ce-b6b5-4a781ea0c48b"). InnerVolumeSpecName "kube-api-access-2ms8v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:22:47 crc kubenswrapper[4986]: I1203 13:22:47.425410 4986 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dcb510a-6807-41ce-b6b5-4a781ea0c48b-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 13:22:47 crc kubenswrapper[4986]: I1203 13:22:47.425701 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ms8v\" (UniqueName: \"kubernetes.io/projected/0dcb510a-6807-41ce-b6b5-4a781ea0c48b-kube-api-access-2ms8v\") on node \"crc\" DevicePath \"\"" Dec 03 13:22:47 crc kubenswrapper[4986]: I1203 13:22:47.456395 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0dcb510a-6807-41ce-b6b5-4a781ea0c48b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0dcb510a-6807-41ce-b6b5-4a781ea0c48b" (UID: "0dcb510a-6807-41ce-b6b5-4a781ea0c48b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:22:47 crc kubenswrapper[4986]: I1203 13:22:47.528032 4986 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dcb510a-6807-41ce-b6b5-4a781ea0c48b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 13:22:47 crc kubenswrapper[4986]: I1203 13:22:47.807283 4986 generic.go:334] "Generic (PLEG): container finished" podID="0dcb510a-6807-41ce-b6b5-4a781ea0c48b" containerID="481be32139949b31a488ad9374a2a505a90c4add8c0c52e78f8eb33b27d6f8ca" exitCode=0 Dec 03 13:22:47 crc kubenswrapper[4986]: I1203 13:22:47.807367 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9x84l" event={"ID":"0dcb510a-6807-41ce-b6b5-4a781ea0c48b","Type":"ContainerDied","Data":"481be32139949b31a488ad9374a2a505a90c4add8c0c52e78f8eb33b27d6f8ca"} Dec 03 13:22:47 crc kubenswrapper[4986]: I1203 13:22:47.807393 4986 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9x84l" Dec 03 13:22:47 crc kubenswrapper[4986]: I1203 13:22:47.807428 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9x84l" event={"ID":"0dcb510a-6807-41ce-b6b5-4a781ea0c48b","Type":"ContainerDied","Data":"4b85a304db244f4152fe79c2f66f9c93c72d5b43ef5203ece99d527a8560a39c"} Dec 03 13:22:47 crc kubenswrapper[4986]: I1203 13:22:47.807451 4986 scope.go:117] "RemoveContainer" containerID="481be32139949b31a488ad9374a2a505a90c4add8c0c52e78f8eb33b27d6f8ca" Dec 03 13:22:47 crc kubenswrapper[4986]: I1203 13:22:47.838531 4986 scope.go:117] "RemoveContainer" containerID="cfbce7872f1ac69b1e073946243041df1429373d89185eaa4112d37b8ef085b1" Dec 03 13:22:47 crc kubenswrapper[4986]: I1203 13:22:47.850557 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9x84l"] Dec 03 13:22:47 crc kubenswrapper[4986]: I1203 13:22:47.860637 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9x84l"] Dec 03 13:22:47 crc kubenswrapper[4986]: I1203 13:22:47.874092 4986 scope.go:117] "RemoveContainer" containerID="48b883b63c018c8679fb08fe0b8de76020d43f15dd1846f053e29b4d619d3b88" Dec 03 13:22:47 crc kubenswrapper[4986]: I1203 13:22:47.916404 4986 scope.go:117] "RemoveContainer" containerID="481be32139949b31a488ad9374a2a505a90c4add8c0c52e78f8eb33b27d6f8ca" Dec 03 13:22:47 crc kubenswrapper[4986]: E1203 13:22:47.917225 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"481be32139949b31a488ad9374a2a505a90c4add8c0c52e78f8eb33b27d6f8ca\": container with ID starting with 481be32139949b31a488ad9374a2a505a90c4add8c0c52e78f8eb33b27d6f8ca not found: ID does not exist" containerID="481be32139949b31a488ad9374a2a505a90c4add8c0c52e78f8eb33b27d6f8ca" Dec 03 13:22:47 crc kubenswrapper[4986]: I1203 13:22:47.917266 4986 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"481be32139949b31a488ad9374a2a505a90c4add8c0c52e78f8eb33b27d6f8ca"} err="failed to get container status \"481be32139949b31a488ad9374a2a505a90c4add8c0c52e78f8eb33b27d6f8ca\": rpc error: code = NotFound desc = could not find container \"481be32139949b31a488ad9374a2a505a90c4add8c0c52e78f8eb33b27d6f8ca\": container with ID starting with 481be32139949b31a488ad9374a2a505a90c4add8c0c52e78f8eb33b27d6f8ca not found: ID does not exist" Dec 03 13:22:47 crc kubenswrapper[4986]: I1203 13:22:47.917305 4986 scope.go:117] "RemoveContainer" containerID="cfbce7872f1ac69b1e073946243041df1429373d89185eaa4112d37b8ef085b1" Dec 03 13:22:47 crc kubenswrapper[4986]: E1203 13:22:47.917679 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfbce7872f1ac69b1e073946243041df1429373d89185eaa4112d37b8ef085b1\": container with ID starting with cfbce7872f1ac69b1e073946243041df1429373d89185eaa4112d37b8ef085b1 not found: ID does not exist" containerID="cfbce7872f1ac69b1e073946243041df1429373d89185eaa4112d37b8ef085b1" Dec 03 13:22:47 crc kubenswrapper[4986]: I1203 13:22:47.917722 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfbce7872f1ac69b1e073946243041df1429373d89185eaa4112d37b8ef085b1"} err="failed to get container status \"cfbce7872f1ac69b1e073946243041df1429373d89185eaa4112d37b8ef085b1\": rpc error: code = NotFound desc = could not find container \"cfbce7872f1ac69b1e073946243041df1429373d89185eaa4112d37b8ef085b1\": container with ID starting with cfbce7872f1ac69b1e073946243041df1429373d89185eaa4112d37b8ef085b1 not found: ID does not exist" Dec 03 13:22:47 crc kubenswrapper[4986]: I1203 13:22:47.917750 4986 scope.go:117] "RemoveContainer" containerID="48b883b63c018c8679fb08fe0b8de76020d43f15dd1846f053e29b4d619d3b88" Dec 03 13:22:47 crc kubenswrapper[4986]: E1203 
13:22:47.918097 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48b883b63c018c8679fb08fe0b8de76020d43f15dd1846f053e29b4d619d3b88\": container with ID starting with 48b883b63c018c8679fb08fe0b8de76020d43f15dd1846f053e29b4d619d3b88 not found: ID does not exist" containerID="48b883b63c018c8679fb08fe0b8de76020d43f15dd1846f053e29b4d619d3b88" Dec 03 13:22:47 crc kubenswrapper[4986]: I1203 13:22:47.918127 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48b883b63c018c8679fb08fe0b8de76020d43f15dd1846f053e29b4d619d3b88"} err="failed to get container status \"48b883b63c018c8679fb08fe0b8de76020d43f15dd1846f053e29b4d619d3b88\": rpc error: code = NotFound desc = could not find container \"48b883b63c018c8679fb08fe0b8de76020d43f15dd1846f053e29b4d619d3b88\": container with ID starting with 48b883b63c018c8679fb08fe0b8de76020d43f15dd1846f053e29b4d619d3b88 not found: ID does not exist" Dec 03 13:22:48 crc kubenswrapper[4986]: I1203 13:22:48.959651 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dcb510a-6807-41ce-b6b5-4a781ea0c48b" path="/var/lib/kubelet/pods/0dcb510a-6807-41ce-b6b5-4a781ea0c48b/volumes" Dec 03 13:22:51 crc kubenswrapper[4986]: I1203 13:22:51.943525 4986 scope.go:117] "RemoveContainer" containerID="3ba053774b99510def9eb1096d5851d891d9961deee4a7966a8cfbf9b5fbf159" Dec 03 13:22:51 crc kubenswrapper[4986]: E1203 13:22:51.944355 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:22:54 crc kubenswrapper[4986]: I1203 13:22:54.509617 
4986 scope.go:117] "RemoveContainer" containerID="421818a60f44605b9b746518d35de5d7105048f493fbefed3e5df88142b445f0" Dec 03 13:22:55 crc kubenswrapper[4986]: I1203 13:22:55.232551 4986 scope.go:117] "RemoveContainer" containerID="698c470fb9b32a1eb83707a21d64e144815eb224820f9d502bef562fa5008fc3" Dec 03 13:22:56 crc kubenswrapper[4986]: I1203 13:22:56.899672 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qpl5d" event={"ID":"d9886573-7ee1-4a0b-a6d2-f8621fdabf83","Type":"ContainerStarted","Data":"09146132b8e5acb9ae6ffab181fd4aa663ea5ede5f04546c4b1e306cfa4265c3"} Dec 03 13:22:56 crc kubenswrapper[4986]: I1203 13:22:56.925529 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qpl5d" podStartSLOduration=4.913596875 podStartE2EDuration="13.92550995s" podCreationTimestamp="2025-12-03 13:22:43 +0000 UTC" firstStartedPulling="2025-12-03 13:22:46.285137753 +0000 UTC m=+1625.751568944" lastFinishedPulling="2025-12-03 13:22:55.297050828 +0000 UTC m=+1634.763482019" observedRunningTime="2025-12-03 13:22:56.91475165 +0000 UTC m=+1636.381182841" watchObservedRunningTime="2025-12-03 13:22:56.92550995 +0000 UTC m=+1636.391941141" Dec 03 13:22:57 crc kubenswrapper[4986]: I1203 13:22:57.885534 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 03 13:22:58 crc kubenswrapper[4986]: I1203 13:22:58.908179 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 03 13:23:05 crc kubenswrapper[4986]: I1203 13:23:05.943702 4986 scope.go:117] "RemoveContainer" containerID="3ba053774b99510def9eb1096d5851d891d9961deee4a7966a8cfbf9b5fbf159" Dec 03 13:23:05 crc kubenswrapper[4986]: E1203 13:23:05.946088 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:23:07 crc kubenswrapper[4986]: I1203 13:23:07.022536 4986 generic.go:334] "Generic (PLEG): container finished" podID="d9886573-7ee1-4a0b-a6d2-f8621fdabf83" containerID="09146132b8e5acb9ae6ffab181fd4aa663ea5ede5f04546c4b1e306cfa4265c3" exitCode=0 Dec 03 13:23:07 crc kubenswrapper[4986]: I1203 13:23:07.022657 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qpl5d" event={"ID":"d9886573-7ee1-4a0b-a6d2-f8621fdabf83","Type":"ContainerDied","Data":"09146132b8e5acb9ae6ffab181fd4aa663ea5ede5f04546c4b1e306cfa4265c3"} Dec 03 13:23:08 crc kubenswrapper[4986]: I1203 13:23:08.515053 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qpl5d" Dec 03 13:23:08 crc kubenswrapper[4986]: I1203 13:23:08.675226 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrmt5\" (UniqueName: \"kubernetes.io/projected/d9886573-7ee1-4a0b-a6d2-f8621fdabf83-kube-api-access-hrmt5\") pod \"d9886573-7ee1-4a0b-a6d2-f8621fdabf83\" (UID: \"d9886573-7ee1-4a0b-a6d2-f8621fdabf83\") " Dec 03 13:23:08 crc kubenswrapper[4986]: I1203 13:23:08.675372 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9886573-7ee1-4a0b-a6d2-f8621fdabf83-repo-setup-combined-ca-bundle\") pod \"d9886573-7ee1-4a0b-a6d2-f8621fdabf83\" (UID: \"d9886573-7ee1-4a0b-a6d2-f8621fdabf83\") " Dec 03 13:23:08 crc kubenswrapper[4986]: I1203 13:23:08.675715 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d9886573-7ee1-4a0b-a6d2-f8621fdabf83-ssh-key\") pod \"d9886573-7ee1-4a0b-a6d2-f8621fdabf83\" (UID: \"d9886573-7ee1-4a0b-a6d2-f8621fdabf83\") " Dec 03 13:23:08 crc kubenswrapper[4986]: I1203 13:23:08.675809 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9886573-7ee1-4a0b-a6d2-f8621fdabf83-inventory\") pod \"d9886573-7ee1-4a0b-a6d2-f8621fdabf83\" (UID: \"d9886573-7ee1-4a0b-a6d2-f8621fdabf83\") " Dec 03 13:23:08 crc kubenswrapper[4986]: I1203 13:23:08.681765 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9886573-7ee1-4a0b-a6d2-f8621fdabf83-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "d9886573-7ee1-4a0b-a6d2-f8621fdabf83" (UID: "d9886573-7ee1-4a0b-a6d2-f8621fdabf83"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:23:08 crc kubenswrapper[4986]: I1203 13:23:08.682072 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9886573-7ee1-4a0b-a6d2-f8621fdabf83-kube-api-access-hrmt5" (OuterVolumeSpecName: "kube-api-access-hrmt5") pod "d9886573-7ee1-4a0b-a6d2-f8621fdabf83" (UID: "d9886573-7ee1-4a0b-a6d2-f8621fdabf83"). InnerVolumeSpecName "kube-api-access-hrmt5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:23:08 crc kubenswrapper[4986]: I1203 13:23:08.705816 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9886573-7ee1-4a0b-a6d2-f8621fdabf83-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d9886573-7ee1-4a0b-a6d2-f8621fdabf83" (UID: "d9886573-7ee1-4a0b-a6d2-f8621fdabf83"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:23:08 crc kubenswrapper[4986]: I1203 13:23:08.727154 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9886573-7ee1-4a0b-a6d2-f8621fdabf83-inventory" (OuterVolumeSpecName: "inventory") pod "d9886573-7ee1-4a0b-a6d2-f8621fdabf83" (UID: "d9886573-7ee1-4a0b-a6d2-f8621fdabf83"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:23:08 crc kubenswrapper[4986]: I1203 13:23:08.778570 4986 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d9886573-7ee1-4a0b-a6d2-f8621fdabf83-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 13:23:08 crc kubenswrapper[4986]: I1203 13:23:08.778614 4986 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9886573-7ee1-4a0b-a6d2-f8621fdabf83-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 13:23:08 crc kubenswrapper[4986]: I1203 13:23:08.778627 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrmt5\" (UniqueName: \"kubernetes.io/projected/d9886573-7ee1-4a0b-a6d2-f8621fdabf83-kube-api-access-hrmt5\") on node \"crc\" DevicePath \"\"" Dec 03 13:23:08 crc kubenswrapper[4986]: I1203 13:23:08.778643 4986 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9886573-7ee1-4a0b-a6d2-f8621fdabf83-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 13:23:09 crc kubenswrapper[4986]: I1203 13:23:09.045540 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qpl5d" event={"ID":"d9886573-7ee1-4a0b-a6d2-f8621fdabf83","Type":"ContainerDied","Data":"fdf6d9e6e7b48849df2bd549eb2b5af24c75fd8f9866254be7f13945f220936e"} Dec 03 13:23:09 crc kubenswrapper[4986]: I1203 13:23:09.045595 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fdf6d9e6e7b48849df2bd549eb2b5af24c75fd8f9866254be7f13945f220936e" Dec 03 13:23:09 crc kubenswrapper[4986]: I1203 13:23:09.045596 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qpl5d" Dec 03 13:23:09 crc kubenswrapper[4986]: I1203 13:23:09.132400 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-c9qzr"] Dec 03 13:23:09 crc kubenswrapper[4986]: E1203 13:23:09.132922 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dcb510a-6807-41ce-b6b5-4a781ea0c48b" containerName="extract-content" Dec 03 13:23:09 crc kubenswrapper[4986]: I1203 13:23:09.132937 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dcb510a-6807-41ce-b6b5-4a781ea0c48b" containerName="extract-content" Dec 03 13:23:09 crc kubenswrapper[4986]: E1203 13:23:09.132953 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dcb510a-6807-41ce-b6b5-4a781ea0c48b" containerName="extract-utilities" Dec 03 13:23:09 crc kubenswrapper[4986]: I1203 13:23:09.132962 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dcb510a-6807-41ce-b6b5-4a781ea0c48b" containerName="extract-utilities" Dec 03 13:23:09 crc kubenswrapper[4986]: E1203 13:23:09.132988 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9886573-7ee1-4a0b-a6d2-f8621fdabf83" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 03 13:23:09 crc kubenswrapper[4986]: I1203 13:23:09.132998 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9886573-7ee1-4a0b-a6d2-f8621fdabf83" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 03 13:23:09 crc kubenswrapper[4986]: E1203 13:23:09.133012 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dcb510a-6807-41ce-b6b5-4a781ea0c48b" containerName="registry-server" Dec 03 13:23:09 crc kubenswrapper[4986]: I1203 13:23:09.133019 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dcb510a-6807-41ce-b6b5-4a781ea0c48b" containerName="registry-server" Dec 03 13:23:09 crc kubenswrapper[4986]: I1203 13:23:09.133241 4986 
memory_manager.go:354] "RemoveStaleState removing state" podUID="d9886573-7ee1-4a0b-a6d2-f8621fdabf83" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 03 13:23:09 crc kubenswrapper[4986]: I1203 13:23:09.133264 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dcb510a-6807-41ce-b6b5-4a781ea0c48b" containerName="registry-server" Dec 03 13:23:09 crc kubenswrapper[4986]: I1203 13:23:09.134097 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-c9qzr" Dec 03 13:23:09 crc kubenswrapper[4986]: I1203 13:23:09.136059 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 13:23:09 crc kubenswrapper[4986]: I1203 13:23:09.136198 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jr75v" Dec 03 13:23:09 crc kubenswrapper[4986]: I1203 13:23:09.137069 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 13:23:09 crc kubenswrapper[4986]: I1203 13:23:09.141628 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-c9qzr"] Dec 03 13:23:09 crc kubenswrapper[4986]: I1203 13:23:09.148371 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 13:23:09 crc kubenswrapper[4986]: I1203 13:23:09.188147 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxjvn\" (UniqueName: \"kubernetes.io/projected/931d4925-ed6c-4a1f-8b14-4e726641d115-kube-api-access-kxjvn\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-c9qzr\" (UID: \"931d4925-ed6c-4a1f-8b14-4e726641d115\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-c9qzr" Dec 03 13:23:09 crc kubenswrapper[4986]: I1203 
13:23:09.188252 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/931d4925-ed6c-4a1f-8b14-4e726641d115-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-c9qzr\" (UID: \"931d4925-ed6c-4a1f-8b14-4e726641d115\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-c9qzr" Dec 03 13:23:09 crc kubenswrapper[4986]: I1203 13:23:09.188385 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/931d4925-ed6c-4a1f-8b14-4e726641d115-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-c9qzr\" (UID: \"931d4925-ed6c-4a1f-8b14-4e726641d115\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-c9qzr" Dec 03 13:23:09 crc kubenswrapper[4986]: I1203 13:23:09.290269 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxjvn\" (UniqueName: \"kubernetes.io/projected/931d4925-ed6c-4a1f-8b14-4e726641d115-kube-api-access-kxjvn\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-c9qzr\" (UID: \"931d4925-ed6c-4a1f-8b14-4e726641d115\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-c9qzr" Dec 03 13:23:09 crc kubenswrapper[4986]: I1203 13:23:09.290455 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/931d4925-ed6c-4a1f-8b14-4e726641d115-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-c9qzr\" (UID: \"931d4925-ed6c-4a1f-8b14-4e726641d115\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-c9qzr" Dec 03 13:23:09 crc kubenswrapper[4986]: I1203 13:23:09.290532 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/931d4925-ed6c-4a1f-8b14-4e726641d115-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-c9qzr\" (UID: 
\"931d4925-ed6c-4a1f-8b14-4e726641d115\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-c9qzr" Dec 03 13:23:09 crc kubenswrapper[4986]: I1203 13:23:09.294947 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/931d4925-ed6c-4a1f-8b14-4e726641d115-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-c9qzr\" (UID: \"931d4925-ed6c-4a1f-8b14-4e726641d115\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-c9qzr" Dec 03 13:23:09 crc kubenswrapper[4986]: I1203 13:23:09.296314 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/931d4925-ed6c-4a1f-8b14-4e726641d115-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-c9qzr\" (UID: \"931d4925-ed6c-4a1f-8b14-4e726641d115\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-c9qzr" Dec 03 13:23:09 crc kubenswrapper[4986]: I1203 13:23:09.313665 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxjvn\" (UniqueName: \"kubernetes.io/projected/931d4925-ed6c-4a1f-8b14-4e726641d115-kube-api-access-kxjvn\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-c9qzr\" (UID: \"931d4925-ed6c-4a1f-8b14-4e726641d115\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-c9qzr" Dec 03 13:23:09 crc kubenswrapper[4986]: I1203 13:23:09.459023 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-c9qzr" Dec 03 13:23:10 crc kubenswrapper[4986]: I1203 13:23:10.056262 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-c9qzr"] Dec 03 13:23:10 crc kubenswrapper[4986]: W1203 13:23:10.066201 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod931d4925_ed6c_4a1f_8b14_4e726641d115.slice/crio-38aedb406d70de9ea5a273530b82895d91b8ae76f43018b2313f95ec902f93ec WatchSource:0}: Error finding container 38aedb406d70de9ea5a273530b82895d91b8ae76f43018b2313f95ec902f93ec: Status 404 returned error can't find the container with id 38aedb406d70de9ea5a273530b82895d91b8ae76f43018b2313f95ec902f93ec Dec 03 13:23:11 crc kubenswrapper[4986]: I1203 13:23:11.062705 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-c9qzr" event={"ID":"931d4925-ed6c-4a1f-8b14-4e726641d115","Type":"ContainerStarted","Data":"93795c5353d3ddc1e7b5026e301f66dab802e3689f936e8c707bbbac7594ccf8"} Dec 03 13:23:11 crc kubenswrapper[4986]: I1203 13:23:11.063036 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-c9qzr" event={"ID":"931d4925-ed6c-4a1f-8b14-4e726641d115","Type":"ContainerStarted","Data":"38aedb406d70de9ea5a273530b82895d91b8ae76f43018b2313f95ec902f93ec"} Dec 03 13:23:14 crc kubenswrapper[4986]: I1203 13:23:14.094573 4986 generic.go:334] "Generic (PLEG): container finished" podID="931d4925-ed6c-4a1f-8b14-4e726641d115" containerID="93795c5353d3ddc1e7b5026e301f66dab802e3689f936e8c707bbbac7594ccf8" exitCode=0 Dec 03 13:23:14 crc kubenswrapper[4986]: I1203 13:23:14.094662 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-c9qzr" 
event={"ID":"931d4925-ed6c-4a1f-8b14-4e726641d115","Type":"ContainerDied","Data":"93795c5353d3ddc1e7b5026e301f66dab802e3689f936e8c707bbbac7594ccf8"} Dec 03 13:23:15 crc kubenswrapper[4986]: I1203 13:23:15.475834 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-c9qzr" Dec 03 13:23:15 crc kubenswrapper[4986]: I1203 13:23:15.501502 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/931d4925-ed6c-4a1f-8b14-4e726641d115-ssh-key\") pod \"931d4925-ed6c-4a1f-8b14-4e726641d115\" (UID: \"931d4925-ed6c-4a1f-8b14-4e726641d115\") " Dec 03 13:23:15 crc kubenswrapper[4986]: I1203 13:23:15.501547 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/931d4925-ed6c-4a1f-8b14-4e726641d115-inventory\") pod \"931d4925-ed6c-4a1f-8b14-4e726641d115\" (UID: \"931d4925-ed6c-4a1f-8b14-4e726641d115\") " Dec 03 13:23:15 crc kubenswrapper[4986]: I1203 13:23:15.501641 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxjvn\" (UniqueName: \"kubernetes.io/projected/931d4925-ed6c-4a1f-8b14-4e726641d115-kube-api-access-kxjvn\") pod \"931d4925-ed6c-4a1f-8b14-4e726641d115\" (UID: \"931d4925-ed6c-4a1f-8b14-4e726641d115\") " Dec 03 13:23:15 crc kubenswrapper[4986]: I1203 13:23:15.507595 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/931d4925-ed6c-4a1f-8b14-4e726641d115-kube-api-access-kxjvn" (OuterVolumeSpecName: "kube-api-access-kxjvn") pod "931d4925-ed6c-4a1f-8b14-4e726641d115" (UID: "931d4925-ed6c-4a1f-8b14-4e726641d115"). InnerVolumeSpecName "kube-api-access-kxjvn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:23:15 crc kubenswrapper[4986]: I1203 13:23:15.530628 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/931d4925-ed6c-4a1f-8b14-4e726641d115-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "931d4925-ed6c-4a1f-8b14-4e726641d115" (UID: "931d4925-ed6c-4a1f-8b14-4e726641d115"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:23:15 crc kubenswrapper[4986]: I1203 13:23:15.539734 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/931d4925-ed6c-4a1f-8b14-4e726641d115-inventory" (OuterVolumeSpecName: "inventory") pod "931d4925-ed6c-4a1f-8b14-4e726641d115" (UID: "931d4925-ed6c-4a1f-8b14-4e726641d115"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:23:15 crc kubenswrapper[4986]: I1203 13:23:15.603136 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxjvn\" (UniqueName: \"kubernetes.io/projected/931d4925-ed6c-4a1f-8b14-4e726641d115-kube-api-access-kxjvn\") on node \"crc\" DevicePath \"\"" Dec 03 13:23:15 crc kubenswrapper[4986]: I1203 13:23:15.603167 4986 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/931d4925-ed6c-4a1f-8b14-4e726641d115-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 13:23:15 crc kubenswrapper[4986]: I1203 13:23:15.603177 4986 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/931d4925-ed6c-4a1f-8b14-4e726641d115-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 13:23:16 crc kubenswrapper[4986]: I1203 13:23:16.144156 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-c9qzr" 
event={"ID":"931d4925-ed6c-4a1f-8b14-4e726641d115","Type":"ContainerDied","Data":"38aedb406d70de9ea5a273530b82895d91b8ae76f43018b2313f95ec902f93ec"} Dec 03 13:23:16 crc kubenswrapper[4986]: I1203 13:23:16.144543 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38aedb406d70de9ea5a273530b82895d91b8ae76f43018b2313f95ec902f93ec" Dec 03 13:23:16 crc kubenswrapper[4986]: I1203 13:23:16.144633 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-c9qzr" Dec 03 13:23:16 crc kubenswrapper[4986]: I1203 13:23:16.198497 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f4dgg"] Dec 03 13:23:16 crc kubenswrapper[4986]: E1203 13:23:16.198929 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="931d4925-ed6c-4a1f-8b14-4e726641d115" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 03 13:23:16 crc kubenswrapper[4986]: I1203 13:23:16.198950 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="931d4925-ed6c-4a1f-8b14-4e726641d115" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 03 13:23:16 crc kubenswrapper[4986]: I1203 13:23:16.199142 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="931d4925-ed6c-4a1f-8b14-4e726641d115" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 03 13:23:16 crc kubenswrapper[4986]: I1203 13:23:16.199749 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f4dgg" Dec 03 13:23:16 crc kubenswrapper[4986]: I1203 13:23:16.202104 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 13:23:16 crc kubenswrapper[4986]: I1203 13:23:16.202194 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 13:23:16 crc kubenswrapper[4986]: I1203 13:23:16.205657 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 13:23:16 crc kubenswrapper[4986]: I1203 13:23:16.205782 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jr75v" Dec 03 13:23:16 crc kubenswrapper[4986]: I1203 13:23:16.208413 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f4dgg"] Dec 03 13:23:16 crc kubenswrapper[4986]: I1203 13:23:16.233880 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b4057123-895b-4436-93ac-02902a78df76-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-f4dgg\" (UID: \"b4057123-895b-4436-93ac-02902a78df76\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f4dgg" Dec 03 13:23:16 crc kubenswrapper[4986]: I1203 13:23:16.233965 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm9xl\" (UniqueName: \"kubernetes.io/projected/b4057123-895b-4436-93ac-02902a78df76-kube-api-access-zm9xl\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-f4dgg\" (UID: \"b4057123-895b-4436-93ac-02902a78df76\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f4dgg" Dec 03 13:23:16 crc kubenswrapper[4986]: I1203 13:23:16.234161 4986 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4057123-895b-4436-93ac-02902a78df76-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-f4dgg\" (UID: \"b4057123-895b-4436-93ac-02902a78df76\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f4dgg" Dec 03 13:23:16 crc kubenswrapper[4986]: I1203 13:23:16.234306 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b4057123-895b-4436-93ac-02902a78df76-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-f4dgg\" (UID: \"b4057123-895b-4436-93ac-02902a78df76\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f4dgg" Dec 03 13:23:16 crc kubenswrapper[4986]: I1203 13:23:16.335442 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm9xl\" (UniqueName: \"kubernetes.io/projected/b4057123-895b-4436-93ac-02902a78df76-kube-api-access-zm9xl\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-f4dgg\" (UID: \"b4057123-895b-4436-93ac-02902a78df76\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f4dgg" Dec 03 13:23:16 crc kubenswrapper[4986]: I1203 13:23:16.335560 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4057123-895b-4436-93ac-02902a78df76-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-f4dgg\" (UID: \"b4057123-895b-4436-93ac-02902a78df76\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f4dgg" Dec 03 13:23:16 crc kubenswrapper[4986]: I1203 13:23:16.335618 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b4057123-895b-4436-93ac-02902a78df76-inventory\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-f4dgg\" (UID: \"b4057123-895b-4436-93ac-02902a78df76\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f4dgg" Dec 03 13:23:16 crc kubenswrapper[4986]: I1203 13:23:16.335677 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b4057123-895b-4436-93ac-02902a78df76-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-f4dgg\" (UID: \"b4057123-895b-4436-93ac-02902a78df76\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f4dgg" Dec 03 13:23:16 crc kubenswrapper[4986]: I1203 13:23:16.340702 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b4057123-895b-4436-93ac-02902a78df76-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-f4dgg\" (UID: \"b4057123-895b-4436-93ac-02902a78df76\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f4dgg" Dec 03 13:23:16 crc kubenswrapper[4986]: I1203 13:23:16.343011 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4057123-895b-4436-93ac-02902a78df76-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-f4dgg\" (UID: \"b4057123-895b-4436-93ac-02902a78df76\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f4dgg" Dec 03 13:23:16 crc kubenswrapper[4986]: I1203 13:23:16.343475 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b4057123-895b-4436-93ac-02902a78df76-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-f4dgg\" (UID: \"b4057123-895b-4436-93ac-02902a78df76\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f4dgg" Dec 03 13:23:16 crc kubenswrapper[4986]: I1203 13:23:16.354954 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-zm9xl\" (UniqueName: \"kubernetes.io/projected/b4057123-895b-4436-93ac-02902a78df76-kube-api-access-zm9xl\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-f4dgg\" (UID: \"b4057123-895b-4436-93ac-02902a78df76\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f4dgg" Dec 03 13:23:16 crc kubenswrapper[4986]: I1203 13:23:16.517345 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f4dgg" Dec 03 13:23:16 crc kubenswrapper[4986]: I1203 13:23:16.943251 4986 scope.go:117] "RemoveContainer" containerID="3ba053774b99510def9eb1096d5851d891d9961deee4a7966a8cfbf9b5fbf159" Dec 03 13:23:16 crc kubenswrapper[4986]: E1203 13:23:16.943532 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:23:17 crc kubenswrapper[4986]: I1203 13:23:17.047971 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f4dgg"] Dec 03 13:23:17 crc kubenswrapper[4986]: I1203 13:23:17.153810 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f4dgg" event={"ID":"b4057123-895b-4436-93ac-02902a78df76","Type":"ContainerStarted","Data":"fcd4df5501214669bae08e209a53e9a1accf9fb3420cb63c12bd6eaf6c664b11"} Dec 03 13:23:18 crc kubenswrapper[4986]: I1203 13:23:18.176525 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f4dgg" 
event={"ID":"b4057123-895b-4436-93ac-02902a78df76","Type":"ContainerStarted","Data":"210212e40c80d36a3470647d5cb90ee04ac67f2ee9c6b9b35eee2419886e3c18"} Dec 03 13:23:18 crc kubenswrapper[4986]: I1203 13:23:18.198911 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f4dgg" podStartSLOduration=1.746842248 podStartE2EDuration="2.198889456s" podCreationTimestamp="2025-12-03 13:23:16 +0000 UTC" firstStartedPulling="2025-12-03 13:23:17.052605835 +0000 UTC m=+1656.519037026" lastFinishedPulling="2025-12-03 13:23:17.504653043 +0000 UTC m=+1656.971084234" observedRunningTime="2025-12-03 13:23:18.195815902 +0000 UTC m=+1657.662247093" watchObservedRunningTime="2025-12-03 13:23:18.198889456 +0000 UTC m=+1657.665320647" Dec 03 13:23:27 crc kubenswrapper[4986]: I1203 13:23:27.943900 4986 scope.go:117] "RemoveContainer" containerID="3ba053774b99510def9eb1096d5851d891d9961deee4a7966a8cfbf9b5fbf159" Dec 03 13:23:27 crc kubenswrapper[4986]: E1203 13:23:27.945949 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:23:40 crc kubenswrapper[4986]: I1203 13:23:40.951090 4986 scope.go:117] "RemoveContainer" containerID="3ba053774b99510def9eb1096d5851d891d9961deee4a7966a8cfbf9b5fbf159" Dec 03 13:23:40 crc kubenswrapper[4986]: E1203 13:23:40.951876 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:23:51 crc kubenswrapper[4986]: I1203 13:23:51.943670 4986 scope.go:117] "RemoveContainer" containerID="3ba053774b99510def9eb1096d5851d891d9961deee4a7966a8cfbf9b5fbf159" Dec 03 13:23:51 crc kubenswrapper[4986]: E1203 13:23:51.944412 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:23:55 crc kubenswrapper[4986]: I1203 13:23:55.408208 4986 scope.go:117] "RemoveContainer" containerID="b951f7cf14f01a5b8c721de9534bb34469eddf3f7b318ff8fe714f44176bdb0d" Dec 03 13:24:02 crc kubenswrapper[4986]: I1203 13:24:02.942996 4986 scope.go:117] "RemoveContainer" containerID="3ba053774b99510def9eb1096d5851d891d9961deee4a7966a8cfbf9b5fbf159" Dec 03 13:24:02 crc kubenswrapper[4986]: E1203 13:24:02.943872 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:24:14 crc kubenswrapper[4986]: I1203 13:24:14.943753 4986 scope.go:117] "RemoveContainer" containerID="3ba053774b99510def9eb1096d5851d891d9961deee4a7966a8cfbf9b5fbf159" Dec 03 13:24:14 crc kubenswrapper[4986]: E1203 13:24:14.944494 4986 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:24:25 crc kubenswrapper[4986]: I1203 13:24:25.944404 4986 scope.go:117] "RemoveContainer" containerID="3ba053774b99510def9eb1096d5851d891d9961deee4a7966a8cfbf9b5fbf159" Dec 03 13:24:25 crc kubenswrapper[4986]: E1203 13:24:25.945186 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:24:38 crc kubenswrapper[4986]: I1203 13:24:38.943694 4986 scope.go:117] "RemoveContainer" containerID="3ba053774b99510def9eb1096d5851d891d9961deee4a7966a8cfbf9b5fbf159" Dec 03 13:24:38 crc kubenswrapper[4986]: E1203 13:24:38.944748 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:24:53 crc kubenswrapper[4986]: I1203 13:24:53.943962 4986 scope.go:117] "RemoveContainer" containerID="3ba053774b99510def9eb1096d5851d891d9961deee4a7966a8cfbf9b5fbf159" Dec 03 13:24:53 crc kubenswrapper[4986]: E1203 
13:24:53.944936 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:24:55 crc kubenswrapper[4986]: I1203 13:24:55.471795 4986 scope.go:117] "RemoveContainer" containerID="a7998adde4658342c5c5e5589478470b48450fa38842fde0977d1e5d2b60bb60" Dec 03 13:25:05 crc kubenswrapper[4986]: I1203 13:25:05.943916 4986 scope.go:117] "RemoveContainer" containerID="3ba053774b99510def9eb1096d5851d891d9961deee4a7966a8cfbf9b5fbf159" Dec 03 13:25:05 crc kubenswrapper[4986]: E1203 13:25:05.944727 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:25:16 crc kubenswrapper[4986]: I1203 13:25:16.943704 4986 scope.go:117] "RemoveContainer" containerID="3ba053774b99510def9eb1096d5851d891d9961deee4a7966a8cfbf9b5fbf159" Dec 03 13:25:16 crc kubenswrapper[4986]: E1203 13:25:16.944370 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:25:29 crc 
kubenswrapper[4986]: I1203 13:25:29.944124 4986 scope.go:117] "RemoveContainer" containerID="3ba053774b99510def9eb1096d5851d891d9961deee4a7966a8cfbf9b5fbf159" Dec 03 13:25:29 crc kubenswrapper[4986]: E1203 13:25:29.945077 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:25:43 crc kubenswrapper[4986]: I1203 13:25:43.944038 4986 scope.go:117] "RemoveContainer" containerID="3ba053774b99510def9eb1096d5851d891d9961deee4a7966a8cfbf9b5fbf159" Dec 03 13:25:43 crc kubenswrapper[4986]: E1203 13:25:43.945191 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:25:57 crc kubenswrapper[4986]: I1203 13:25:57.944171 4986 scope.go:117] "RemoveContainer" containerID="3ba053774b99510def9eb1096d5851d891d9961deee4a7966a8cfbf9b5fbf159" Dec 03 13:25:57 crc kubenswrapper[4986]: E1203 13:25:57.945053 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 
03 13:26:09 crc kubenswrapper[4986]: I1203 13:26:09.943510 4986 scope.go:117] "RemoveContainer" containerID="3ba053774b99510def9eb1096d5851d891d9961deee4a7966a8cfbf9b5fbf159" Dec 03 13:26:09 crc kubenswrapper[4986]: E1203 13:26:09.944439 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:26:22 crc kubenswrapper[4986]: I1203 13:26:22.944206 4986 scope.go:117] "RemoveContainer" containerID="3ba053774b99510def9eb1096d5851d891d9961deee4a7966a8cfbf9b5fbf159" Dec 03 13:26:22 crc kubenswrapper[4986]: E1203 13:26:22.944927 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:26:33 crc kubenswrapper[4986]: I1203 13:26:33.106239 4986 generic.go:334] "Generic (PLEG): container finished" podID="b4057123-895b-4436-93ac-02902a78df76" containerID="210212e40c80d36a3470647d5cb90ee04ac67f2ee9c6b9b35eee2419886e3c18" exitCode=0 Dec 03 13:26:33 crc kubenswrapper[4986]: I1203 13:26:33.106325 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f4dgg" event={"ID":"b4057123-895b-4436-93ac-02902a78df76","Type":"ContainerDied","Data":"210212e40c80d36a3470647d5cb90ee04ac67f2ee9c6b9b35eee2419886e3c18"} Dec 03 13:26:34 crc kubenswrapper[4986]: I1203 13:26:34.509097 4986 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f4dgg" Dec 03 13:26:34 crc kubenswrapper[4986]: I1203 13:26:34.642388 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zm9xl\" (UniqueName: \"kubernetes.io/projected/b4057123-895b-4436-93ac-02902a78df76-kube-api-access-zm9xl\") pod \"b4057123-895b-4436-93ac-02902a78df76\" (UID: \"b4057123-895b-4436-93ac-02902a78df76\") " Dec 03 13:26:34 crc kubenswrapper[4986]: I1203 13:26:34.642942 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4057123-895b-4436-93ac-02902a78df76-bootstrap-combined-ca-bundle\") pod \"b4057123-895b-4436-93ac-02902a78df76\" (UID: \"b4057123-895b-4436-93ac-02902a78df76\") " Dec 03 13:26:34 crc kubenswrapper[4986]: I1203 13:26:34.643148 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b4057123-895b-4436-93ac-02902a78df76-ssh-key\") pod \"b4057123-895b-4436-93ac-02902a78df76\" (UID: \"b4057123-895b-4436-93ac-02902a78df76\") " Dec 03 13:26:34 crc kubenswrapper[4986]: I1203 13:26:34.643256 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b4057123-895b-4436-93ac-02902a78df76-inventory\") pod \"b4057123-895b-4436-93ac-02902a78df76\" (UID: \"b4057123-895b-4436-93ac-02902a78df76\") " Dec 03 13:26:34 crc kubenswrapper[4986]: I1203 13:26:34.648817 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4057123-895b-4436-93ac-02902a78df76-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "b4057123-895b-4436-93ac-02902a78df76" (UID: "b4057123-895b-4436-93ac-02902a78df76"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:26:34 crc kubenswrapper[4986]: I1203 13:26:34.649186 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4057123-895b-4436-93ac-02902a78df76-kube-api-access-zm9xl" (OuterVolumeSpecName: "kube-api-access-zm9xl") pod "b4057123-895b-4436-93ac-02902a78df76" (UID: "b4057123-895b-4436-93ac-02902a78df76"). InnerVolumeSpecName "kube-api-access-zm9xl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:26:34 crc kubenswrapper[4986]: I1203 13:26:34.673185 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4057123-895b-4436-93ac-02902a78df76-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b4057123-895b-4436-93ac-02902a78df76" (UID: "b4057123-895b-4436-93ac-02902a78df76"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:26:34 crc kubenswrapper[4986]: I1203 13:26:34.674614 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4057123-895b-4436-93ac-02902a78df76-inventory" (OuterVolumeSpecName: "inventory") pod "b4057123-895b-4436-93ac-02902a78df76" (UID: "b4057123-895b-4436-93ac-02902a78df76"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:26:34 crc kubenswrapper[4986]: I1203 13:26:34.744780 4986 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4057123-895b-4436-93ac-02902a78df76-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 13:26:34 crc kubenswrapper[4986]: I1203 13:26:34.744815 4986 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b4057123-895b-4436-93ac-02902a78df76-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 13:26:34 crc kubenswrapper[4986]: I1203 13:26:34.744826 4986 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b4057123-895b-4436-93ac-02902a78df76-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 13:26:34 crc kubenswrapper[4986]: I1203 13:26:34.744834 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zm9xl\" (UniqueName: \"kubernetes.io/projected/b4057123-895b-4436-93ac-02902a78df76-kube-api-access-zm9xl\") on node \"crc\" DevicePath \"\"" Dec 03 13:26:35 crc kubenswrapper[4986]: I1203 13:26:35.125494 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f4dgg" event={"ID":"b4057123-895b-4436-93ac-02902a78df76","Type":"ContainerDied","Data":"fcd4df5501214669bae08e209a53e9a1accf9fb3420cb63c12bd6eaf6c664b11"} Dec 03 13:26:35 crc kubenswrapper[4986]: I1203 13:26:35.125592 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcd4df5501214669bae08e209a53e9a1accf9fb3420cb63c12bd6eaf6c664b11" Dec 03 13:26:35 crc kubenswrapper[4986]: I1203 13:26:35.125565 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f4dgg" Dec 03 13:26:35 crc kubenswrapper[4986]: I1203 13:26:35.230319 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2696c"] Dec 03 13:26:35 crc kubenswrapper[4986]: E1203 13:26:35.230719 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4057123-895b-4436-93ac-02902a78df76" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 03 13:26:35 crc kubenswrapper[4986]: I1203 13:26:35.230741 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4057123-895b-4436-93ac-02902a78df76" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 03 13:26:35 crc kubenswrapper[4986]: I1203 13:26:35.230944 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4057123-895b-4436-93ac-02902a78df76" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 03 13:26:35 crc kubenswrapper[4986]: I1203 13:26:35.231562 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2696c" Dec 03 13:26:35 crc kubenswrapper[4986]: I1203 13:26:35.233694 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 13:26:35 crc kubenswrapper[4986]: I1203 13:26:35.241364 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 13:26:35 crc kubenswrapper[4986]: I1203 13:26:35.241781 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jr75v" Dec 03 13:26:35 crc kubenswrapper[4986]: I1203 13:26:35.245254 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 13:26:35 crc kubenswrapper[4986]: I1203 13:26:35.249232 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2696c"] Dec 03 13:26:35 crc kubenswrapper[4986]: I1203 13:26:35.356550 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4dc44651-c2df-4ca1-abf4-f9093fa3f70d-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2696c\" (UID: \"4dc44651-c2df-4ca1-abf4-f9093fa3f70d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2696c" Dec 03 13:26:35 crc kubenswrapper[4986]: I1203 13:26:35.356623 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cm7m\" (UniqueName: \"kubernetes.io/projected/4dc44651-c2df-4ca1-abf4-f9093fa3f70d-kube-api-access-7cm7m\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2696c\" (UID: \"4dc44651-c2df-4ca1-abf4-f9093fa3f70d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2696c" Dec 03 13:26:35 crc kubenswrapper[4986]: I1203 13:26:35.357060 
4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4dc44651-c2df-4ca1-abf4-f9093fa3f70d-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2696c\" (UID: \"4dc44651-c2df-4ca1-abf4-f9093fa3f70d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2696c" Dec 03 13:26:35 crc kubenswrapper[4986]: I1203 13:26:35.459062 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4dc44651-c2df-4ca1-abf4-f9093fa3f70d-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2696c\" (UID: \"4dc44651-c2df-4ca1-abf4-f9093fa3f70d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2696c" Dec 03 13:26:35 crc kubenswrapper[4986]: I1203 13:26:35.459347 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cm7m\" (UniqueName: \"kubernetes.io/projected/4dc44651-c2df-4ca1-abf4-f9093fa3f70d-kube-api-access-7cm7m\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2696c\" (UID: \"4dc44651-c2df-4ca1-abf4-f9093fa3f70d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2696c" Dec 03 13:26:35 crc kubenswrapper[4986]: I1203 13:26:35.459511 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4dc44651-c2df-4ca1-abf4-f9093fa3f70d-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2696c\" (UID: \"4dc44651-c2df-4ca1-abf4-f9093fa3f70d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2696c" Dec 03 13:26:35 crc kubenswrapper[4986]: I1203 13:26:35.463415 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4dc44651-c2df-4ca1-abf4-f9093fa3f70d-inventory\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-2696c\" (UID: \"4dc44651-c2df-4ca1-abf4-f9093fa3f70d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2696c" Dec 03 13:26:35 crc kubenswrapper[4986]: I1203 13:26:35.463946 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4dc44651-c2df-4ca1-abf4-f9093fa3f70d-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2696c\" (UID: \"4dc44651-c2df-4ca1-abf4-f9093fa3f70d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2696c" Dec 03 13:26:35 crc kubenswrapper[4986]: I1203 13:26:35.480252 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cm7m\" (UniqueName: \"kubernetes.io/projected/4dc44651-c2df-4ca1-abf4-f9093fa3f70d-kube-api-access-7cm7m\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2696c\" (UID: \"4dc44651-c2df-4ca1-abf4-f9093fa3f70d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2696c" Dec 03 13:26:35 crc kubenswrapper[4986]: I1203 13:26:35.551920 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2696c" Dec 03 13:26:36 crc kubenswrapper[4986]: I1203 13:26:36.116629 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2696c"] Dec 03 13:26:36 crc kubenswrapper[4986]: I1203 13:26:36.124536 4986 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 13:26:36 crc kubenswrapper[4986]: I1203 13:26:36.137937 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2696c" event={"ID":"4dc44651-c2df-4ca1-abf4-f9093fa3f70d","Type":"ContainerStarted","Data":"4fc34281fd735e67afbae61a7e53764dce38a48b7c916efb228008fcbf15ac30"} Dec 03 13:26:36 crc kubenswrapper[4986]: I1203 13:26:36.944411 4986 scope.go:117] "RemoveContainer" containerID="3ba053774b99510def9eb1096d5851d891d9961deee4a7966a8cfbf9b5fbf159" Dec 03 13:26:37 crc kubenswrapper[4986]: I1203 13:26:37.148763 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2696c" event={"ID":"4dc44651-c2df-4ca1-abf4-f9093fa3f70d","Type":"ContainerStarted","Data":"e0a1426d672777ce0e7bda3c16a477cf27e90380dd1ba2039faac6c94cbabded"} Dec 03 13:26:37 crc kubenswrapper[4986]: I1203 13:26:37.173786 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2696c" podStartSLOduration=1.567884749 podStartE2EDuration="2.173760808s" podCreationTimestamp="2025-12-03 13:26:35 +0000 UTC" firstStartedPulling="2025-12-03 13:26:36.12433641 +0000 UTC m=+1855.590767601" lastFinishedPulling="2025-12-03 13:26:36.730212469 +0000 UTC m=+1856.196643660" observedRunningTime="2025-12-03 13:26:37.163631285 +0000 UTC m=+1856.630062496" watchObservedRunningTime="2025-12-03 13:26:37.173760808 +0000 UTC m=+1856.640192009" Dec 03 13:26:38 crc 
kubenswrapper[4986]: I1203 13:26:38.159762 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" event={"ID":"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c","Type":"ContainerStarted","Data":"8e39db31bf59f71d1f97b2921b1e19e08a20fea403196bf76d65bd1aeb3c113e"} Dec 03 13:27:25 crc kubenswrapper[4986]: I1203 13:27:25.048031 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-x5mrn"] Dec 03 13:27:25 crc kubenswrapper[4986]: I1203 13:27:25.057709 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-3f59-account-create-update-x5h2k"] Dec 03 13:27:25 crc kubenswrapper[4986]: I1203 13:27:25.066767 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-x5mrn"] Dec 03 13:27:25 crc kubenswrapper[4986]: I1203 13:27:25.077817 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-3f59-account-create-update-x5h2k"] Dec 03 13:27:26 crc kubenswrapper[4986]: I1203 13:27:26.062242 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-b707-account-create-update-k5hp4"] Dec 03 13:27:26 crc kubenswrapper[4986]: I1203 13:27:26.072298 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-28x7q"] Dec 03 13:27:26 crc kubenswrapper[4986]: I1203 13:27:26.080247 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-28x7q"] Dec 03 13:27:26 crc kubenswrapper[4986]: I1203 13:27:26.088235 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-b707-account-create-update-k5hp4"] Dec 03 13:27:26 crc kubenswrapper[4986]: I1203 13:27:26.096273 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-fabf-account-create-update-hhmms"] Dec 03 13:27:26 crc kubenswrapper[4986]: I1203 13:27:26.105131 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-8k4ww"] 
Dec 03 13:27:26 crc kubenswrapper[4986]: I1203 13:27:26.115511 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-fabf-account-create-update-hhmms"] Dec 03 13:27:26 crc kubenswrapper[4986]: I1203 13:27:26.125227 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-8k4ww"] Dec 03 13:27:26 crc kubenswrapper[4986]: I1203 13:27:26.959027 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39b99d6d-488c-4eba-ae0b-fe1e7d8ec681" path="/var/lib/kubelet/pods/39b99d6d-488c-4eba-ae0b-fe1e7d8ec681/volumes" Dec 03 13:27:26 crc kubenswrapper[4986]: I1203 13:27:26.960031 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47e9face-2119-40b5-a421-74eabeb2971a" path="/var/lib/kubelet/pods/47e9face-2119-40b5-a421-74eabeb2971a/volumes" Dec 03 13:27:26 crc kubenswrapper[4986]: I1203 13:27:26.960602 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50f31d81-05b7-43b6-b74a-07cd1a9f90b4" path="/var/lib/kubelet/pods/50f31d81-05b7-43b6-b74a-07cd1a9f90b4/volumes" Dec 03 13:27:26 crc kubenswrapper[4986]: I1203 13:27:26.961197 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bdae7ad-2192-4b77-a71a-075565088c9b" path="/var/lib/kubelet/pods/7bdae7ad-2192-4b77-a71a-075565088c9b/volumes" Dec 03 13:27:26 crc kubenswrapper[4986]: I1203 13:27:26.962214 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e65336fe-a982-408e-8858-894e6b336af0" path="/var/lib/kubelet/pods/e65336fe-a982-408e-8858-894e6b336af0/volumes" Dec 03 13:27:26 crc kubenswrapper[4986]: I1203 13:27:26.962888 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee3e34f6-cd96-4584-a59f-27e2e0d13ddd" path="/var/lib/kubelet/pods/ee3e34f6-cd96-4584-a59f-27e2e0d13ddd/volumes" Dec 03 13:27:38 crc kubenswrapper[4986]: I1203 13:27:38.044203 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-75vv8"] Dec 
03 13:27:38 crc kubenswrapper[4986]: I1203 13:27:38.054625 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-75vv8"] Dec 03 13:27:38 crc kubenswrapper[4986]: I1203 13:27:38.066641 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-7sp42"] Dec 03 13:27:38 crc kubenswrapper[4986]: I1203 13:27:38.079513 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b742-account-create-update-h8phq"] Dec 03 13:27:38 crc kubenswrapper[4986]: I1203 13:27:38.091514 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-7sp42"] Dec 03 13:27:38 crc kubenswrapper[4986]: I1203 13:27:38.100854 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-b742-account-create-update-h8phq"] Dec 03 13:27:38 crc kubenswrapper[4986]: I1203 13:27:38.954935 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0559f774-91cb-40f6-b047-15591ea39ebe" path="/var/lib/kubelet/pods/0559f774-91cb-40f6-b047-15591ea39ebe/volumes" Dec 03 13:27:38 crc kubenswrapper[4986]: I1203 13:27:38.955714 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97bbae24-c2a8-4d37-bcec-2d934bdf4cea" path="/var/lib/kubelet/pods/97bbae24-c2a8-4d37-bcec-2d934bdf4cea/volumes" Dec 03 13:27:38 crc kubenswrapper[4986]: I1203 13:27:38.956238 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f130f3db-61e7-49d0-ab61-3f3d16349860" path="/var/lib/kubelet/pods/f130f3db-61e7-49d0-ab61-3f3d16349860/volumes" Dec 03 13:27:52 crc kubenswrapper[4986]: I1203 13:27:52.052226 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-r7nf5"] Dec 03 13:27:52 crc kubenswrapper[4986]: I1203 13:27:52.064712 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-3a26-account-create-update-6d5g6"] Dec 03 13:27:52 crc kubenswrapper[4986]: I1203 13:27:52.092697 4986 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/barbican-0669-account-create-update-58qxr"] Dec 03 13:27:52 crc kubenswrapper[4986]: I1203 13:27:52.100916 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-0669-account-create-update-58qxr"] Dec 03 13:27:52 crc kubenswrapper[4986]: I1203 13:27:52.108035 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-3a26-account-create-update-6d5g6"] Dec 03 13:27:52 crc kubenswrapper[4986]: I1203 13:27:52.116082 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-r7nf5"] Dec 03 13:27:52 crc kubenswrapper[4986]: I1203 13:27:52.956788 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb71e9c0-6c77-4e47-8b4d-8e77a05cca59" path="/var/lib/kubelet/pods/bb71e9c0-6c77-4e47-8b4d-8e77a05cca59/volumes" Dec 03 13:27:52 crc kubenswrapper[4986]: I1203 13:27:52.957701 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbde4185-c1c7-47e8-b369-d23ff4d4092c" path="/var/lib/kubelet/pods/cbde4185-c1c7-47e8-b369-d23ff4d4092c/volumes" Dec 03 13:27:52 crc kubenswrapper[4986]: I1203 13:27:52.958386 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6fe773b-71aa-4762-ad97-bfb9636906c2" path="/var/lib/kubelet/pods/f6fe773b-71aa-4762-ad97-bfb9636906c2/volumes" Dec 03 13:27:55 crc kubenswrapper[4986]: I1203 13:27:55.588713 4986 scope.go:117] "RemoveContainer" containerID="fe6fb5a581d4b359f7e4eaf250175528ac0986041fb221f2d4c3bb36e410199f" Dec 03 13:27:55 crc kubenswrapper[4986]: I1203 13:27:55.610827 4986 scope.go:117] "RemoveContainer" containerID="f21d8c8045fad323b71f79041de6c44cb4ab826c5d615d90ebebe53fdba0c4a5" Dec 03 13:27:55 crc kubenswrapper[4986]: I1203 13:27:55.665881 4986 scope.go:117] "RemoveContainer" containerID="0f944103ad3896b9d879413f5384fc2fa00d73b5882cf87ae2aa7a25f3069320" Dec 03 13:27:55 crc kubenswrapper[4986]: I1203 13:27:55.706636 4986 scope.go:117] "RemoveContainer" 
containerID="6de88ecfba6dbb362b821f8765bc09066efc3682b0a90efe23287b844163c29b" Dec 03 13:27:55 crc kubenswrapper[4986]: I1203 13:27:55.745133 4986 scope.go:117] "RemoveContainer" containerID="20b07599d1311ccc0995729e3fcdb9087c9da6a6db9c10453163a3bb48f919ee" Dec 03 13:27:55 crc kubenswrapper[4986]: I1203 13:27:55.784326 4986 scope.go:117] "RemoveContainer" containerID="5bd2c846230b30451b644e278628c3e0410fd3d3eecf52bf7d01336263a49969" Dec 03 13:27:55 crc kubenswrapper[4986]: I1203 13:27:55.836842 4986 scope.go:117] "RemoveContainer" containerID="6ed6e0cec26c36bf59dd08dc66d9114352fac63214e60adde6faa248467decea" Dec 03 13:27:55 crc kubenswrapper[4986]: I1203 13:27:55.865454 4986 scope.go:117] "RemoveContainer" containerID="1ab9d00e0ff7b5c63de8c4b1518e1354faed78b2b09c9f3b82deb2a2bf8ba4d9" Dec 03 13:27:55 crc kubenswrapper[4986]: I1203 13:27:55.883690 4986 scope.go:117] "RemoveContainer" containerID="004d8f697ae1fd369046140c2c327428b2686d0bce755968a4b4e5e959ffbcbd" Dec 03 13:27:55 crc kubenswrapper[4986]: I1203 13:27:55.910849 4986 scope.go:117] "RemoveContainer" containerID="1276216af52852b74b7b994a366629d36e4ba71525209a2cba12d6ae3626304d" Dec 03 13:27:55 crc kubenswrapper[4986]: I1203 13:27:55.934694 4986 scope.go:117] "RemoveContainer" containerID="8440bfded328d55648f43303fa627aa731e69eba9b5e5f8bec2d88552b71ba8b" Dec 03 13:27:55 crc kubenswrapper[4986]: I1203 13:27:55.958518 4986 scope.go:117] "RemoveContainer" containerID="0c010c527909e71a1090188eadc464bde7d31041e303ca47265e912741e72735" Dec 03 13:27:58 crc kubenswrapper[4986]: I1203 13:27:58.029938 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-zbccr"] Dec 03 13:27:58 crc kubenswrapper[4986]: I1203 13:27:58.045457 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-zbccr"] Dec 03 13:27:58 crc kubenswrapper[4986]: I1203 13:27:58.956587 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="3efb58d2-ea87-4349-a581-519e1b458a37" path="/var/lib/kubelet/pods/3efb58d2-ea87-4349-a581-519e1b458a37/volumes" Dec 03 13:28:30 crc kubenswrapper[4986]: I1203 13:28:30.214194 4986 generic.go:334] "Generic (PLEG): container finished" podID="4dc44651-c2df-4ca1-abf4-f9093fa3f70d" containerID="e0a1426d672777ce0e7bda3c16a477cf27e90380dd1ba2039faac6c94cbabded" exitCode=0 Dec 03 13:28:30 crc kubenswrapper[4986]: I1203 13:28:30.214294 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2696c" event={"ID":"4dc44651-c2df-4ca1-abf4-f9093fa3f70d","Type":"ContainerDied","Data":"e0a1426d672777ce0e7bda3c16a477cf27e90380dd1ba2039faac6c94cbabded"} Dec 03 13:28:31 crc kubenswrapper[4986]: I1203 13:28:31.643597 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2696c" Dec 03 13:28:31 crc kubenswrapper[4986]: I1203 13:28:31.724830 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4dc44651-c2df-4ca1-abf4-f9093fa3f70d-inventory\") pod \"4dc44651-c2df-4ca1-abf4-f9093fa3f70d\" (UID: \"4dc44651-c2df-4ca1-abf4-f9093fa3f70d\") " Dec 03 13:28:31 crc kubenswrapper[4986]: I1203 13:28:31.724901 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4dc44651-c2df-4ca1-abf4-f9093fa3f70d-ssh-key\") pod \"4dc44651-c2df-4ca1-abf4-f9093fa3f70d\" (UID: \"4dc44651-c2df-4ca1-abf4-f9093fa3f70d\") " Dec 03 13:28:31 crc kubenswrapper[4986]: I1203 13:28:31.725083 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7cm7m\" (UniqueName: \"kubernetes.io/projected/4dc44651-c2df-4ca1-abf4-f9093fa3f70d-kube-api-access-7cm7m\") pod \"4dc44651-c2df-4ca1-abf4-f9093fa3f70d\" (UID: \"4dc44651-c2df-4ca1-abf4-f9093fa3f70d\") " Dec 03 13:28:31 crc 
kubenswrapper[4986]: I1203 13:28:31.733600 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dc44651-c2df-4ca1-abf4-f9093fa3f70d-kube-api-access-7cm7m" (OuterVolumeSpecName: "kube-api-access-7cm7m") pod "4dc44651-c2df-4ca1-abf4-f9093fa3f70d" (UID: "4dc44651-c2df-4ca1-abf4-f9093fa3f70d"). InnerVolumeSpecName "kube-api-access-7cm7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:28:31 crc kubenswrapper[4986]: I1203 13:28:31.755946 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dc44651-c2df-4ca1-abf4-f9093fa3f70d-inventory" (OuterVolumeSpecName: "inventory") pod "4dc44651-c2df-4ca1-abf4-f9093fa3f70d" (UID: "4dc44651-c2df-4ca1-abf4-f9093fa3f70d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:28:31 crc kubenswrapper[4986]: I1203 13:28:31.759554 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dc44651-c2df-4ca1-abf4-f9093fa3f70d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4dc44651-c2df-4ca1-abf4-f9093fa3f70d" (UID: "4dc44651-c2df-4ca1-abf4-f9093fa3f70d"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:28:31 crc kubenswrapper[4986]: I1203 13:28:31.827700 4986 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4dc44651-c2df-4ca1-abf4-f9093fa3f70d-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 13:28:31 crc kubenswrapper[4986]: I1203 13:28:31.827735 4986 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4dc44651-c2df-4ca1-abf4-f9093fa3f70d-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 13:28:31 crc kubenswrapper[4986]: I1203 13:28:31.827744 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7cm7m\" (UniqueName: \"kubernetes.io/projected/4dc44651-c2df-4ca1-abf4-f9093fa3f70d-kube-api-access-7cm7m\") on node \"crc\" DevicePath \"\"" Dec 03 13:28:32 crc kubenswrapper[4986]: I1203 13:28:32.235004 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2696c" event={"ID":"4dc44651-c2df-4ca1-abf4-f9093fa3f70d","Type":"ContainerDied","Data":"4fc34281fd735e67afbae61a7e53764dce38a48b7c916efb228008fcbf15ac30"} Dec 03 13:28:32 crc kubenswrapper[4986]: I1203 13:28:32.235044 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4fc34281fd735e67afbae61a7e53764dce38a48b7c916efb228008fcbf15ac30" Dec 03 13:28:32 crc kubenswrapper[4986]: I1203 13:28:32.235079 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2696c" Dec 03 13:28:32 crc kubenswrapper[4986]: I1203 13:28:32.317513 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8smpl"] Dec 03 13:28:32 crc kubenswrapper[4986]: E1203 13:28:32.317897 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dc44651-c2df-4ca1-abf4-f9093fa3f70d" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 03 13:28:32 crc kubenswrapper[4986]: I1203 13:28:32.317918 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dc44651-c2df-4ca1-abf4-f9093fa3f70d" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 03 13:28:32 crc kubenswrapper[4986]: I1203 13:28:32.318162 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dc44651-c2df-4ca1-abf4-f9093fa3f70d" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 03 13:28:32 crc kubenswrapper[4986]: I1203 13:28:32.319062 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8smpl" Dec 03 13:28:32 crc kubenswrapper[4986]: I1203 13:28:32.325692 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 13:28:32 crc kubenswrapper[4986]: I1203 13:28:32.325797 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 13:28:32 crc kubenswrapper[4986]: I1203 13:28:32.326012 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 13:28:32 crc kubenswrapper[4986]: I1203 13:28:32.326885 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jr75v" Dec 03 13:28:32 crc kubenswrapper[4986]: I1203 13:28:32.331480 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8smpl"] Dec 03 13:28:32 crc kubenswrapper[4986]: I1203 13:28:32.439738 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbmcx\" (UniqueName: \"kubernetes.io/projected/8fec002b-a660-4a80-8a57-51a2ce32cf29-kube-api-access-wbmcx\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8smpl\" (UID: \"8fec002b-a660-4a80-8a57-51a2ce32cf29\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8smpl" Dec 03 13:28:32 crc kubenswrapper[4986]: I1203 13:28:32.439786 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8fec002b-a660-4a80-8a57-51a2ce32cf29-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8smpl\" (UID: \"8fec002b-a660-4a80-8a57-51a2ce32cf29\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8smpl" Dec 03 13:28:32 crc kubenswrapper[4986]: I1203 
13:28:32.439811 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8fec002b-a660-4a80-8a57-51a2ce32cf29-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8smpl\" (UID: \"8fec002b-a660-4a80-8a57-51a2ce32cf29\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8smpl" Dec 03 13:28:32 crc kubenswrapper[4986]: I1203 13:28:32.542644 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbmcx\" (UniqueName: \"kubernetes.io/projected/8fec002b-a660-4a80-8a57-51a2ce32cf29-kube-api-access-wbmcx\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8smpl\" (UID: \"8fec002b-a660-4a80-8a57-51a2ce32cf29\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8smpl" Dec 03 13:28:32 crc kubenswrapper[4986]: I1203 13:28:32.543234 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8fec002b-a660-4a80-8a57-51a2ce32cf29-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8smpl\" (UID: \"8fec002b-a660-4a80-8a57-51a2ce32cf29\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8smpl" Dec 03 13:28:32 crc kubenswrapper[4986]: I1203 13:28:32.543448 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8fec002b-a660-4a80-8a57-51a2ce32cf29-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8smpl\" (UID: \"8fec002b-a660-4a80-8a57-51a2ce32cf29\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8smpl" Dec 03 13:28:32 crc kubenswrapper[4986]: I1203 13:28:32.547972 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8fec002b-a660-4a80-8a57-51a2ce32cf29-inventory\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-8smpl\" (UID: \"8fec002b-a660-4a80-8a57-51a2ce32cf29\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8smpl" Dec 03 13:28:32 crc kubenswrapper[4986]: I1203 13:28:32.551987 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8fec002b-a660-4a80-8a57-51a2ce32cf29-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8smpl\" (UID: \"8fec002b-a660-4a80-8a57-51a2ce32cf29\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8smpl" Dec 03 13:28:32 crc kubenswrapper[4986]: I1203 13:28:32.562431 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbmcx\" (UniqueName: \"kubernetes.io/projected/8fec002b-a660-4a80-8a57-51a2ce32cf29-kube-api-access-wbmcx\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8smpl\" (UID: \"8fec002b-a660-4a80-8a57-51a2ce32cf29\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8smpl" Dec 03 13:28:32 crc kubenswrapper[4986]: I1203 13:28:32.641963 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8smpl" Dec 03 13:28:33 crc kubenswrapper[4986]: I1203 13:28:33.046930 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-4s25c"] Dec 03 13:28:33 crc kubenswrapper[4986]: I1203 13:28:33.070323 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-4s25c"] Dec 03 13:28:33 crc kubenswrapper[4986]: I1203 13:28:33.177691 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8smpl"] Dec 03 13:28:33 crc kubenswrapper[4986]: W1203 13:28:33.181765 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8fec002b_a660_4a80_8a57_51a2ce32cf29.slice/crio-b98a3607176da6585e66dcb54e8f03b30a0ea9d3d3af499b9d593aaee40968e3 WatchSource:0}: Error finding container b98a3607176da6585e66dcb54e8f03b30a0ea9d3d3af499b9d593aaee40968e3: Status 404 returned error can't find the container with id b98a3607176da6585e66dcb54e8f03b30a0ea9d3d3af499b9d593aaee40968e3 Dec 03 13:28:33 crc kubenswrapper[4986]: I1203 13:28:33.243554 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8smpl" event={"ID":"8fec002b-a660-4a80-8a57-51a2ce32cf29","Type":"ContainerStarted","Data":"b98a3607176da6585e66dcb54e8f03b30a0ea9d3d3af499b9d593aaee40968e3"} Dec 03 13:28:34 crc kubenswrapper[4986]: I1203 13:28:34.251933 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8smpl" event={"ID":"8fec002b-a660-4a80-8a57-51a2ce32cf29","Type":"ContainerStarted","Data":"e09fb382b67374c3266abdf72cbbb50efd54bb78fd6d1589b0a605cd690e829e"} Dec 03 13:28:34 crc kubenswrapper[4986]: I1203 13:28:34.276543 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8smpl" podStartSLOduration=1.816041978 podStartE2EDuration="2.276522851s" podCreationTimestamp="2025-12-03 13:28:32 +0000 UTC" firstStartedPulling="2025-12-03 13:28:33.185036678 +0000 UTC m=+1972.651467879" lastFinishedPulling="2025-12-03 13:28:33.645517561 +0000 UTC m=+1973.111948752" observedRunningTime="2025-12-03 13:28:34.270289243 +0000 UTC m=+1973.736720434" watchObservedRunningTime="2025-12-03 13:28:34.276522851 +0000 UTC m=+1973.742954042" Dec 03 13:28:34 crc kubenswrapper[4986]: I1203 13:28:34.957962 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14924968-6b9f-4a33-b504-dbfd64956b30" path="/var/lib/kubelet/pods/14924968-6b9f-4a33-b504-dbfd64956b30/volumes" Dec 03 13:28:36 crc kubenswrapper[4986]: I1203 13:28:36.032870 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-wrbjn"] Dec 03 13:28:36 crc kubenswrapper[4986]: I1203 13:28:36.041120 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-8v24s"] Dec 03 13:28:36 crc kubenswrapper[4986]: I1203 13:28:36.052413 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-wrbjn"] Dec 03 13:28:36 crc kubenswrapper[4986]: I1203 13:28:36.061785 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-8v24s"] Dec 03 13:28:36 crc kubenswrapper[4986]: I1203 13:28:36.953780 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ca26c94-4546-4d5d-8600-30257c724198" path="/var/lib/kubelet/pods/9ca26c94-4546-4d5d-8600-30257c724198/volumes" Dec 03 13:28:36 crc kubenswrapper[4986]: I1203 13:28:36.954579 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e71caade-d3ac-45c2-8369-2c2a1d896370" path="/var/lib/kubelet/pods/e71caade-d3ac-45c2-8369-2c2a1d896370/volumes" Dec 03 13:28:40 crc kubenswrapper[4986]: I1203 13:28:40.035562 4986 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-2k7gk"] Dec 03 13:28:40 crc kubenswrapper[4986]: I1203 13:28:40.047864 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-2k7gk"] Dec 03 13:28:40 crc kubenswrapper[4986]: I1203 13:28:40.954775 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d560e389-543d-4341-a450-e6cb0f2a3057" path="/var/lib/kubelet/pods/d560e389-543d-4341-a450-e6cb0f2a3057/volumes" Dec 03 13:28:51 crc kubenswrapper[4986]: I1203 13:28:51.042793 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-8f5bt"] Dec 03 13:28:51 crc kubenswrapper[4986]: I1203 13:28:51.053055 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-8f5bt"] Dec 03 13:28:52 crc kubenswrapper[4986]: I1203 13:28:52.954940 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0874137d-06da-450a-9e93-ad53257c5115" path="/var/lib/kubelet/pods/0874137d-06da-450a-9e93-ad53257c5115/volumes" Dec 03 13:28:56 crc kubenswrapper[4986]: I1203 13:28:56.164593 4986 scope.go:117] "RemoveContainer" containerID="af08236fb2a6423e2ffea78204750043b885a47204fb99a7e8a9956c7d5d0a68" Dec 03 13:28:56 crc kubenswrapper[4986]: I1203 13:28:56.202934 4986 scope.go:117] "RemoveContainer" containerID="b86fc780bc92a3f4d38f408567bda5883068934dad3ebc7618aec704e587797b" Dec 03 13:28:56 crc kubenswrapper[4986]: I1203 13:28:56.268999 4986 scope.go:117] "RemoveContainer" containerID="dbd6cfcaed1a911968e997f6a395fdd03f3717d39b70c037e0fa75aadf9e7f0d" Dec 03 13:28:56 crc kubenswrapper[4986]: I1203 13:28:56.309169 4986 scope.go:117] "RemoveContainer" containerID="eb1cbefc6c37e055dbcce2b749fbdf02cc41bb51b8fcdfd46e94ad0193235add" Dec 03 13:28:56 crc kubenswrapper[4986]: I1203 13:28:56.349700 4986 scope.go:117] "RemoveContainer" containerID="83c8b6807b6a9e6d3bfae00be0405ade99cc8d834fdaef0b74ab5b47706fa3a4" Dec 03 13:28:56 crc kubenswrapper[4986]: I1203 
13:28:56.388745 4986 scope.go:117] "RemoveContainer" containerID="dd577315d509f9c14bcfc2c833af575a89f8978df77e1e1cbb64fa674057668c" Dec 03 13:28:57 crc kubenswrapper[4986]: I1203 13:28:57.026747 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-qpv6c"] Dec 03 13:28:57 crc kubenswrapper[4986]: I1203 13:28:57.034580 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-qpv6c"] Dec 03 13:28:58 crc kubenswrapper[4986]: I1203 13:28:58.957057 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b8ed42a-89ce-4098-9489-5291e678bf18" path="/var/lib/kubelet/pods/7b8ed42a-89ce-4098-9489-5291e678bf18/volumes" Dec 03 13:29:03 crc kubenswrapper[4986]: I1203 13:29:03.491629 4986 patch_prober.go:28] interesting pod/machine-config-daemon-xggpj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 13:29:03 crc kubenswrapper[4986]: I1203 13:29:03.491946 4986 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 13:29:30 crc kubenswrapper[4986]: I1203 13:29:30.040181 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-wv2h7"] Dec 03 13:29:30 crc kubenswrapper[4986]: I1203 13:29:30.048776 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-wv2h7"] Dec 03 13:29:30 crc kubenswrapper[4986]: I1203 13:29:30.954041 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67426b10-306c-4d32-94e7-99267dc8e435" 
path="/var/lib/kubelet/pods/67426b10-306c-4d32-94e7-99267dc8e435/volumes" Dec 03 13:29:31 crc kubenswrapper[4986]: I1203 13:29:31.042167 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-nqdzr"] Dec 03 13:29:31 crc kubenswrapper[4986]: I1203 13:29:31.051864 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-c47e-account-create-update-g6vlv"] Dec 03 13:29:31 crc kubenswrapper[4986]: I1203 13:29:31.059455 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-nqdzr"] Dec 03 13:29:31 crc kubenswrapper[4986]: I1203 13:29:31.066153 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-c47e-account-create-update-g6vlv"] Dec 03 13:29:32 crc kubenswrapper[4986]: I1203 13:29:32.037265 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-22db-account-create-update-ftg9x"] Dec 03 13:29:32 crc kubenswrapper[4986]: I1203 13:29:32.048695 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-e9c3-account-create-update-7vqrm"] Dec 03 13:29:32 crc kubenswrapper[4986]: I1203 13:29:32.060219 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-qbwm4"] Dec 03 13:29:32 crc kubenswrapper[4986]: I1203 13:29:32.069841 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-e9c3-account-create-update-7vqrm"] Dec 03 13:29:32 crc kubenswrapper[4986]: I1203 13:29:32.078215 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-22db-account-create-update-ftg9x"] Dec 03 13:29:32 crc kubenswrapper[4986]: I1203 13:29:32.087532 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-qbwm4"] Dec 03 13:29:32 crc kubenswrapper[4986]: I1203 13:29:32.957604 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f8a0583-c937-4c77-8d43-ea7dcb406886" 
path="/var/lib/kubelet/pods/5f8a0583-c937-4c77-8d43-ea7dcb406886/volumes" Dec 03 13:29:32 crc kubenswrapper[4986]: I1203 13:29:32.959455 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c8061fb-865f-46dd-846b-87907d3f12f7" path="/var/lib/kubelet/pods/8c8061fb-865f-46dd-846b-87907d3f12f7/volumes" Dec 03 13:29:32 crc kubenswrapper[4986]: I1203 13:29:32.960731 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3245c21-420c-442d-934f-73f6917fcb21" path="/var/lib/kubelet/pods/c3245c21-420c-442d-934f-73f6917fcb21/volumes" Dec 03 13:29:32 crc kubenswrapper[4986]: I1203 13:29:32.962041 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6b2f7e4-b8b0-4440-ac4b-291ea7b92d36" path="/var/lib/kubelet/pods/c6b2f7e4-b8b0-4440-ac4b-291ea7b92d36/volumes" Dec 03 13:29:32 crc kubenswrapper[4986]: I1203 13:29:32.964399 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f756aad9-8f89-429d-af35-c412a37c78cb" path="/var/lib/kubelet/pods/f756aad9-8f89-429d-af35-c412a37c78cb/volumes" Dec 03 13:29:33 crc kubenswrapper[4986]: I1203 13:29:33.491331 4986 patch_prober.go:28] interesting pod/machine-config-daemon-xggpj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 13:29:33 crc kubenswrapper[4986]: I1203 13:29:33.491404 4986 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 13:29:49 crc kubenswrapper[4986]: I1203 13:29:49.922873 4986 generic.go:334] "Generic (PLEG): container finished" podID="8fec002b-a660-4a80-8a57-51a2ce32cf29" 
containerID="e09fb382b67374c3266abdf72cbbb50efd54bb78fd6d1589b0a605cd690e829e" exitCode=0 Dec 03 13:29:49 crc kubenswrapper[4986]: I1203 13:29:49.922940 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8smpl" event={"ID":"8fec002b-a660-4a80-8a57-51a2ce32cf29","Type":"ContainerDied","Data":"e09fb382b67374c3266abdf72cbbb50efd54bb78fd6d1589b0a605cd690e829e"} Dec 03 13:29:51 crc kubenswrapper[4986]: I1203 13:29:51.317363 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8smpl" Dec 03 13:29:51 crc kubenswrapper[4986]: I1203 13:29:51.472868 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8fec002b-a660-4a80-8a57-51a2ce32cf29-inventory\") pod \"8fec002b-a660-4a80-8a57-51a2ce32cf29\" (UID: \"8fec002b-a660-4a80-8a57-51a2ce32cf29\") " Dec 03 13:29:51 crc kubenswrapper[4986]: I1203 13:29:51.473274 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbmcx\" (UniqueName: \"kubernetes.io/projected/8fec002b-a660-4a80-8a57-51a2ce32cf29-kube-api-access-wbmcx\") pod \"8fec002b-a660-4a80-8a57-51a2ce32cf29\" (UID: \"8fec002b-a660-4a80-8a57-51a2ce32cf29\") " Dec 03 13:29:51 crc kubenswrapper[4986]: I1203 13:29:51.473349 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8fec002b-a660-4a80-8a57-51a2ce32cf29-ssh-key\") pod \"8fec002b-a660-4a80-8a57-51a2ce32cf29\" (UID: \"8fec002b-a660-4a80-8a57-51a2ce32cf29\") " Dec 03 13:29:51 crc kubenswrapper[4986]: I1203 13:29:51.479690 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fec002b-a660-4a80-8a57-51a2ce32cf29-kube-api-access-wbmcx" (OuterVolumeSpecName: "kube-api-access-wbmcx") pod 
"8fec002b-a660-4a80-8a57-51a2ce32cf29" (UID: "8fec002b-a660-4a80-8a57-51a2ce32cf29"). InnerVolumeSpecName "kube-api-access-wbmcx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:29:51 crc kubenswrapper[4986]: I1203 13:29:51.501400 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fec002b-a660-4a80-8a57-51a2ce32cf29-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "8fec002b-a660-4a80-8a57-51a2ce32cf29" (UID: "8fec002b-a660-4a80-8a57-51a2ce32cf29"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:29:51 crc kubenswrapper[4986]: I1203 13:29:51.517496 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fec002b-a660-4a80-8a57-51a2ce32cf29-inventory" (OuterVolumeSpecName: "inventory") pod "8fec002b-a660-4a80-8a57-51a2ce32cf29" (UID: "8fec002b-a660-4a80-8a57-51a2ce32cf29"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:29:51 crc kubenswrapper[4986]: I1203 13:29:51.575118 4986 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8fec002b-a660-4a80-8a57-51a2ce32cf29-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 13:29:51 crc kubenswrapper[4986]: I1203 13:29:51.575146 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbmcx\" (UniqueName: \"kubernetes.io/projected/8fec002b-a660-4a80-8a57-51a2ce32cf29-kube-api-access-wbmcx\") on node \"crc\" DevicePath \"\"" Dec 03 13:29:51 crc kubenswrapper[4986]: I1203 13:29:51.575156 4986 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8fec002b-a660-4a80-8a57-51a2ce32cf29-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 13:29:51 crc kubenswrapper[4986]: I1203 13:29:51.948652 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8smpl" event={"ID":"8fec002b-a660-4a80-8a57-51a2ce32cf29","Type":"ContainerDied","Data":"b98a3607176da6585e66dcb54e8f03b30a0ea9d3d3af499b9d593aaee40968e3"} Dec 03 13:29:51 crc kubenswrapper[4986]: I1203 13:29:51.948705 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b98a3607176da6585e66dcb54e8f03b30a0ea9d3d3af499b9d593aaee40968e3" Dec 03 13:29:51 crc kubenswrapper[4986]: I1203 13:29:51.948728 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8smpl" Dec 03 13:29:52 crc kubenswrapper[4986]: I1203 13:29:52.040625 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mq9d2"] Dec 03 13:29:52 crc kubenswrapper[4986]: E1203 13:29:52.041072 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fec002b-a660-4a80-8a57-51a2ce32cf29" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 03 13:29:52 crc kubenswrapper[4986]: I1203 13:29:52.041097 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fec002b-a660-4a80-8a57-51a2ce32cf29" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 03 13:29:52 crc kubenswrapper[4986]: I1203 13:29:52.041376 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fec002b-a660-4a80-8a57-51a2ce32cf29" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 03 13:29:52 crc kubenswrapper[4986]: I1203 13:29:52.042123 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mq9d2" Dec 03 13:29:52 crc kubenswrapper[4986]: I1203 13:29:52.045677 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 13:29:52 crc kubenswrapper[4986]: I1203 13:29:52.045810 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 13:29:52 crc kubenswrapper[4986]: I1203 13:29:52.046606 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jr75v" Dec 03 13:29:52 crc kubenswrapper[4986]: I1203 13:29:52.046880 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 13:29:52 crc kubenswrapper[4986]: I1203 13:29:52.055468 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mq9d2"] Dec 03 13:29:52 crc kubenswrapper[4986]: I1203 13:29:52.187490 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6d8473a8-d750-4ff5-84be-96088a3eea45-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mq9d2\" (UID: \"6d8473a8-d750-4ff5-84be-96088a3eea45\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mq9d2" Dec 03 13:29:52 crc kubenswrapper[4986]: I1203 13:29:52.187564 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gspqm\" (UniqueName: \"kubernetes.io/projected/6d8473a8-d750-4ff5-84be-96088a3eea45-kube-api-access-gspqm\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mq9d2\" (UID: \"6d8473a8-d750-4ff5-84be-96088a3eea45\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mq9d2" Dec 03 13:29:52 crc kubenswrapper[4986]: I1203 
13:29:52.188249 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6d8473a8-d750-4ff5-84be-96088a3eea45-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mq9d2\" (UID: \"6d8473a8-d750-4ff5-84be-96088a3eea45\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mq9d2" Dec 03 13:29:52 crc kubenswrapper[4986]: I1203 13:29:52.290251 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6d8473a8-d750-4ff5-84be-96088a3eea45-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mq9d2\" (UID: \"6d8473a8-d750-4ff5-84be-96088a3eea45\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mq9d2" Dec 03 13:29:52 crc kubenswrapper[4986]: I1203 13:29:52.290354 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6d8473a8-d750-4ff5-84be-96088a3eea45-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mq9d2\" (UID: \"6d8473a8-d750-4ff5-84be-96088a3eea45\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mq9d2" Dec 03 13:29:52 crc kubenswrapper[4986]: I1203 13:29:52.290395 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gspqm\" (UniqueName: \"kubernetes.io/projected/6d8473a8-d750-4ff5-84be-96088a3eea45-kube-api-access-gspqm\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mq9d2\" (UID: \"6d8473a8-d750-4ff5-84be-96088a3eea45\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mq9d2" Dec 03 13:29:52 crc kubenswrapper[4986]: I1203 13:29:52.294370 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6d8473a8-d750-4ff5-84be-96088a3eea45-ssh-key\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-mq9d2\" (UID: \"6d8473a8-d750-4ff5-84be-96088a3eea45\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mq9d2" Dec 03 13:29:52 crc kubenswrapper[4986]: I1203 13:29:52.294808 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6d8473a8-d750-4ff5-84be-96088a3eea45-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mq9d2\" (UID: \"6d8473a8-d750-4ff5-84be-96088a3eea45\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mq9d2" Dec 03 13:29:52 crc kubenswrapper[4986]: I1203 13:29:52.307852 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gspqm\" (UniqueName: \"kubernetes.io/projected/6d8473a8-d750-4ff5-84be-96088a3eea45-kube-api-access-gspqm\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mq9d2\" (UID: \"6d8473a8-d750-4ff5-84be-96088a3eea45\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mq9d2" Dec 03 13:29:52 crc kubenswrapper[4986]: I1203 13:29:52.373723 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mq9d2" Dec 03 13:29:52 crc kubenswrapper[4986]: I1203 13:29:52.987225 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mq9d2"] Dec 03 13:29:53 crc kubenswrapper[4986]: I1203 13:29:53.976656 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mq9d2" event={"ID":"6d8473a8-d750-4ff5-84be-96088a3eea45","Type":"ContainerStarted","Data":"0f34290816a305f3a4feb60330ccfb95995b3a8e490c6be99e938a2e3c380fc3"} Dec 03 13:29:55 crc kubenswrapper[4986]: I1203 13:29:55.998890 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mq9d2" event={"ID":"6d8473a8-d750-4ff5-84be-96088a3eea45","Type":"ContainerStarted","Data":"d981ee4453353f8f4d0c3e297fdec26d4faa77c79b7169e03ddce9c4cdcdc3e7"} Dec 03 13:29:56 crc kubenswrapper[4986]: I1203 13:29:56.534101 4986 scope.go:117] "RemoveContainer" containerID="4c4753feaadd0261c00da6983cc4d7980e020557577e4254fee703cc8b3362e7" Dec 03 13:29:56 crc kubenswrapper[4986]: I1203 13:29:56.558413 4986 scope.go:117] "RemoveContainer" containerID="6d37bbc015c90c052e92f17f5bc953ba2333fdb98cb2a10b033bf2717f8f16a7" Dec 03 13:29:56 crc kubenswrapper[4986]: I1203 13:29:56.621298 4986 scope.go:117] "RemoveContainer" containerID="62badd142a85ea6f0382f6f27db0497df741d71dba57238651e1f5009ba07b33" Dec 03 13:29:56 crc kubenswrapper[4986]: I1203 13:29:56.675053 4986 scope.go:117] "RemoveContainer" containerID="700d858f0986e9aa3ad329ac3ac7246c350f9e40f1ad140df00c0b85de0bda74" Dec 03 13:29:56 crc kubenswrapper[4986]: I1203 13:29:56.707249 4986 scope.go:117] "RemoveContainer" containerID="85d977ecdb3eb494457ba18ea8ef8b41b7366ed30b9bebe5bb21dcf9596a6dc7" Dec 03 13:29:56 crc kubenswrapper[4986]: I1203 13:29:56.748594 4986 scope.go:117] "RemoveContainer" 
containerID="18886cff3842efc33ba6818a9946242f28ff95bfae0ee8564fed6be9b265c47f" Dec 03 13:29:56 crc kubenswrapper[4986]: I1203 13:29:56.802667 4986 scope.go:117] "RemoveContainer" containerID="a3153ae505ed0e0cde5955015e094b254025fb8a23ac54ca88e4a31915d6058d" Dec 03 13:30:00 crc kubenswrapper[4986]: I1203 13:30:00.126872 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mq9d2" podStartSLOduration=5.933119842 podStartE2EDuration="8.126851558s" podCreationTimestamp="2025-12-03 13:29:52 +0000 UTC" firstStartedPulling="2025-12-03 13:29:52.979432435 +0000 UTC m=+2052.445863636" lastFinishedPulling="2025-12-03 13:29:55.173164161 +0000 UTC m=+2054.639595352" observedRunningTime="2025-12-03 13:29:56.020910172 +0000 UTC m=+2055.487341373" watchObservedRunningTime="2025-12-03 13:30:00.126851558 +0000 UTC m=+2059.593282749" Dec 03 13:30:00 crc kubenswrapper[4986]: I1203 13:30:00.134546 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412810-v7zdw"] Dec 03 13:30:00 crc kubenswrapper[4986]: I1203 13:30:00.135771 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412810-v7zdw" Dec 03 13:30:00 crc kubenswrapper[4986]: I1203 13:30:00.139160 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 03 13:30:00 crc kubenswrapper[4986]: I1203 13:30:00.139254 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 03 13:30:00 crc kubenswrapper[4986]: I1203 13:30:00.143312 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412810-v7zdw"] Dec 03 13:30:00 crc kubenswrapper[4986]: I1203 13:30:00.271692 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aff8aecd-2002-457a-86e6-9fdb87097b4f-secret-volume\") pod \"collect-profiles-29412810-v7zdw\" (UID: \"aff8aecd-2002-457a-86e6-9fdb87097b4f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412810-v7zdw" Dec 03 13:30:00 crc kubenswrapper[4986]: I1203 13:30:00.271825 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt5fv\" (UniqueName: \"kubernetes.io/projected/aff8aecd-2002-457a-86e6-9fdb87097b4f-kube-api-access-gt5fv\") pod \"collect-profiles-29412810-v7zdw\" (UID: \"aff8aecd-2002-457a-86e6-9fdb87097b4f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412810-v7zdw" Dec 03 13:30:00 crc kubenswrapper[4986]: I1203 13:30:00.272051 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aff8aecd-2002-457a-86e6-9fdb87097b4f-config-volume\") pod \"collect-profiles-29412810-v7zdw\" (UID: \"aff8aecd-2002-457a-86e6-9fdb87097b4f\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29412810-v7zdw" Dec 03 13:30:00 crc kubenswrapper[4986]: I1203 13:30:00.375692 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aff8aecd-2002-457a-86e6-9fdb87097b4f-config-volume\") pod \"collect-profiles-29412810-v7zdw\" (UID: \"aff8aecd-2002-457a-86e6-9fdb87097b4f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412810-v7zdw" Dec 03 13:30:00 crc kubenswrapper[4986]: I1203 13:30:00.374383 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aff8aecd-2002-457a-86e6-9fdb87097b4f-config-volume\") pod \"collect-profiles-29412810-v7zdw\" (UID: \"aff8aecd-2002-457a-86e6-9fdb87097b4f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412810-v7zdw" Dec 03 13:30:00 crc kubenswrapper[4986]: I1203 13:30:00.375945 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aff8aecd-2002-457a-86e6-9fdb87097b4f-secret-volume\") pod \"collect-profiles-29412810-v7zdw\" (UID: \"aff8aecd-2002-457a-86e6-9fdb87097b4f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412810-v7zdw" Dec 03 13:30:00 crc kubenswrapper[4986]: I1203 13:30:00.376079 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gt5fv\" (UniqueName: \"kubernetes.io/projected/aff8aecd-2002-457a-86e6-9fdb87097b4f-kube-api-access-gt5fv\") pod \"collect-profiles-29412810-v7zdw\" (UID: \"aff8aecd-2002-457a-86e6-9fdb87097b4f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412810-v7zdw" Dec 03 13:30:00 crc kubenswrapper[4986]: I1203 13:30:00.388493 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/aff8aecd-2002-457a-86e6-9fdb87097b4f-secret-volume\") pod \"collect-profiles-29412810-v7zdw\" (UID: \"aff8aecd-2002-457a-86e6-9fdb87097b4f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412810-v7zdw" Dec 03 13:30:00 crc kubenswrapper[4986]: I1203 13:30:00.390597 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gt5fv\" (UniqueName: \"kubernetes.io/projected/aff8aecd-2002-457a-86e6-9fdb87097b4f-kube-api-access-gt5fv\") pod \"collect-profiles-29412810-v7zdw\" (UID: \"aff8aecd-2002-457a-86e6-9fdb87097b4f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412810-v7zdw" Dec 03 13:30:00 crc kubenswrapper[4986]: I1203 13:30:00.473510 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412810-v7zdw" Dec 03 13:30:00 crc kubenswrapper[4986]: I1203 13:30:00.942112 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412810-v7zdw"] Dec 03 13:30:01 crc kubenswrapper[4986]: I1203 13:30:01.049148 4986 generic.go:334] "Generic (PLEG): container finished" podID="6d8473a8-d750-4ff5-84be-96088a3eea45" containerID="d981ee4453353f8f4d0c3e297fdec26d4faa77c79b7169e03ddce9c4cdcdc3e7" exitCode=0 Dec 03 13:30:01 crc kubenswrapper[4986]: I1203 13:30:01.049236 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mq9d2" event={"ID":"6d8473a8-d750-4ff5-84be-96088a3eea45","Type":"ContainerDied","Data":"d981ee4453353f8f4d0c3e297fdec26d4faa77c79b7169e03ddce9c4cdcdc3e7"} Dec 03 13:30:01 crc kubenswrapper[4986]: I1203 13:30:01.050759 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412810-v7zdw" 
event={"ID":"aff8aecd-2002-457a-86e6-9fdb87097b4f","Type":"ContainerStarted","Data":"e92bc18f1bc41e6fa731d08613b00bde4125d0169abde60d4f62c7bb45435502"} Dec 03 13:30:02 crc kubenswrapper[4986]: I1203 13:30:02.531892 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mq9d2" Dec 03 13:30:02 crc kubenswrapper[4986]: I1203 13:30:02.723320 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gspqm\" (UniqueName: \"kubernetes.io/projected/6d8473a8-d750-4ff5-84be-96088a3eea45-kube-api-access-gspqm\") pod \"6d8473a8-d750-4ff5-84be-96088a3eea45\" (UID: \"6d8473a8-d750-4ff5-84be-96088a3eea45\") " Dec 03 13:30:02 crc kubenswrapper[4986]: I1203 13:30:02.723603 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6d8473a8-d750-4ff5-84be-96088a3eea45-ssh-key\") pod \"6d8473a8-d750-4ff5-84be-96088a3eea45\" (UID: \"6d8473a8-d750-4ff5-84be-96088a3eea45\") " Dec 03 13:30:02 crc kubenswrapper[4986]: I1203 13:30:02.723667 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6d8473a8-d750-4ff5-84be-96088a3eea45-inventory\") pod \"6d8473a8-d750-4ff5-84be-96088a3eea45\" (UID: \"6d8473a8-d750-4ff5-84be-96088a3eea45\") " Dec 03 13:30:02 crc kubenswrapper[4986]: I1203 13:30:02.729441 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d8473a8-d750-4ff5-84be-96088a3eea45-kube-api-access-gspqm" (OuterVolumeSpecName: "kube-api-access-gspqm") pod "6d8473a8-d750-4ff5-84be-96088a3eea45" (UID: "6d8473a8-d750-4ff5-84be-96088a3eea45"). InnerVolumeSpecName "kube-api-access-gspqm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:30:02 crc kubenswrapper[4986]: I1203 13:30:02.756021 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d8473a8-d750-4ff5-84be-96088a3eea45-inventory" (OuterVolumeSpecName: "inventory") pod "6d8473a8-d750-4ff5-84be-96088a3eea45" (UID: "6d8473a8-d750-4ff5-84be-96088a3eea45"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:30:02 crc kubenswrapper[4986]: I1203 13:30:02.764426 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d8473a8-d750-4ff5-84be-96088a3eea45-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6d8473a8-d750-4ff5-84be-96088a3eea45" (UID: "6d8473a8-d750-4ff5-84be-96088a3eea45"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:30:02 crc kubenswrapper[4986]: I1203 13:30:02.825992 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gspqm\" (UniqueName: \"kubernetes.io/projected/6d8473a8-d750-4ff5-84be-96088a3eea45-kube-api-access-gspqm\") on node \"crc\" DevicePath \"\"" Dec 03 13:30:02 crc kubenswrapper[4986]: I1203 13:30:02.826030 4986 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6d8473a8-d750-4ff5-84be-96088a3eea45-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 13:30:02 crc kubenswrapper[4986]: I1203 13:30:02.826042 4986 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6d8473a8-d750-4ff5-84be-96088a3eea45-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 13:30:03 crc kubenswrapper[4986]: I1203 13:30:03.078235 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mq9d2" 
event={"ID":"6d8473a8-d750-4ff5-84be-96088a3eea45","Type":"ContainerDied","Data":"0f34290816a305f3a4feb60330ccfb95995b3a8e490c6be99e938a2e3c380fc3"} Dec 03 13:30:03 crc kubenswrapper[4986]: I1203 13:30:03.078740 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f34290816a305f3a4feb60330ccfb95995b3a8e490c6be99e938a2e3c380fc3" Dec 03 13:30:03 crc kubenswrapper[4986]: I1203 13:30:03.078255 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mq9d2" Dec 03 13:30:03 crc kubenswrapper[4986]: I1203 13:30:03.082513 4986 generic.go:334] "Generic (PLEG): container finished" podID="aff8aecd-2002-457a-86e6-9fdb87097b4f" containerID="8c5c7ead76201c9e43159330a03af4a3d133bbfaa7f01ccb646d34f475765a22" exitCode=0 Dec 03 13:30:03 crc kubenswrapper[4986]: I1203 13:30:03.082593 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412810-v7zdw" event={"ID":"aff8aecd-2002-457a-86e6-9fdb87097b4f","Type":"ContainerDied","Data":"8c5c7ead76201c9e43159330a03af4a3d133bbfaa7f01ccb646d34f475765a22"} Dec 03 13:30:03 crc kubenswrapper[4986]: I1203 13:30:03.160803 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-gpv76"] Dec 03 13:30:03 crc kubenswrapper[4986]: E1203 13:30:03.161516 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d8473a8-d750-4ff5-84be-96088a3eea45" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 03 13:30:03 crc kubenswrapper[4986]: I1203 13:30:03.161535 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d8473a8-d750-4ff5-84be-96088a3eea45" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 03 13:30:03 crc kubenswrapper[4986]: I1203 13:30:03.162153 4986 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="6d8473a8-d750-4ff5-84be-96088a3eea45" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 03 13:30:03 crc kubenswrapper[4986]: I1203 13:30:03.163123 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gpv76" Dec 03 13:30:03 crc kubenswrapper[4986]: I1203 13:30:03.180164 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 13:30:03 crc kubenswrapper[4986]: I1203 13:30:03.180335 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 13:30:03 crc kubenswrapper[4986]: I1203 13:30:03.180414 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jr75v" Dec 03 13:30:03 crc kubenswrapper[4986]: I1203 13:30:03.183158 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 13:30:03 crc kubenswrapper[4986]: I1203 13:30:03.202963 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-gpv76"] Dec 03 13:30:03 crc kubenswrapper[4986]: I1203 13:30:03.336188 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5436a47f-49ff-42be-b125-e3d98fbce1e9-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-gpv76\" (UID: \"5436a47f-49ff-42be-b125-e3d98fbce1e9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gpv76" Dec 03 13:30:03 crc kubenswrapper[4986]: I1203 13:30:03.336256 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b7n8\" (UniqueName: \"kubernetes.io/projected/5436a47f-49ff-42be-b125-e3d98fbce1e9-kube-api-access-7b7n8\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-gpv76\" (UID: \"5436a47f-49ff-42be-b125-e3d98fbce1e9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gpv76" Dec 03 13:30:03 crc kubenswrapper[4986]: I1203 13:30:03.336418 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5436a47f-49ff-42be-b125-e3d98fbce1e9-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-gpv76\" (UID: \"5436a47f-49ff-42be-b125-e3d98fbce1e9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gpv76" Dec 03 13:30:03 crc kubenswrapper[4986]: I1203 13:30:03.438772 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5436a47f-49ff-42be-b125-e3d98fbce1e9-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-gpv76\" (UID: \"5436a47f-49ff-42be-b125-e3d98fbce1e9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gpv76" Dec 03 13:30:03 crc kubenswrapper[4986]: I1203 13:30:03.438843 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7b7n8\" (UniqueName: \"kubernetes.io/projected/5436a47f-49ff-42be-b125-e3d98fbce1e9-kube-api-access-7b7n8\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-gpv76\" (UID: \"5436a47f-49ff-42be-b125-e3d98fbce1e9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gpv76" Dec 03 13:30:03 crc kubenswrapper[4986]: I1203 13:30:03.438902 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5436a47f-49ff-42be-b125-e3d98fbce1e9-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-gpv76\" (UID: \"5436a47f-49ff-42be-b125-e3d98fbce1e9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gpv76" Dec 03 13:30:03 crc kubenswrapper[4986]: I1203 13:30:03.444786 4986 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5436a47f-49ff-42be-b125-e3d98fbce1e9-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-gpv76\" (UID: \"5436a47f-49ff-42be-b125-e3d98fbce1e9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gpv76" Dec 03 13:30:03 crc kubenswrapper[4986]: I1203 13:30:03.445023 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5436a47f-49ff-42be-b125-e3d98fbce1e9-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-gpv76\" (UID: \"5436a47f-49ff-42be-b125-e3d98fbce1e9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gpv76" Dec 03 13:30:03 crc kubenswrapper[4986]: I1203 13:30:03.468928 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7b7n8\" (UniqueName: \"kubernetes.io/projected/5436a47f-49ff-42be-b125-e3d98fbce1e9-kube-api-access-7b7n8\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-gpv76\" (UID: \"5436a47f-49ff-42be-b125-e3d98fbce1e9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gpv76" Dec 03 13:30:03 crc kubenswrapper[4986]: I1203 13:30:03.492973 4986 patch_prober.go:28] interesting pod/machine-config-daemon-xggpj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 13:30:03 crc kubenswrapper[4986]: I1203 13:30:03.493047 4986 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 13:30:03 crc kubenswrapper[4986]: I1203 
13:30:03.493109 4986 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" Dec 03 13:30:03 crc kubenswrapper[4986]: I1203 13:30:03.493955 4986 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8e39db31bf59f71d1f97b2921b1e19e08a20fea403196bf76d65bd1aeb3c113e"} pod="openshift-machine-config-operator/machine-config-daemon-xggpj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 13:30:03 crc kubenswrapper[4986]: I1203 13:30:03.494029 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" containerID="cri-o://8e39db31bf59f71d1f97b2921b1e19e08a20fea403196bf76d65bd1aeb3c113e" gracePeriod=600 Dec 03 13:30:03 crc kubenswrapper[4986]: I1203 13:30:03.520647 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gpv76" Dec 03 13:30:04 crc kubenswrapper[4986]: I1203 13:30:04.069799 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-gpv76"] Dec 03 13:30:04 crc kubenswrapper[4986]: W1203 13:30:04.074160 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5436a47f_49ff_42be_b125_e3d98fbce1e9.slice/crio-3c8604a0c31dc7c4255c86645208245905ef8ddec8bc0357586f72c3b50c1619 WatchSource:0}: Error finding container 3c8604a0c31dc7c4255c86645208245905ef8ddec8bc0357586f72c3b50c1619: Status 404 returned error can't find the container with id 3c8604a0c31dc7c4255c86645208245905ef8ddec8bc0357586f72c3b50c1619 Dec 03 13:30:04 crc kubenswrapper[4986]: I1203 13:30:04.091649 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gpv76" event={"ID":"5436a47f-49ff-42be-b125-e3d98fbce1e9","Type":"ContainerStarted","Data":"3c8604a0c31dc7c4255c86645208245905ef8ddec8bc0357586f72c3b50c1619"} Dec 03 13:30:04 crc kubenswrapper[4986]: I1203 13:30:04.094208 4986 generic.go:334] "Generic (PLEG): container finished" podID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerID="8e39db31bf59f71d1f97b2921b1e19e08a20fea403196bf76d65bd1aeb3c113e" exitCode=0 Dec 03 13:30:04 crc kubenswrapper[4986]: I1203 13:30:04.094390 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" event={"ID":"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c","Type":"ContainerDied","Data":"8e39db31bf59f71d1f97b2921b1e19e08a20fea403196bf76d65bd1aeb3c113e"} Dec 03 13:30:04 crc kubenswrapper[4986]: I1203 13:30:04.094423 4986 scope.go:117] "RemoveContainer" containerID="3ba053774b99510def9eb1096d5851d891d9961deee4a7966a8cfbf9b5fbf159" Dec 03 13:30:04 crc kubenswrapper[4986]: I1203 13:30:04.376576 4986 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412810-v7zdw" Dec 03 13:30:04 crc kubenswrapper[4986]: I1203 13:30:04.563660 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gt5fv\" (UniqueName: \"kubernetes.io/projected/aff8aecd-2002-457a-86e6-9fdb87097b4f-kube-api-access-gt5fv\") pod \"aff8aecd-2002-457a-86e6-9fdb87097b4f\" (UID: \"aff8aecd-2002-457a-86e6-9fdb87097b4f\") " Dec 03 13:30:04 crc kubenswrapper[4986]: I1203 13:30:04.563960 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aff8aecd-2002-457a-86e6-9fdb87097b4f-config-volume\") pod \"aff8aecd-2002-457a-86e6-9fdb87097b4f\" (UID: \"aff8aecd-2002-457a-86e6-9fdb87097b4f\") " Dec 03 13:30:04 crc kubenswrapper[4986]: I1203 13:30:04.564043 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aff8aecd-2002-457a-86e6-9fdb87097b4f-secret-volume\") pod \"aff8aecd-2002-457a-86e6-9fdb87097b4f\" (UID: \"aff8aecd-2002-457a-86e6-9fdb87097b4f\") " Dec 03 13:30:04 crc kubenswrapper[4986]: I1203 13:30:04.564734 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aff8aecd-2002-457a-86e6-9fdb87097b4f-config-volume" (OuterVolumeSpecName: "config-volume") pod "aff8aecd-2002-457a-86e6-9fdb87097b4f" (UID: "aff8aecd-2002-457a-86e6-9fdb87097b4f"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:30:04 crc kubenswrapper[4986]: I1203 13:30:04.570183 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aff8aecd-2002-457a-86e6-9fdb87097b4f-kube-api-access-gt5fv" (OuterVolumeSpecName: "kube-api-access-gt5fv") pod "aff8aecd-2002-457a-86e6-9fdb87097b4f" (UID: "aff8aecd-2002-457a-86e6-9fdb87097b4f"). InnerVolumeSpecName "kube-api-access-gt5fv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:30:04 crc kubenswrapper[4986]: I1203 13:30:04.570927 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aff8aecd-2002-457a-86e6-9fdb87097b4f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "aff8aecd-2002-457a-86e6-9fdb87097b4f" (UID: "aff8aecd-2002-457a-86e6-9fdb87097b4f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:30:04 crc kubenswrapper[4986]: I1203 13:30:04.668482 4986 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aff8aecd-2002-457a-86e6-9fdb87097b4f-config-volume\") on node \"crc\" DevicePath \"\"" Dec 03 13:30:04 crc kubenswrapper[4986]: I1203 13:30:04.668541 4986 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aff8aecd-2002-457a-86e6-9fdb87097b4f-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 03 13:30:04 crc kubenswrapper[4986]: I1203 13:30:04.668568 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gt5fv\" (UniqueName: \"kubernetes.io/projected/aff8aecd-2002-457a-86e6-9fdb87097b4f-kube-api-access-gt5fv\") on node \"crc\" DevicePath \"\"" Dec 03 13:30:05 crc kubenswrapper[4986]: I1203 13:30:05.108917 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" 
event={"ID":"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c","Type":"ContainerStarted","Data":"4069b648601fa33b27bda022b1a72851b773bff566b310c1ebd23beaecd8cde7"} Dec 03 13:30:05 crc kubenswrapper[4986]: I1203 13:30:05.111259 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412810-v7zdw" event={"ID":"aff8aecd-2002-457a-86e6-9fdb87097b4f","Type":"ContainerDied","Data":"e92bc18f1bc41e6fa731d08613b00bde4125d0169abde60d4f62c7bb45435502"} Dec 03 13:30:05 crc kubenswrapper[4986]: I1203 13:30:05.111314 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e92bc18f1bc41e6fa731d08613b00bde4125d0169abde60d4f62c7bb45435502" Dec 03 13:30:05 crc kubenswrapper[4986]: I1203 13:30:05.111362 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412810-v7zdw" Dec 03 13:30:05 crc kubenswrapper[4986]: I1203 13:30:05.470747 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412765-29wpf"] Dec 03 13:30:05 crc kubenswrapper[4986]: I1203 13:30:05.482334 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412765-29wpf"] Dec 03 13:30:06 crc kubenswrapper[4986]: I1203 13:30:06.142625 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gpv76" event={"ID":"5436a47f-49ff-42be-b125-e3d98fbce1e9","Type":"ContainerStarted","Data":"1521aa9cf2a0583cf6d83c15c869c55fb51641b95b9815bb3d1d0b16a1b33775"} Dec 03 13:30:06 crc kubenswrapper[4986]: I1203 13:30:06.180212 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gpv76" podStartSLOduration=2.218417355 podStartE2EDuration="3.180191619s" podCreationTimestamp="2025-12-03 13:30:03 +0000 UTC" 
firstStartedPulling="2025-12-03 13:30:04.078838908 +0000 UTC m=+2063.545270099" lastFinishedPulling="2025-12-03 13:30:05.040613172 +0000 UTC m=+2064.507044363" observedRunningTime="2025-12-03 13:30:06.176601161 +0000 UTC m=+2065.643032352" watchObservedRunningTime="2025-12-03 13:30:06.180191619 +0000 UTC m=+2065.646622810" Dec 03 13:30:06 crc kubenswrapper[4986]: I1203 13:30:06.962909 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c102da0-9cf4-4521-97fe-3153aa47a43e" path="/var/lib/kubelet/pods/0c102da0-9cf4-4521-97fe-3153aa47a43e/volumes" Dec 03 13:30:14 crc kubenswrapper[4986]: I1203 13:30:14.031322 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xg69r"] Dec 03 13:30:14 crc kubenswrapper[4986]: I1203 13:30:14.039921 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xg69r"] Dec 03 13:30:14 crc kubenswrapper[4986]: I1203 13:30:14.957521 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25ce0dda-434c-4a77-a144-a6956f2e0407" path="/var/lib/kubelet/pods/25ce0dda-434c-4a77-a144-a6956f2e0407/volumes" Dec 03 13:30:21 crc kubenswrapper[4986]: I1203 13:30:21.759726 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lnk6p"] Dec 03 13:30:21 crc kubenswrapper[4986]: E1203 13:30:21.761370 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aff8aecd-2002-457a-86e6-9fdb87097b4f" containerName="collect-profiles" Dec 03 13:30:21 crc kubenswrapper[4986]: I1203 13:30:21.761411 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="aff8aecd-2002-457a-86e6-9fdb87097b4f" containerName="collect-profiles" Dec 03 13:30:21 crc kubenswrapper[4986]: I1203 13:30:21.761694 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="aff8aecd-2002-457a-86e6-9fdb87097b4f" containerName="collect-profiles" Dec 03 13:30:21 crc kubenswrapper[4986]: I1203 
13:30:21.764155 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lnk6p" Dec 03 13:30:21 crc kubenswrapper[4986]: I1203 13:30:21.772762 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lnk6p"] Dec 03 13:30:21 crc kubenswrapper[4986]: I1203 13:30:21.947408 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8bee8b3-4740-48b7-86d2-5c621389c577-utilities\") pod \"certified-operators-lnk6p\" (UID: \"a8bee8b3-4740-48b7-86d2-5c621389c577\") " pod="openshift-marketplace/certified-operators-lnk6p" Dec 03 13:30:21 crc kubenswrapper[4986]: I1203 13:30:21.947456 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8bee8b3-4740-48b7-86d2-5c621389c577-catalog-content\") pod \"certified-operators-lnk6p\" (UID: \"a8bee8b3-4740-48b7-86d2-5c621389c577\") " pod="openshift-marketplace/certified-operators-lnk6p" Dec 03 13:30:21 crc kubenswrapper[4986]: I1203 13:30:21.947626 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fx42\" (UniqueName: \"kubernetes.io/projected/a8bee8b3-4740-48b7-86d2-5c621389c577-kube-api-access-5fx42\") pod \"certified-operators-lnk6p\" (UID: \"a8bee8b3-4740-48b7-86d2-5c621389c577\") " pod="openshift-marketplace/certified-operators-lnk6p" Dec 03 13:30:22 crc kubenswrapper[4986]: I1203 13:30:22.048948 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8bee8b3-4740-48b7-86d2-5c621389c577-utilities\") pod \"certified-operators-lnk6p\" (UID: \"a8bee8b3-4740-48b7-86d2-5c621389c577\") " pod="openshift-marketplace/certified-operators-lnk6p" Dec 03 13:30:22 crc kubenswrapper[4986]: I1203 
13:30:22.048992 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8bee8b3-4740-48b7-86d2-5c621389c577-catalog-content\") pod \"certified-operators-lnk6p\" (UID: \"a8bee8b3-4740-48b7-86d2-5c621389c577\") " pod="openshift-marketplace/certified-operators-lnk6p" Dec 03 13:30:22 crc kubenswrapper[4986]: I1203 13:30:22.049123 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fx42\" (UniqueName: \"kubernetes.io/projected/a8bee8b3-4740-48b7-86d2-5c621389c577-kube-api-access-5fx42\") pod \"certified-operators-lnk6p\" (UID: \"a8bee8b3-4740-48b7-86d2-5c621389c577\") " pod="openshift-marketplace/certified-operators-lnk6p" Dec 03 13:30:22 crc kubenswrapper[4986]: I1203 13:30:22.049605 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8bee8b3-4740-48b7-86d2-5c621389c577-utilities\") pod \"certified-operators-lnk6p\" (UID: \"a8bee8b3-4740-48b7-86d2-5c621389c577\") " pod="openshift-marketplace/certified-operators-lnk6p" Dec 03 13:30:22 crc kubenswrapper[4986]: I1203 13:30:22.049679 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8bee8b3-4740-48b7-86d2-5c621389c577-catalog-content\") pod \"certified-operators-lnk6p\" (UID: \"a8bee8b3-4740-48b7-86d2-5c621389c577\") " pod="openshift-marketplace/certified-operators-lnk6p" Dec 03 13:30:22 crc kubenswrapper[4986]: I1203 13:30:22.077380 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fx42\" (UniqueName: \"kubernetes.io/projected/a8bee8b3-4740-48b7-86d2-5c621389c577-kube-api-access-5fx42\") pod \"certified-operators-lnk6p\" (UID: \"a8bee8b3-4740-48b7-86d2-5c621389c577\") " pod="openshift-marketplace/certified-operators-lnk6p" Dec 03 13:30:22 crc kubenswrapper[4986]: I1203 13:30:22.095081 4986 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lnk6p" Dec 03 13:30:22 crc kubenswrapper[4986]: I1203 13:30:22.755042 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lnk6p"] Dec 03 13:30:23 crc kubenswrapper[4986]: I1203 13:30:23.297786 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lnk6p" event={"ID":"a8bee8b3-4740-48b7-86d2-5c621389c577","Type":"ContainerStarted","Data":"7614be6af8d9c4edd6d0f5319e85af7dee50629455a8a6e87562afe852ca4c57"} Dec 03 13:30:24 crc kubenswrapper[4986]: I1203 13:30:24.309587 4986 generic.go:334] "Generic (PLEG): container finished" podID="a8bee8b3-4740-48b7-86d2-5c621389c577" containerID="f6cb268bff79d9a5ea2bced76e9b66846035e997dec50de46734543e9fcf7ac2" exitCode=0 Dec 03 13:30:24 crc kubenswrapper[4986]: I1203 13:30:24.309712 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lnk6p" event={"ID":"a8bee8b3-4740-48b7-86d2-5c621389c577","Type":"ContainerDied","Data":"f6cb268bff79d9a5ea2bced76e9b66846035e997dec50de46734543e9fcf7ac2"} Dec 03 13:30:27 crc kubenswrapper[4986]: I1203 13:30:27.363458 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lnk6p" event={"ID":"a8bee8b3-4740-48b7-86d2-5c621389c577","Type":"ContainerStarted","Data":"93ae57c6bace61a7816945df2efcbd89c23e6582d3cc27ef7bf7fe740f35a072"} Dec 03 13:30:28 crc kubenswrapper[4986]: I1203 13:30:28.375570 4986 generic.go:334] "Generic (PLEG): container finished" podID="a8bee8b3-4740-48b7-86d2-5c621389c577" containerID="93ae57c6bace61a7816945df2efcbd89c23e6582d3cc27ef7bf7fe740f35a072" exitCode=0 Dec 03 13:30:28 crc kubenswrapper[4986]: I1203 13:30:28.375694 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lnk6p" 
event={"ID":"a8bee8b3-4740-48b7-86d2-5c621389c577","Type":"ContainerDied","Data":"93ae57c6bace61a7816945df2efcbd89c23e6582d3cc27ef7bf7fe740f35a072"} Dec 03 13:30:30 crc kubenswrapper[4986]: I1203 13:30:30.393345 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lnk6p" event={"ID":"a8bee8b3-4740-48b7-86d2-5c621389c577","Type":"ContainerStarted","Data":"5ab8c67515a9c1c2c6eb1e685d8643de19bae666fa589773ac077c4f62003417"} Dec 03 13:30:30 crc kubenswrapper[4986]: I1203 13:30:30.411470 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lnk6p" podStartSLOduration=4.512980227 podStartE2EDuration="9.411454336s" podCreationTimestamp="2025-12-03 13:30:21 +0000 UTC" firstStartedPulling="2025-12-03 13:30:24.314113161 +0000 UTC m=+2083.780544352" lastFinishedPulling="2025-12-03 13:30:29.21258727 +0000 UTC m=+2088.679018461" observedRunningTime="2025-12-03 13:30:30.4079559 +0000 UTC m=+2089.874387091" watchObservedRunningTime="2025-12-03 13:30:30.411454336 +0000 UTC m=+2089.877885527" Dec 03 13:30:32 crc kubenswrapper[4986]: I1203 13:30:32.049781 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5z7tp"] Dec 03 13:30:32 crc kubenswrapper[4986]: I1203 13:30:32.064073 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-bxnmh"] Dec 03 13:30:32 crc kubenswrapper[4986]: I1203 13:30:32.067476 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-bxnmh"] Dec 03 13:30:32 crc kubenswrapper[4986]: I1203 13:30:32.075671 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5z7tp"] Dec 03 13:30:32 crc kubenswrapper[4986]: I1203 13:30:32.095304 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lnk6p" Dec 03 13:30:32 crc 
kubenswrapper[4986]: I1203 13:30:32.095347 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lnk6p" Dec 03 13:30:32 crc kubenswrapper[4986]: I1203 13:30:32.142577 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lnk6p" Dec 03 13:30:32 crc kubenswrapper[4986]: I1203 13:30:32.968717 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3" path="/var/lib/kubelet/pods/5e6b4f09-32ab-4e2d-95e1-69b0abd29fe3/volumes" Dec 03 13:30:32 crc kubenswrapper[4986]: I1203 13:30:32.970224 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fd749d1-c0e5-4462-a7d0-62586902c0b7" path="/var/lib/kubelet/pods/7fd749d1-c0e5-4462-a7d0-62586902c0b7/volumes" Dec 03 13:30:42 crc kubenswrapper[4986]: I1203 13:30:42.142058 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lnk6p" Dec 03 13:30:42 crc kubenswrapper[4986]: I1203 13:30:42.191891 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lnk6p"] Dec 03 13:30:42 crc kubenswrapper[4986]: I1203 13:30:42.526750 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lnk6p" podUID="a8bee8b3-4740-48b7-86d2-5c621389c577" containerName="registry-server" containerID="cri-o://5ab8c67515a9c1c2c6eb1e685d8643de19bae666fa589773ac077c4f62003417" gracePeriod=2 Dec 03 13:30:43 crc kubenswrapper[4986]: I1203 13:30:43.020074 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lnk6p" Dec 03 13:30:43 crc kubenswrapper[4986]: I1203 13:30:43.055117 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8bee8b3-4740-48b7-86d2-5c621389c577-catalog-content\") pod \"a8bee8b3-4740-48b7-86d2-5c621389c577\" (UID: \"a8bee8b3-4740-48b7-86d2-5c621389c577\") " Dec 03 13:30:43 crc kubenswrapper[4986]: I1203 13:30:43.055261 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fx42\" (UniqueName: \"kubernetes.io/projected/a8bee8b3-4740-48b7-86d2-5c621389c577-kube-api-access-5fx42\") pod \"a8bee8b3-4740-48b7-86d2-5c621389c577\" (UID: \"a8bee8b3-4740-48b7-86d2-5c621389c577\") " Dec 03 13:30:43 crc kubenswrapper[4986]: I1203 13:30:43.055319 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8bee8b3-4740-48b7-86d2-5c621389c577-utilities\") pod \"a8bee8b3-4740-48b7-86d2-5c621389c577\" (UID: \"a8bee8b3-4740-48b7-86d2-5c621389c577\") " Dec 03 13:30:43 crc kubenswrapper[4986]: I1203 13:30:43.056760 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8bee8b3-4740-48b7-86d2-5c621389c577-utilities" (OuterVolumeSpecName: "utilities") pod "a8bee8b3-4740-48b7-86d2-5c621389c577" (UID: "a8bee8b3-4740-48b7-86d2-5c621389c577"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:30:43 crc kubenswrapper[4986]: I1203 13:30:43.061992 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8bee8b3-4740-48b7-86d2-5c621389c577-kube-api-access-5fx42" (OuterVolumeSpecName: "kube-api-access-5fx42") pod "a8bee8b3-4740-48b7-86d2-5c621389c577" (UID: "a8bee8b3-4740-48b7-86d2-5c621389c577"). InnerVolumeSpecName "kube-api-access-5fx42". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:30:43 crc kubenswrapper[4986]: I1203 13:30:43.124979 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8bee8b3-4740-48b7-86d2-5c621389c577-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a8bee8b3-4740-48b7-86d2-5c621389c577" (UID: "a8bee8b3-4740-48b7-86d2-5c621389c577"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:30:43 crc kubenswrapper[4986]: I1203 13:30:43.157057 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fx42\" (UniqueName: \"kubernetes.io/projected/a8bee8b3-4740-48b7-86d2-5c621389c577-kube-api-access-5fx42\") on node \"crc\" DevicePath \"\"" Dec 03 13:30:43 crc kubenswrapper[4986]: I1203 13:30:43.157091 4986 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8bee8b3-4740-48b7-86d2-5c621389c577-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 13:30:43 crc kubenswrapper[4986]: I1203 13:30:43.157104 4986 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8bee8b3-4740-48b7-86d2-5c621389c577-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 13:30:43 crc kubenswrapper[4986]: I1203 13:30:43.539382 4986 generic.go:334] "Generic (PLEG): container finished" podID="a8bee8b3-4740-48b7-86d2-5c621389c577" containerID="5ab8c67515a9c1c2c6eb1e685d8643de19bae666fa589773ac077c4f62003417" exitCode=0 Dec 03 13:30:43 crc kubenswrapper[4986]: I1203 13:30:43.539423 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lnk6p" Dec 03 13:30:43 crc kubenswrapper[4986]: I1203 13:30:43.539430 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lnk6p" event={"ID":"a8bee8b3-4740-48b7-86d2-5c621389c577","Type":"ContainerDied","Data":"5ab8c67515a9c1c2c6eb1e685d8643de19bae666fa589773ac077c4f62003417"} Dec 03 13:30:43 crc kubenswrapper[4986]: I1203 13:30:43.539516 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lnk6p" event={"ID":"a8bee8b3-4740-48b7-86d2-5c621389c577","Type":"ContainerDied","Data":"7614be6af8d9c4edd6d0f5319e85af7dee50629455a8a6e87562afe852ca4c57"} Dec 03 13:30:43 crc kubenswrapper[4986]: I1203 13:30:43.539569 4986 scope.go:117] "RemoveContainer" containerID="5ab8c67515a9c1c2c6eb1e685d8643de19bae666fa589773ac077c4f62003417" Dec 03 13:30:43 crc kubenswrapper[4986]: I1203 13:30:43.571605 4986 scope.go:117] "RemoveContainer" containerID="93ae57c6bace61a7816945df2efcbd89c23e6582d3cc27ef7bf7fe740f35a072" Dec 03 13:30:43 crc kubenswrapper[4986]: I1203 13:30:43.575622 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lnk6p"] Dec 03 13:30:43 crc kubenswrapper[4986]: I1203 13:30:43.593937 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lnk6p"] Dec 03 13:30:43 crc kubenswrapper[4986]: I1203 13:30:43.607034 4986 scope.go:117] "RemoveContainer" containerID="f6cb268bff79d9a5ea2bced76e9b66846035e997dec50de46734543e9fcf7ac2" Dec 03 13:30:43 crc kubenswrapper[4986]: I1203 13:30:43.643782 4986 scope.go:117] "RemoveContainer" containerID="5ab8c67515a9c1c2c6eb1e685d8643de19bae666fa589773ac077c4f62003417" Dec 03 13:30:43 crc kubenswrapper[4986]: E1203 13:30:43.644554 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"5ab8c67515a9c1c2c6eb1e685d8643de19bae666fa589773ac077c4f62003417\": container with ID starting with 5ab8c67515a9c1c2c6eb1e685d8643de19bae666fa589773ac077c4f62003417 not found: ID does not exist" containerID="5ab8c67515a9c1c2c6eb1e685d8643de19bae666fa589773ac077c4f62003417" Dec 03 13:30:43 crc kubenswrapper[4986]: I1203 13:30:43.644654 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ab8c67515a9c1c2c6eb1e685d8643de19bae666fa589773ac077c4f62003417"} err="failed to get container status \"5ab8c67515a9c1c2c6eb1e685d8643de19bae666fa589773ac077c4f62003417\": rpc error: code = NotFound desc = could not find container \"5ab8c67515a9c1c2c6eb1e685d8643de19bae666fa589773ac077c4f62003417\": container with ID starting with 5ab8c67515a9c1c2c6eb1e685d8643de19bae666fa589773ac077c4f62003417 not found: ID does not exist" Dec 03 13:30:43 crc kubenswrapper[4986]: I1203 13:30:43.644690 4986 scope.go:117] "RemoveContainer" containerID="93ae57c6bace61a7816945df2efcbd89c23e6582d3cc27ef7bf7fe740f35a072" Dec 03 13:30:43 crc kubenswrapper[4986]: E1203 13:30:43.645684 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93ae57c6bace61a7816945df2efcbd89c23e6582d3cc27ef7bf7fe740f35a072\": container with ID starting with 93ae57c6bace61a7816945df2efcbd89c23e6582d3cc27ef7bf7fe740f35a072 not found: ID does not exist" containerID="93ae57c6bace61a7816945df2efcbd89c23e6582d3cc27ef7bf7fe740f35a072" Dec 03 13:30:43 crc kubenswrapper[4986]: I1203 13:30:43.645764 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93ae57c6bace61a7816945df2efcbd89c23e6582d3cc27ef7bf7fe740f35a072"} err="failed to get container status \"93ae57c6bace61a7816945df2efcbd89c23e6582d3cc27ef7bf7fe740f35a072\": rpc error: code = NotFound desc = could not find container \"93ae57c6bace61a7816945df2efcbd89c23e6582d3cc27ef7bf7fe740f35a072\": container with ID 
starting with 93ae57c6bace61a7816945df2efcbd89c23e6582d3cc27ef7bf7fe740f35a072 not found: ID does not exist" Dec 03 13:30:43 crc kubenswrapper[4986]: I1203 13:30:43.645818 4986 scope.go:117] "RemoveContainer" containerID="f6cb268bff79d9a5ea2bced76e9b66846035e997dec50de46734543e9fcf7ac2" Dec 03 13:30:43 crc kubenswrapper[4986]: E1203 13:30:43.646457 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6cb268bff79d9a5ea2bced76e9b66846035e997dec50de46734543e9fcf7ac2\": container with ID starting with f6cb268bff79d9a5ea2bced76e9b66846035e997dec50de46734543e9fcf7ac2 not found: ID does not exist" containerID="f6cb268bff79d9a5ea2bced76e9b66846035e997dec50de46734543e9fcf7ac2" Dec 03 13:30:43 crc kubenswrapper[4986]: I1203 13:30:43.646512 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6cb268bff79d9a5ea2bced76e9b66846035e997dec50de46734543e9fcf7ac2"} err="failed to get container status \"f6cb268bff79d9a5ea2bced76e9b66846035e997dec50de46734543e9fcf7ac2\": rpc error: code = NotFound desc = could not find container \"f6cb268bff79d9a5ea2bced76e9b66846035e997dec50de46734543e9fcf7ac2\": container with ID starting with f6cb268bff79d9a5ea2bced76e9b66846035e997dec50de46734543e9fcf7ac2 not found: ID does not exist" Dec 03 13:30:44 crc kubenswrapper[4986]: I1203 13:30:44.962740 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8bee8b3-4740-48b7-86d2-5c621389c577" path="/var/lib/kubelet/pods/a8bee8b3-4740-48b7-86d2-5c621389c577/volumes" Dec 03 13:30:46 crc kubenswrapper[4986]: I1203 13:30:46.588881 4986 generic.go:334] "Generic (PLEG): container finished" podID="5436a47f-49ff-42be-b125-e3d98fbce1e9" containerID="1521aa9cf2a0583cf6d83c15c869c55fb51641b95b9815bb3d1d0b16a1b33775" exitCode=0 Dec 03 13:30:46 crc kubenswrapper[4986]: I1203 13:30:46.588986 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gpv76" event={"ID":"5436a47f-49ff-42be-b125-e3d98fbce1e9","Type":"ContainerDied","Data":"1521aa9cf2a0583cf6d83c15c869c55fb51641b95b9815bb3d1d0b16a1b33775"} Dec 03 13:30:48 crc kubenswrapper[4986]: I1203 13:30:48.048976 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gpv76" Dec 03 13:30:48 crc kubenswrapper[4986]: I1203 13:30:48.229263 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5436a47f-49ff-42be-b125-e3d98fbce1e9-inventory\") pod \"5436a47f-49ff-42be-b125-e3d98fbce1e9\" (UID: \"5436a47f-49ff-42be-b125-e3d98fbce1e9\") " Dec 03 13:30:48 crc kubenswrapper[4986]: I1203 13:30:48.229389 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7b7n8\" (UniqueName: \"kubernetes.io/projected/5436a47f-49ff-42be-b125-e3d98fbce1e9-kube-api-access-7b7n8\") pod \"5436a47f-49ff-42be-b125-e3d98fbce1e9\" (UID: \"5436a47f-49ff-42be-b125-e3d98fbce1e9\") " Dec 03 13:30:48 crc kubenswrapper[4986]: I1203 13:30:48.229482 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5436a47f-49ff-42be-b125-e3d98fbce1e9-ssh-key\") pod \"5436a47f-49ff-42be-b125-e3d98fbce1e9\" (UID: \"5436a47f-49ff-42be-b125-e3d98fbce1e9\") " Dec 03 13:30:48 crc kubenswrapper[4986]: I1203 13:30:48.235569 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5436a47f-49ff-42be-b125-e3d98fbce1e9-kube-api-access-7b7n8" (OuterVolumeSpecName: "kube-api-access-7b7n8") pod "5436a47f-49ff-42be-b125-e3d98fbce1e9" (UID: "5436a47f-49ff-42be-b125-e3d98fbce1e9"). InnerVolumeSpecName "kube-api-access-7b7n8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:30:48 crc kubenswrapper[4986]: I1203 13:30:48.262188 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5436a47f-49ff-42be-b125-e3d98fbce1e9-inventory" (OuterVolumeSpecName: "inventory") pod "5436a47f-49ff-42be-b125-e3d98fbce1e9" (UID: "5436a47f-49ff-42be-b125-e3d98fbce1e9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:30:48 crc kubenswrapper[4986]: I1203 13:30:48.265253 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5436a47f-49ff-42be-b125-e3d98fbce1e9-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5436a47f-49ff-42be-b125-e3d98fbce1e9" (UID: "5436a47f-49ff-42be-b125-e3d98fbce1e9"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:30:48 crc kubenswrapper[4986]: I1203 13:30:48.332008 4986 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5436a47f-49ff-42be-b125-e3d98fbce1e9-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 13:30:48 crc kubenswrapper[4986]: I1203 13:30:48.332060 4986 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5436a47f-49ff-42be-b125-e3d98fbce1e9-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 13:30:48 crc kubenswrapper[4986]: I1203 13:30:48.332079 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7b7n8\" (UniqueName: \"kubernetes.io/projected/5436a47f-49ff-42be-b125-e3d98fbce1e9-kube-api-access-7b7n8\") on node \"crc\" DevicePath \"\"" Dec 03 13:30:48 crc kubenswrapper[4986]: I1203 13:30:48.613554 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gpv76" 
event={"ID":"5436a47f-49ff-42be-b125-e3d98fbce1e9","Type":"ContainerDied","Data":"3c8604a0c31dc7c4255c86645208245905ef8ddec8bc0357586f72c3b50c1619"} Dec 03 13:30:48 crc kubenswrapper[4986]: I1203 13:30:48.613732 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c8604a0c31dc7c4255c86645208245905ef8ddec8bc0357586f72c3b50c1619" Dec 03 13:30:48 crc kubenswrapper[4986]: I1203 13:30:48.613631 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gpv76" Dec 03 13:30:48 crc kubenswrapper[4986]: I1203 13:30:48.792491 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-plm9h"] Dec 03 13:30:48 crc kubenswrapper[4986]: E1203 13:30:48.792980 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8bee8b3-4740-48b7-86d2-5c621389c577" containerName="extract-utilities" Dec 03 13:30:48 crc kubenswrapper[4986]: I1203 13:30:48.793012 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8bee8b3-4740-48b7-86d2-5c621389c577" containerName="extract-utilities" Dec 03 13:30:48 crc kubenswrapper[4986]: E1203 13:30:48.793029 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8bee8b3-4740-48b7-86d2-5c621389c577" containerName="extract-content" Dec 03 13:30:48 crc kubenswrapper[4986]: I1203 13:30:48.793040 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8bee8b3-4740-48b7-86d2-5c621389c577" containerName="extract-content" Dec 03 13:30:48 crc kubenswrapper[4986]: E1203 13:30:48.793061 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5436a47f-49ff-42be-b125-e3d98fbce1e9" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 03 13:30:48 crc kubenswrapper[4986]: I1203 13:30:48.793073 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="5436a47f-49ff-42be-b125-e3d98fbce1e9" 
containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 03 13:30:48 crc kubenswrapper[4986]: E1203 13:30:48.793095 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8bee8b3-4740-48b7-86d2-5c621389c577" containerName="registry-server" Dec 03 13:30:48 crc kubenswrapper[4986]: I1203 13:30:48.793106 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8bee8b3-4740-48b7-86d2-5c621389c577" containerName="registry-server" Dec 03 13:30:48 crc kubenswrapper[4986]: I1203 13:30:48.793445 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8bee8b3-4740-48b7-86d2-5c621389c577" containerName="registry-server" Dec 03 13:30:48 crc kubenswrapper[4986]: I1203 13:30:48.793480 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="5436a47f-49ff-42be-b125-e3d98fbce1e9" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 03 13:30:48 crc kubenswrapper[4986]: I1203 13:30:48.794365 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-plm9h" Dec 03 13:30:48 crc kubenswrapper[4986]: I1203 13:30:48.796565 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jr75v" Dec 03 13:30:48 crc kubenswrapper[4986]: I1203 13:30:48.796735 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 13:30:48 crc kubenswrapper[4986]: I1203 13:30:48.796969 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 13:30:48 crc kubenswrapper[4986]: I1203 13:30:48.797487 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 13:30:48 crc kubenswrapper[4986]: I1203 13:30:48.810114 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-plm9h"] Dec 03 13:30:48 crc kubenswrapper[4986]: I1203 13:30:48.841102 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwvvk\" (UniqueName: \"kubernetes.io/projected/1d933f9e-e44c-4d6b-ab8b-0186020b5b28-kube-api-access-nwvvk\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-plm9h\" (UID: \"1d933f9e-e44c-4d6b-ab8b-0186020b5b28\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-plm9h" Dec 03 13:30:48 crc kubenswrapper[4986]: I1203 13:30:48.841272 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d933f9e-e44c-4d6b-ab8b-0186020b5b28-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-plm9h\" (UID: \"1d933f9e-e44c-4d6b-ab8b-0186020b5b28\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-plm9h" Dec 03 13:30:48 crc kubenswrapper[4986]: I1203 13:30:48.841507 4986 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1d933f9e-e44c-4d6b-ab8b-0186020b5b28-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-plm9h\" (UID: \"1d933f9e-e44c-4d6b-ab8b-0186020b5b28\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-plm9h" Dec 03 13:30:48 crc kubenswrapper[4986]: I1203 13:30:48.943183 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d933f9e-e44c-4d6b-ab8b-0186020b5b28-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-plm9h\" (UID: \"1d933f9e-e44c-4d6b-ab8b-0186020b5b28\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-plm9h" Dec 03 13:30:48 crc kubenswrapper[4986]: I1203 13:30:48.943324 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1d933f9e-e44c-4d6b-ab8b-0186020b5b28-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-plm9h\" (UID: \"1d933f9e-e44c-4d6b-ab8b-0186020b5b28\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-plm9h" Dec 03 13:30:48 crc kubenswrapper[4986]: I1203 13:30:48.943388 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwvvk\" (UniqueName: \"kubernetes.io/projected/1d933f9e-e44c-4d6b-ab8b-0186020b5b28-kube-api-access-nwvvk\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-plm9h\" (UID: \"1d933f9e-e44c-4d6b-ab8b-0186020b5b28\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-plm9h" Dec 03 13:30:48 crc kubenswrapper[4986]: I1203 13:30:48.947760 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d933f9e-e44c-4d6b-ab8b-0186020b5b28-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-plm9h\" (UID: 
\"1d933f9e-e44c-4d6b-ab8b-0186020b5b28\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-plm9h" Dec 03 13:30:48 crc kubenswrapper[4986]: I1203 13:30:48.947782 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1d933f9e-e44c-4d6b-ab8b-0186020b5b28-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-plm9h\" (UID: \"1d933f9e-e44c-4d6b-ab8b-0186020b5b28\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-plm9h" Dec 03 13:30:48 crc kubenswrapper[4986]: I1203 13:30:48.969915 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwvvk\" (UniqueName: \"kubernetes.io/projected/1d933f9e-e44c-4d6b-ab8b-0186020b5b28-kube-api-access-nwvvk\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-plm9h\" (UID: \"1d933f9e-e44c-4d6b-ab8b-0186020b5b28\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-plm9h" Dec 03 13:30:49 crc kubenswrapper[4986]: I1203 13:30:49.116199 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-plm9h" Dec 03 13:30:49 crc kubenswrapper[4986]: I1203 13:30:49.656129 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-plm9h"] Dec 03 13:30:50 crc kubenswrapper[4986]: I1203 13:30:50.632428 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-plm9h" event={"ID":"1d933f9e-e44c-4d6b-ab8b-0186020b5b28","Type":"ContainerStarted","Data":"95bd64f5a2728ff46c10a75543942be946792cad0c0e0eeced7a0721a2338325"} Dec 03 13:30:50 crc kubenswrapper[4986]: I1203 13:30:50.632814 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-plm9h" event={"ID":"1d933f9e-e44c-4d6b-ab8b-0186020b5b28","Type":"ContainerStarted","Data":"fd6113ca8eb915bcca9129cebc8c7d299226d795ac53507bc2ad7a1350210762"} Dec 03 13:30:50 crc kubenswrapper[4986]: I1203 13:30:50.650851 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-plm9h" podStartSLOduration=2.093898881 podStartE2EDuration="2.650831681s" podCreationTimestamp="2025-12-03 13:30:48 +0000 UTC" firstStartedPulling="2025-12-03 13:30:49.660412821 +0000 UTC m=+2109.126844012" lastFinishedPulling="2025-12-03 13:30:50.217345621 +0000 UTC m=+2109.683776812" observedRunningTime="2025-12-03 13:30:50.649821603 +0000 UTC m=+2110.116252794" watchObservedRunningTime="2025-12-03 13:30:50.650831681 +0000 UTC m=+2110.117262872" Dec 03 13:30:56 crc kubenswrapper[4986]: I1203 13:30:56.949918 4986 scope.go:117] "RemoveContainer" containerID="e9e5afb51ce3239b75fca2b2016d9fd12ca14cb8e23518ccf5f77e5f63265b67" Dec 03 13:30:56 crc kubenswrapper[4986]: I1203 13:30:56.993224 4986 scope.go:117] "RemoveContainer" containerID="39f52a6451ce714e952e4e69311ae115a7b27864d53b175c0ff574328f8ae3fb" Dec 03 13:30:57 crc 
kubenswrapper[4986]: I1203 13:30:57.035991 4986 scope.go:117] "RemoveContainer" containerID="068ac7536234da0a7a4f4b655aea2c95e576ab15ae3a6933a9e16f518c503a4b" Dec 03 13:30:57 crc kubenswrapper[4986]: I1203 13:30:57.081260 4986 scope.go:117] "RemoveContainer" containerID="cb33e8b1845714e4103b5ef1ded54eac8e97493881d1aa910f7df5593c874962" Dec 03 13:30:59 crc kubenswrapper[4986]: I1203 13:30:59.478532 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wnxzg"] Dec 03 13:30:59 crc kubenswrapper[4986]: I1203 13:30:59.493057 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wnxzg" Dec 03 13:30:59 crc kubenswrapper[4986]: I1203 13:30:59.498251 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wnxzg"] Dec 03 13:30:59 crc kubenswrapper[4986]: I1203 13:30:59.664844 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhjj4\" (UniqueName: \"kubernetes.io/projected/d7a0acb7-8724-4f04-ad48-841c36c8f5f3-kube-api-access-dhjj4\") pod \"redhat-marketplace-wnxzg\" (UID: \"d7a0acb7-8724-4f04-ad48-841c36c8f5f3\") " pod="openshift-marketplace/redhat-marketplace-wnxzg" Dec 03 13:30:59 crc kubenswrapper[4986]: I1203 13:30:59.665055 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7a0acb7-8724-4f04-ad48-841c36c8f5f3-catalog-content\") pod \"redhat-marketplace-wnxzg\" (UID: \"d7a0acb7-8724-4f04-ad48-841c36c8f5f3\") " pod="openshift-marketplace/redhat-marketplace-wnxzg" Dec 03 13:30:59 crc kubenswrapper[4986]: I1203 13:30:59.665109 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7a0acb7-8724-4f04-ad48-841c36c8f5f3-utilities\") pod 
\"redhat-marketplace-wnxzg\" (UID: \"d7a0acb7-8724-4f04-ad48-841c36c8f5f3\") " pod="openshift-marketplace/redhat-marketplace-wnxzg" Dec 03 13:30:59 crc kubenswrapper[4986]: I1203 13:30:59.767696 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7a0acb7-8724-4f04-ad48-841c36c8f5f3-catalog-content\") pod \"redhat-marketplace-wnxzg\" (UID: \"d7a0acb7-8724-4f04-ad48-841c36c8f5f3\") " pod="openshift-marketplace/redhat-marketplace-wnxzg" Dec 03 13:30:59 crc kubenswrapper[4986]: I1203 13:30:59.767749 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7a0acb7-8724-4f04-ad48-841c36c8f5f3-utilities\") pod \"redhat-marketplace-wnxzg\" (UID: \"d7a0acb7-8724-4f04-ad48-841c36c8f5f3\") " pod="openshift-marketplace/redhat-marketplace-wnxzg" Dec 03 13:30:59 crc kubenswrapper[4986]: I1203 13:30:59.767896 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhjj4\" (UniqueName: \"kubernetes.io/projected/d7a0acb7-8724-4f04-ad48-841c36c8f5f3-kube-api-access-dhjj4\") pod \"redhat-marketplace-wnxzg\" (UID: \"d7a0acb7-8724-4f04-ad48-841c36c8f5f3\") " pod="openshift-marketplace/redhat-marketplace-wnxzg" Dec 03 13:30:59 crc kubenswrapper[4986]: I1203 13:30:59.768244 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7a0acb7-8724-4f04-ad48-841c36c8f5f3-catalog-content\") pod \"redhat-marketplace-wnxzg\" (UID: \"d7a0acb7-8724-4f04-ad48-841c36c8f5f3\") " pod="openshift-marketplace/redhat-marketplace-wnxzg" Dec 03 13:30:59 crc kubenswrapper[4986]: I1203 13:30:59.768530 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7a0acb7-8724-4f04-ad48-841c36c8f5f3-utilities\") pod \"redhat-marketplace-wnxzg\" (UID: 
\"d7a0acb7-8724-4f04-ad48-841c36c8f5f3\") " pod="openshift-marketplace/redhat-marketplace-wnxzg" Dec 03 13:30:59 crc kubenswrapper[4986]: I1203 13:30:59.792358 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhjj4\" (UniqueName: \"kubernetes.io/projected/d7a0acb7-8724-4f04-ad48-841c36c8f5f3-kube-api-access-dhjj4\") pod \"redhat-marketplace-wnxzg\" (UID: \"d7a0acb7-8724-4f04-ad48-841c36c8f5f3\") " pod="openshift-marketplace/redhat-marketplace-wnxzg" Dec 03 13:30:59 crc kubenswrapper[4986]: I1203 13:30:59.826339 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wnxzg" Dec 03 13:31:00 crc kubenswrapper[4986]: I1203 13:31:00.328158 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wnxzg"] Dec 03 13:31:00 crc kubenswrapper[4986]: I1203 13:31:00.734156 4986 generic.go:334] "Generic (PLEG): container finished" podID="d7a0acb7-8724-4f04-ad48-841c36c8f5f3" containerID="33febff1ad6c0e5d414f2e76e0373e126f4c91564d51f86995fabf559cf7afb2" exitCode=0 Dec 03 13:31:00 crc kubenswrapper[4986]: I1203 13:31:00.734474 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wnxzg" event={"ID":"d7a0acb7-8724-4f04-ad48-841c36c8f5f3","Type":"ContainerDied","Data":"33febff1ad6c0e5d414f2e76e0373e126f4c91564d51f86995fabf559cf7afb2"} Dec 03 13:31:00 crc kubenswrapper[4986]: I1203 13:31:00.734510 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wnxzg" event={"ID":"d7a0acb7-8724-4f04-ad48-841c36c8f5f3","Type":"ContainerStarted","Data":"ad8fcff04ae805b03da8b6d2c5b105bf75071937ed22d81a188537513f92ee2e"} Dec 03 13:31:01 crc kubenswrapper[4986]: I1203 13:31:01.744265 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wnxzg" 
event={"ID":"d7a0acb7-8724-4f04-ad48-841c36c8f5f3","Type":"ContainerStarted","Data":"2d35ecec218b9f8c9bcaa50c324c8242c8923864816ce4c4abc025aef8e9dbd6"} Dec 03 13:31:02 crc kubenswrapper[4986]: I1203 13:31:02.762273 4986 generic.go:334] "Generic (PLEG): container finished" podID="d7a0acb7-8724-4f04-ad48-841c36c8f5f3" containerID="2d35ecec218b9f8c9bcaa50c324c8242c8923864816ce4c4abc025aef8e9dbd6" exitCode=0 Dec 03 13:31:02 crc kubenswrapper[4986]: I1203 13:31:02.762382 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wnxzg" event={"ID":"d7a0acb7-8724-4f04-ad48-841c36c8f5f3","Type":"ContainerDied","Data":"2d35ecec218b9f8c9bcaa50c324c8242c8923864816ce4c4abc025aef8e9dbd6"} Dec 03 13:31:03 crc kubenswrapper[4986]: I1203 13:31:03.774894 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wnxzg" event={"ID":"d7a0acb7-8724-4f04-ad48-841c36c8f5f3","Type":"ContainerStarted","Data":"9945f4897063230721632419dd343503c589662133517a514616a8fa6c0925b3"} Dec 03 13:31:03 crc kubenswrapper[4986]: I1203 13:31:03.795446 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wnxzg" podStartSLOduration=2.197434976 podStartE2EDuration="4.795429191s" podCreationTimestamp="2025-12-03 13:30:59 +0000 UTC" firstStartedPulling="2025-12-03 13:31:00.740589662 +0000 UTC m=+2120.207020863" lastFinishedPulling="2025-12-03 13:31:03.338583887 +0000 UTC m=+2122.805015078" observedRunningTime="2025-12-03 13:31:03.794669751 +0000 UTC m=+2123.261100942" watchObservedRunningTime="2025-12-03 13:31:03.795429191 +0000 UTC m=+2123.261860392" Dec 03 13:31:09 crc kubenswrapper[4986]: I1203 13:31:09.826773 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wnxzg" Dec 03 13:31:09 crc kubenswrapper[4986]: I1203 13:31:09.827336 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wnxzg" Dec 03 13:31:09 crc kubenswrapper[4986]: I1203 13:31:09.895213 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wnxzg" Dec 03 13:31:10 crc kubenswrapper[4986]: I1203 13:31:10.879450 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wnxzg" Dec 03 13:31:10 crc kubenswrapper[4986]: I1203 13:31:10.928833 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wnxzg"] Dec 03 13:31:12 crc kubenswrapper[4986]: I1203 13:31:12.878962 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wnxzg" podUID="d7a0acb7-8724-4f04-ad48-841c36c8f5f3" containerName="registry-server" containerID="cri-o://9945f4897063230721632419dd343503c589662133517a514616a8fa6c0925b3" gracePeriod=2 Dec 03 13:31:13 crc kubenswrapper[4986]: I1203 13:31:13.339900 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wnxzg" Dec 03 13:31:13 crc kubenswrapper[4986]: I1203 13:31:13.432066 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7a0acb7-8724-4f04-ad48-841c36c8f5f3-utilities\") pod \"d7a0acb7-8724-4f04-ad48-841c36c8f5f3\" (UID: \"d7a0acb7-8724-4f04-ad48-841c36c8f5f3\") " Dec 03 13:31:13 crc kubenswrapper[4986]: I1203 13:31:13.432169 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7a0acb7-8724-4f04-ad48-841c36c8f5f3-catalog-content\") pod \"d7a0acb7-8724-4f04-ad48-841c36c8f5f3\" (UID: \"d7a0acb7-8724-4f04-ad48-841c36c8f5f3\") " Dec 03 13:31:13 crc kubenswrapper[4986]: I1203 13:31:13.432398 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhjj4\" (UniqueName: \"kubernetes.io/projected/d7a0acb7-8724-4f04-ad48-841c36c8f5f3-kube-api-access-dhjj4\") pod \"d7a0acb7-8724-4f04-ad48-841c36c8f5f3\" (UID: \"d7a0acb7-8724-4f04-ad48-841c36c8f5f3\") " Dec 03 13:31:13 crc kubenswrapper[4986]: I1203 13:31:13.433189 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7a0acb7-8724-4f04-ad48-841c36c8f5f3-utilities" (OuterVolumeSpecName: "utilities") pod "d7a0acb7-8724-4f04-ad48-841c36c8f5f3" (UID: "d7a0acb7-8724-4f04-ad48-841c36c8f5f3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:31:13 crc kubenswrapper[4986]: I1203 13:31:13.438831 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7a0acb7-8724-4f04-ad48-841c36c8f5f3-kube-api-access-dhjj4" (OuterVolumeSpecName: "kube-api-access-dhjj4") pod "d7a0acb7-8724-4f04-ad48-841c36c8f5f3" (UID: "d7a0acb7-8724-4f04-ad48-841c36c8f5f3"). InnerVolumeSpecName "kube-api-access-dhjj4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:31:13 crc kubenswrapper[4986]: I1203 13:31:13.457417 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7a0acb7-8724-4f04-ad48-841c36c8f5f3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d7a0acb7-8724-4f04-ad48-841c36c8f5f3" (UID: "d7a0acb7-8724-4f04-ad48-841c36c8f5f3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:31:13 crc kubenswrapper[4986]: I1203 13:31:13.534868 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhjj4\" (UniqueName: \"kubernetes.io/projected/d7a0acb7-8724-4f04-ad48-841c36c8f5f3-kube-api-access-dhjj4\") on node \"crc\" DevicePath \"\"" Dec 03 13:31:13 crc kubenswrapper[4986]: I1203 13:31:13.534918 4986 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7a0acb7-8724-4f04-ad48-841c36c8f5f3-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 13:31:13 crc kubenswrapper[4986]: I1203 13:31:13.534936 4986 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7a0acb7-8724-4f04-ad48-841c36c8f5f3-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 13:31:13 crc kubenswrapper[4986]: I1203 13:31:13.896510 4986 generic.go:334] "Generic (PLEG): container finished" podID="d7a0acb7-8724-4f04-ad48-841c36c8f5f3" containerID="9945f4897063230721632419dd343503c589662133517a514616a8fa6c0925b3" exitCode=0 Dec 03 13:31:13 crc kubenswrapper[4986]: I1203 13:31:13.896571 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wnxzg" Dec 03 13:31:13 crc kubenswrapper[4986]: I1203 13:31:13.896588 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wnxzg" event={"ID":"d7a0acb7-8724-4f04-ad48-841c36c8f5f3","Type":"ContainerDied","Data":"9945f4897063230721632419dd343503c589662133517a514616a8fa6c0925b3"} Dec 03 13:31:13 crc kubenswrapper[4986]: I1203 13:31:13.896649 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wnxzg" event={"ID":"d7a0acb7-8724-4f04-ad48-841c36c8f5f3","Type":"ContainerDied","Data":"ad8fcff04ae805b03da8b6d2c5b105bf75071937ed22d81a188537513f92ee2e"} Dec 03 13:31:13 crc kubenswrapper[4986]: I1203 13:31:13.896685 4986 scope.go:117] "RemoveContainer" containerID="9945f4897063230721632419dd343503c589662133517a514616a8fa6c0925b3" Dec 03 13:31:13 crc kubenswrapper[4986]: I1203 13:31:13.933975 4986 scope.go:117] "RemoveContainer" containerID="2d35ecec218b9f8c9bcaa50c324c8242c8923864816ce4c4abc025aef8e9dbd6" Dec 03 13:31:13 crc kubenswrapper[4986]: I1203 13:31:13.939740 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wnxzg"] Dec 03 13:31:13 crc kubenswrapper[4986]: I1203 13:31:13.950837 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wnxzg"] Dec 03 13:31:13 crc kubenswrapper[4986]: I1203 13:31:13.954192 4986 scope.go:117] "RemoveContainer" containerID="33febff1ad6c0e5d414f2e76e0373e126f4c91564d51f86995fabf559cf7afb2" Dec 03 13:31:14 crc kubenswrapper[4986]: I1203 13:31:14.010100 4986 scope.go:117] "RemoveContainer" containerID="9945f4897063230721632419dd343503c589662133517a514616a8fa6c0925b3" Dec 03 13:31:14 crc kubenswrapper[4986]: E1203 13:31:14.010815 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"9945f4897063230721632419dd343503c589662133517a514616a8fa6c0925b3\": container with ID starting with 9945f4897063230721632419dd343503c589662133517a514616a8fa6c0925b3 not found: ID does not exist" containerID="9945f4897063230721632419dd343503c589662133517a514616a8fa6c0925b3" Dec 03 13:31:14 crc kubenswrapper[4986]: I1203 13:31:14.010853 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9945f4897063230721632419dd343503c589662133517a514616a8fa6c0925b3"} err="failed to get container status \"9945f4897063230721632419dd343503c589662133517a514616a8fa6c0925b3\": rpc error: code = NotFound desc = could not find container \"9945f4897063230721632419dd343503c589662133517a514616a8fa6c0925b3\": container with ID starting with 9945f4897063230721632419dd343503c589662133517a514616a8fa6c0925b3 not found: ID does not exist" Dec 03 13:31:14 crc kubenswrapper[4986]: I1203 13:31:14.010878 4986 scope.go:117] "RemoveContainer" containerID="2d35ecec218b9f8c9bcaa50c324c8242c8923864816ce4c4abc025aef8e9dbd6" Dec 03 13:31:14 crc kubenswrapper[4986]: E1203 13:31:14.011377 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d35ecec218b9f8c9bcaa50c324c8242c8923864816ce4c4abc025aef8e9dbd6\": container with ID starting with 2d35ecec218b9f8c9bcaa50c324c8242c8923864816ce4c4abc025aef8e9dbd6 not found: ID does not exist" containerID="2d35ecec218b9f8c9bcaa50c324c8242c8923864816ce4c4abc025aef8e9dbd6" Dec 03 13:31:14 crc kubenswrapper[4986]: I1203 13:31:14.011403 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d35ecec218b9f8c9bcaa50c324c8242c8923864816ce4c4abc025aef8e9dbd6"} err="failed to get container status \"2d35ecec218b9f8c9bcaa50c324c8242c8923864816ce4c4abc025aef8e9dbd6\": rpc error: code = NotFound desc = could not find container \"2d35ecec218b9f8c9bcaa50c324c8242c8923864816ce4c4abc025aef8e9dbd6\": container with ID 
starting with 2d35ecec218b9f8c9bcaa50c324c8242c8923864816ce4c4abc025aef8e9dbd6 not found: ID does not exist" Dec 03 13:31:14 crc kubenswrapper[4986]: I1203 13:31:14.011418 4986 scope.go:117] "RemoveContainer" containerID="33febff1ad6c0e5d414f2e76e0373e126f4c91564d51f86995fabf559cf7afb2" Dec 03 13:31:14 crc kubenswrapper[4986]: E1203 13:31:14.012113 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33febff1ad6c0e5d414f2e76e0373e126f4c91564d51f86995fabf559cf7afb2\": container with ID starting with 33febff1ad6c0e5d414f2e76e0373e126f4c91564d51f86995fabf559cf7afb2 not found: ID does not exist" containerID="33febff1ad6c0e5d414f2e76e0373e126f4c91564d51f86995fabf559cf7afb2" Dec 03 13:31:14 crc kubenswrapper[4986]: I1203 13:31:14.012180 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33febff1ad6c0e5d414f2e76e0373e126f4c91564d51f86995fabf559cf7afb2"} err="failed to get container status \"33febff1ad6c0e5d414f2e76e0373e126f4c91564d51f86995fabf559cf7afb2\": rpc error: code = NotFound desc = could not find container \"33febff1ad6c0e5d414f2e76e0373e126f4c91564d51f86995fabf559cf7afb2\": container with ID starting with 33febff1ad6c0e5d414f2e76e0373e126f4c91564d51f86995fabf559cf7afb2 not found: ID does not exist" Dec 03 13:31:14 crc kubenswrapper[4986]: I1203 13:31:14.953749 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7a0acb7-8724-4f04-ad48-841c36c8f5f3" path="/var/lib/kubelet/pods/d7a0acb7-8724-4f04-ad48-841c36c8f5f3/volumes" Dec 03 13:31:15 crc kubenswrapper[4986]: I1203 13:31:15.041919 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-d48lr"] Dec 03 13:31:15 crc kubenswrapper[4986]: I1203 13:31:15.073526 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-d48lr"] Dec 03 13:31:16 crc kubenswrapper[4986]: I1203 13:31:16.954406 4986 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0a9f7fb-9224-43cb-bfee-86074845c01e" path="/var/lib/kubelet/pods/c0a9f7fb-9224-43cb-bfee-86074845c01e/volumes" Dec 03 13:31:42 crc kubenswrapper[4986]: I1203 13:31:42.179499 4986 generic.go:334] "Generic (PLEG): container finished" podID="1d933f9e-e44c-4d6b-ab8b-0186020b5b28" containerID="95bd64f5a2728ff46c10a75543942be946792cad0c0e0eeced7a0721a2338325" exitCode=0 Dec 03 13:31:42 crc kubenswrapper[4986]: I1203 13:31:42.179577 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-plm9h" event={"ID":"1d933f9e-e44c-4d6b-ab8b-0186020b5b28","Type":"ContainerDied","Data":"95bd64f5a2728ff46c10a75543942be946792cad0c0e0eeced7a0721a2338325"} Dec 03 13:31:43 crc kubenswrapper[4986]: I1203 13:31:43.666623 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-plm9h" Dec 03 13:31:43 crc kubenswrapper[4986]: I1203 13:31:43.828408 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1d933f9e-e44c-4d6b-ab8b-0186020b5b28-ssh-key\") pod \"1d933f9e-e44c-4d6b-ab8b-0186020b5b28\" (UID: \"1d933f9e-e44c-4d6b-ab8b-0186020b5b28\") " Dec 03 13:31:43 crc kubenswrapper[4986]: I1203 13:31:43.828481 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwvvk\" (UniqueName: \"kubernetes.io/projected/1d933f9e-e44c-4d6b-ab8b-0186020b5b28-kube-api-access-nwvvk\") pod \"1d933f9e-e44c-4d6b-ab8b-0186020b5b28\" (UID: \"1d933f9e-e44c-4d6b-ab8b-0186020b5b28\") " Dec 03 13:31:43 crc kubenswrapper[4986]: I1203 13:31:43.828796 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d933f9e-e44c-4d6b-ab8b-0186020b5b28-inventory\") pod \"1d933f9e-e44c-4d6b-ab8b-0186020b5b28\" (UID: 
\"1d933f9e-e44c-4d6b-ab8b-0186020b5b28\") " Dec 03 13:31:43 crc kubenswrapper[4986]: I1203 13:31:43.834940 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d933f9e-e44c-4d6b-ab8b-0186020b5b28-kube-api-access-nwvvk" (OuterVolumeSpecName: "kube-api-access-nwvvk") pod "1d933f9e-e44c-4d6b-ab8b-0186020b5b28" (UID: "1d933f9e-e44c-4d6b-ab8b-0186020b5b28"). InnerVolumeSpecName "kube-api-access-nwvvk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:31:43 crc kubenswrapper[4986]: I1203 13:31:43.862186 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d933f9e-e44c-4d6b-ab8b-0186020b5b28-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1d933f9e-e44c-4d6b-ab8b-0186020b5b28" (UID: "1d933f9e-e44c-4d6b-ab8b-0186020b5b28"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:31:43 crc kubenswrapper[4986]: I1203 13:31:43.862762 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d933f9e-e44c-4d6b-ab8b-0186020b5b28-inventory" (OuterVolumeSpecName: "inventory") pod "1d933f9e-e44c-4d6b-ab8b-0186020b5b28" (UID: "1d933f9e-e44c-4d6b-ab8b-0186020b5b28"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:31:43 crc kubenswrapper[4986]: I1203 13:31:43.931605 4986 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d933f9e-e44c-4d6b-ab8b-0186020b5b28-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 13:31:43 crc kubenswrapper[4986]: I1203 13:31:43.931639 4986 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1d933f9e-e44c-4d6b-ab8b-0186020b5b28-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 13:31:43 crc kubenswrapper[4986]: I1203 13:31:43.931653 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwvvk\" (UniqueName: \"kubernetes.io/projected/1d933f9e-e44c-4d6b-ab8b-0186020b5b28-kube-api-access-nwvvk\") on node \"crc\" DevicePath \"\"" Dec 03 13:31:44 crc kubenswrapper[4986]: I1203 13:31:44.199907 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-plm9h" event={"ID":"1d933f9e-e44c-4d6b-ab8b-0186020b5b28","Type":"ContainerDied","Data":"fd6113ca8eb915bcca9129cebc8c7d299226d795ac53507bc2ad7a1350210762"} Dec 03 13:31:44 crc kubenswrapper[4986]: I1203 13:31:44.199942 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd6113ca8eb915bcca9129cebc8c7d299226d795ac53507bc2ad7a1350210762" Dec 03 13:31:44 crc kubenswrapper[4986]: I1203 13:31:44.199999 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-plm9h" Dec 03 13:31:44 crc kubenswrapper[4986]: I1203 13:31:44.306161 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-5x8bl"] Dec 03 13:31:44 crc kubenswrapper[4986]: E1203 13:31:44.306861 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7a0acb7-8724-4f04-ad48-841c36c8f5f3" containerName="extract-utilities" Dec 03 13:31:44 crc kubenswrapper[4986]: I1203 13:31:44.306955 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7a0acb7-8724-4f04-ad48-841c36c8f5f3" containerName="extract-utilities" Dec 03 13:31:44 crc kubenswrapper[4986]: E1203 13:31:44.307035 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7a0acb7-8724-4f04-ad48-841c36c8f5f3" containerName="registry-server" Dec 03 13:31:44 crc kubenswrapper[4986]: I1203 13:31:44.307099 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7a0acb7-8724-4f04-ad48-841c36c8f5f3" containerName="registry-server" Dec 03 13:31:44 crc kubenswrapper[4986]: E1203 13:31:44.307190 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7a0acb7-8724-4f04-ad48-841c36c8f5f3" containerName="extract-content" Dec 03 13:31:44 crc kubenswrapper[4986]: I1203 13:31:44.307268 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7a0acb7-8724-4f04-ad48-841c36c8f5f3" containerName="extract-content" Dec 03 13:31:44 crc kubenswrapper[4986]: E1203 13:31:44.307369 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d933f9e-e44c-4d6b-ab8b-0186020b5b28" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 03 13:31:44 crc kubenswrapper[4986]: I1203 13:31:44.307459 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d933f9e-e44c-4d6b-ab8b-0186020b5b28" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 03 13:31:44 crc kubenswrapper[4986]: I1203 13:31:44.307756 4986 
memory_manager.go:354] "RemoveStaleState removing state" podUID="1d933f9e-e44c-4d6b-ab8b-0186020b5b28" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 03 13:31:44 crc kubenswrapper[4986]: I1203 13:31:44.307840 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7a0acb7-8724-4f04-ad48-841c36c8f5f3" containerName="registry-server" Dec 03 13:31:44 crc kubenswrapper[4986]: I1203 13:31:44.308688 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-5x8bl" Dec 03 13:31:44 crc kubenswrapper[4986]: I1203 13:31:44.311971 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jr75v" Dec 03 13:31:44 crc kubenswrapper[4986]: I1203 13:31:44.312006 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 13:31:44 crc kubenswrapper[4986]: I1203 13:31:44.312905 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 13:31:44 crc kubenswrapper[4986]: I1203 13:31:44.315806 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-5x8bl"] Dec 03 13:31:44 crc kubenswrapper[4986]: I1203 13:31:44.322596 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 13:31:44 crc kubenswrapper[4986]: I1203 13:31:44.440108 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtnq9\" (UniqueName: \"kubernetes.io/projected/614b1cb6-38ce-43ac-a5f3-abc66d1dd088-kube-api-access-qtnq9\") pod \"ssh-known-hosts-edpm-deployment-5x8bl\" (UID: \"614b1cb6-38ce-43ac-a5f3-abc66d1dd088\") " pod="openstack/ssh-known-hosts-edpm-deployment-5x8bl" Dec 03 13:31:44 crc kubenswrapper[4986]: I1203 13:31:44.440262 4986 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/614b1cb6-38ce-43ac-a5f3-abc66d1dd088-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-5x8bl\" (UID: \"614b1cb6-38ce-43ac-a5f3-abc66d1dd088\") " pod="openstack/ssh-known-hosts-edpm-deployment-5x8bl" Dec 03 13:31:44 crc kubenswrapper[4986]: I1203 13:31:44.440352 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/614b1cb6-38ce-43ac-a5f3-abc66d1dd088-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-5x8bl\" (UID: \"614b1cb6-38ce-43ac-a5f3-abc66d1dd088\") " pod="openstack/ssh-known-hosts-edpm-deployment-5x8bl" Dec 03 13:31:44 crc kubenswrapper[4986]: I1203 13:31:44.542241 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtnq9\" (UniqueName: \"kubernetes.io/projected/614b1cb6-38ce-43ac-a5f3-abc66d1dd088-kube-api-access-qtnq9\") pod \"ssh-known-hosts-edpm-deployment-5x8bl\" (UID: \"614b1cb6-38ce-43ac-a5f3-abc66d1dd088\") " pod="openstack/ssh-known-hosts-edpm-deployment-5x8bl" Dec 03 13:31:44 crc kubenswrapper[4986]: I1203 13:31:44.542741 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/614b1cb6-38ce-43ac-a5f3-abc66d1dd088-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-5x8bl\" (UID: \"614b1cb6-38ce-43ac-a5f3-abc66d1dd088\") " pod="openstack/ssh-known-hosts-edpm-deployment-5x8bl" Dec 03 13:31:44 crc kubenswrapper[4986]: I1203 13:31:44.542903 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/614b1cb6-38ce-43ac-a5f3-abc66d1dd088-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-5x8bl\" (UID: \"614b1cb6-38ce-43ac-a5f3-abc66d1dd088\") " 
pod="openstack/ssh-known-hosts-edpm-deployment-5x8bl" Dec 03 13:31:44 crc kubenswrapper[4986]: I1203 13:31:44.547002 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/614b1cb6-38ce-43ac-a5f3-abc66d1dd088-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-5x8bl\" (UID: \"614b1cb6-38ce-43ac-a5f3-abc66d1dd088\") " pod="openstack/ssh-known-hosts-edpm-deployment-5x8bl" Dec 03 13:31:44 crc kubenswrapper[4986]: I1203 13:31:44.549092 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/614b1cb6-38ce-43ac-a5f3-abc66d1dd088-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-5x8bl\" (UID: \"614b1cb6-38ce-43ac-a5f3-abc66d1dd088\") " pod="openstack/ssh-known-hosts-edpm-deployment-5x8bl" Dec 03 13:31:44 crc kubenswrapper[4986]: I1203 13:31:44.558929 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtnq9\" (UniqueName: \"kubernetes.io/projected/614b1cb6-38ce-43ac-a5f3-abc66d1dd088-kube-api-access-qtnq9\") pod \"ssh-known-hosts-edpm-deployment-5x8bl\" (UID: \"614b1cb6-38ce-43ac-a5f3-abc66d1dd088\") " pod="openstack/ssh-known-hosts-edpm-deployment-5x8bl" Dec 03 13:31:44 crc kubenswrapper[4986]: I1203 13:31:44.627108 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-5x8bl" Dec 03 13:31:45 crc kubenswrapper[4986]: I1203 13:31:45.242922 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-5x8bl"] Dec 03 13:31:45 crc kubenswrapper[4986]: I1203 13:31:45.258791 4986 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 13:31:46 crc kubenswrapper[4986]: I1203 13:31:46.216697 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-5x8bl" event={"ID":"614b1cb6-38ce-43ac-a5f3-abc66d1dd088","Type":"ContainerStarted","Data":"d55d6f1e8de9ae75057e7db13bff6561d1f674bffd571d640d72eb3ffb1e302b"} Dec 03 13:31:47 crc kubenswrapper[4986]: I1203 13:31:47.228273 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-5x8bl" event={"ID":"614b1cb6-38ce-43ac-a5f3-abc66d1dd088","Type":"ContainerStarted","Data":"94b116768170ecb26e27f201f640af6b12ea64e4850833f38c47867331b04c20"} Dec 03 13:31:47 crc kubenswrapper[4986]: I1203 13:31:47.249841 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-5x8bl" podStartSLOduration=1.9167403360000002 podStartE2EDuration="3.249819273s" podCreationTimestamp="2025-12-03 13:31:44 +0000 UTC" firstStartedPulling="2025-12-03 13:31:45.258557379 +0000 UTC m=+2164.724988570" lastFinishedPulling="2025-12-03 13:31:46.591636316 +0000 UTC m=+2166.058067507" observedRunningTime="2025-12-03 13:31:47.241114507 +0000 UTC m=+2166.707545738" watchObservedRunningTime="2025-12-03 13:31:47.249819273 +0000 UTC m=+2166.716250474" Dec 03 13:31:54 crc kubenswrapper[4986]: I1203 13:31:54.295866 4986 generic.go:334] "Generic (PLEG): container finished" podID="614b1cb6-38ce-43ac-a5f3-abc66d1dd088" containerID="94b116768170ecb26e27f201f640af6b12ea64e4850833f38c47867331b04c20" exitCode=0 Dec 03 13:31:54 crc 
kubenswrapper[4986]: I1203 13:31:54.296204 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-5x8bl" event={"ID":"614b1cb6-38ce-43ac-a5f3-abc66d1dd088","Type":"ContainerDied","Data":"94b116768170ecb26e27f201f640af6b12ea64e4850833f38c47867331b04c20"} Dec 03 13:31:55 crc kubenswrapper[4986]: I1203 13:31:55.096958 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qftdt"] Dec 03 13:31:55 crc kubenswrapper[4986]: I1203 13:31:55.100054 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qftdt" Dec 03 13:31:55 crc kubenswrapper[4986]: I1203 13:31:55.110252 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qftdt"] Dec 03 13:31:55 crc kubenswrapper[4986]: I1203 13:31:55.291536 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg8z2\" (UniqueName: \"kubernetes.io/projected/b10bf2da-2a56-4c79-936b-9247d85391c0-kube-api-access-gg8z2\") pod \"community-operators-qftdt\" (UID: \"b10bf2da-2a56-4c79-936b-9247d85391c0\") " pod="openshift-marketplace/community-operators-qftdt" Dec 03 13:31:55 crc kubenswrapper[4986]: I1203 13:31:55.291639 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b10bf2da-2a56-4c79-936b-9247d85391c0-utilities\") pod \"community-operators-qftdt\" (UID: \"b10bf2da-2a56-4c79-936b-9247d85391c0\") " pod="openshift-marketplace/community-operators-qftdt" Dec 03 13:31:55 crc kubenswrapper[4986]: I1203 13:31:55.291871 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b10bf2da-2a56-4c79-936b-9247d85391c0-catalog-content\") pod \"community-operators-qftdt\" (UID: 
\"b10bf2da-2a56-4c79-936b-9247d85391c0\") " pod="openshift-marketplace/community-operators-qftdt" Dec 03 13:31:55 crc kubenswrapper[4986]: I1203 13:31:55.393461 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b10bf2da-2a56-4c79-936b-9247d85391c0-catalog-content\") pod \"community-operators-qftdt\" (UID: \"b10bf2da-2a56-4c79-936b-9247d85391c0\") " pod="openshift-marketplace/community-operators-qftdt" Dec 03 13:31:55 crc kubenswrapper[4986]: I1203 13:31:55.393801 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg8z2\" (UniqueName: \"kubernetes.io/projected/b10bf2da-2a56-4c79-936b-9247d85391c0-kube-api-access-gg8z2\") pod \"community-operators-qftdt\" (UID: \"b10bf2da-2a56-4c79-936b-9247d85391c0\") " pod="openshift-marketplace/community-operators-qftdt" Dec 03 13:31:55 crc kubenswrapper[4986]: I1203 13:31:55.393829 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b10bf2da-2a56-4c79-936b-9247d85391c0-utilities\") pod \"community-operators-qftdt\" (UID: \"b10bf2da-2a56-4c79-936b-9247d85391c0\") " pod="openshift-marketplace/community-operators-qftdt" Dec 03 13:31:55 crc kubenswrapper[4986]: I1203 13:31:55.394015 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b10bf2da-2a56-4c79-936b-9247d85391c0-catalog-content\") pod \"community-operators-qftdt\" (UID: \"b10bf2da-2a56-4c79-936b-9247d85391c0\") " pod="openshift-marketplace/community-operators-qftdt" Dec 03 13:31:55 crc kubenswrapper[4986]: I1203 13:31:55.394094 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b10bf2da-2a56-4c79-936b-9247d85391c0-utilities\") pod \"community-operators-qftdt\" (UID: \"b10bf2da-2a56-4c79-936b-9247d85391c0\") 
" pod="openshift-marketplace/community-operators-qftdt" Dec 03 13:31:55 crc kubenswrapper[4986]: I1203 13:31:55.429002 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg8z2\" (UniqueName: \"kubernetes.io/projected/b10bf2da-2a56-4c79-936b-9247d85391c0-kube-api-access-gg8z2\") pod \"community-operators-qftdt\" (UID: \"b10bf2da-2a56-4c79-936b-9247d85391c0\") " pod="openshift-marketplace/community-operators-qftdt" Dec 03 13:31:55 crc kubenswrapper[4986]: I1203 13:31:55.719389 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qftdt" Dec 03 13:31:55 crc kubenswrapper[4986]: I1203 13:31:55.828230 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-5x8bl" Dec 03 13:31:55 crc kubenswrapper[4986]: I1203 13:31:55.906483 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/614b1cb6-38ce-43ac-a5f3-abc66d1dd088-inventory-0\") pod \"614b1cb6-38ce-43ac-a5f3-abc66d1dd088\" (UID: \"614b1cb6-38ce-43ac-a5f3-abc66d1dd088\") " Dec 03 13:31:55 crc kubenswrapper[4986]: I1203 13:31:55.906828 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/614b1cb6-38ce-43ac-a5f3-abc66d1dd088-ssh-key-openstack-edpm-ipam\") pod \"614b1cb6-38ce-43ac-a5f3-abc66d1dd088\" (UID: \"614b1cb6-38ce-43ac-a5f3-abc66d1dd088\") " Dec 03 13:31:55 crc kubenswrapper[4986]: I1203 13:31:55.906996 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtnq9\" (UniqueName: \"kubernetes.io/projected/614b1cb6-38ce-43ac-a5f3-abc66d1dd088-kube-api-access-qtnq9\") pod \"614b1cb6-38ce-43ac-a5f3-abc66d1dd088\" (UID: \"614b1cb6-38ce-43ac-a5f3-abc66d1dd088\") " Dec 03 13:31:55 crc kubenswrapper[4986]: I1203 
13:31:55.912773 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/614b1cb6-38ce-43ac-a5f3-abc66d1dd088-kube-api-access-qtnq9" (OuterVolumeSpecName: "kube-api-access-qtnq9") pod "614b1cb6-38ce-43ac-a5f3-abc66d1dd088" (UID: "614b1cb6-38ce-43ac-a5f3-abc66d1dd088"). InnerVolumeSpecName "kube-api-access-qtnq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:31:55 crc kubenswrapper[4986]: I1203 13:31:55.935423 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/614b1cb6-38ce-43ac-a5f3-abc66d1dd088-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "614b1cb6-38ce-43ac-a5f3-abc66d1dd088" (UID: "614b1cb6-38ce-43ac-a5f3-abc66d1dd088"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:31:55 crc kubenswrapper[4986]: I1203 13:31:55.952503 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/614b1cb6-38ce-43ac-a5f3-abc66d1dd088-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "614b1cb6-38ce-43ac-a5f3-abc66d1dd088" (UID: "614b1cb6-38ce-43ac-a5f3-abc66d1dd088"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:31:56 crc kubenswrapper[4986]: I1203 13:31:56.009728 4986 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/614b1cb6-38ce-43ac-a5f3-abc66d1dd088-inventory-0\") on node \"crc\" DevicePath \"\"" Dec 03 13:31:56 crc kubenswrapper[4986]: I1203 13:31:56.009776 4986 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/614b1cb6-38ce-43ac-a5f3-abc66d1dd088-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 03 13:31:56 crc kubenswrapper[4986]: I1203 13:31:56.009791 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtnq9\" (UniqueName: \"kubernetes.io/projected/614b1cb6-38ce-43ac-a5f3-abc66d1dd088-kube-api-access-qtnq9\") on node \"crc\" DevicePath \"\"" Dec 03 13:31:56 crc kubenswrapper[4986]: I1203 13:31:56.186010 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qftdt"] Dec 03 13:31:56 crc kubenswrapper[4986]: I1203 13:31:56.313251 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-5x8bl" event={"ID":"614b1cb6-38ce-43ac-a5f3-abc66d1dd088","Type":"ContainerDied","Data":"d55d6f1e8de9ae75057e7db13bff6561d1f674bffd571d640d72eb3ffb1e302b"} Dec 03 13:31:56 crc kubenswrapper[4986]: I1203 13:31:56.313639 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d55d6f1e8de9ae75057e7db13bff6561d1f674bffd571d640d72eb3ffb1e302b" Dec 03 13:31:56 crc kubenswrapper[4986]: I1203 13:31:56.313333 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-5x8bl" Dec 03 13:31:56 crc kubenswrapper[4986]: I1203 13:31:56.314355 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qftdt" event={"ID":"b10bf2da-2a56-4c79-936b-9247d85391c0","Type":"ContainerStarted","Data":"f8aa6cbdf7f95d2e79aa11f3cbbfff28f9dafc7ad60bec8c074682395dba5b1b"} Dec 03 13:31:56 crc kubenswrapper[4986]: I1203 13:31:56.500146 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-n9vwn"] Dec 03 13:31:56 crc kubenswrapper[4986]: E1203 13:31:56.500683 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="614b1cb6-38ce-43ac-a5f3-abc66d1dd088" containerName="ssh-known-hosts-edpm-deployment" Dec 03 13:31:56 crc kubenswrapper[4986]: I1203 13:31:56.500701 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="614b1cb6-38ce-43ac-a5f3-abc66d1dd088" containerName="ssh-known-hosts-edpm-deployment" Dec 03 13:31:56 crc kubenswrapper[4986]: I1203 13:31:56.505218 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="614b1cb6-38ce-43ac-a5f3-abc66d1dd088" containerName="ssh-known-hosts-edpm-deployment" Dec 03 13:31:56 crc kubenswrapper[4986]: I1203 13:31:56.505931 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n9vwn" Dec 03 13:31:56 crc kubenswrapper[4986]: I1203 13:31:56.509602 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 13:31:56 crc kubenswrapper[4986]: I1203 13:31:56.509834 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 13:31:56 crc kubenswrapper[4986]: I1203 13:31:56.510017 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 13:31:56 crc kubenswrapper[4986]: I1203 13:31:56.510156 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jr75v" Dec 03 13:31:56 crc kubenswrapper[4986]: I1203 13:31:56.539817 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8c21eccd-73c5-4d10-9bfe-ff9530e7627b-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-n9vwn\" (UID: \"8c21eccd-73c5-4d10-9bfe-ff9530e7627b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n9vwn" Dec 03 13:31:56 crc kubenswrapper[4986]: I1203 13:31:56.539913 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62h9q\" (UniqueName: \"kubernetes.io/projected/8c21eccd-73c5-4d10-9bfe-ff9530e7627b-kube-api-access-62h9q\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-n9vwn\" (UID: \"8c21eccd-73c5-4d10-9bfe-ff9530e7627b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n9vwn" Dec 03 13:31:56 crc kubenswrapper[4986]: I1203 13:31:56.539964 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c21eccd-73c5-4d10-9bfe-ff9530e7627b-inventory\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-n9vwn\" (UID: \"8c21eccd-73c5-4d10-9bfe-ff9530e7627b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n9vwn" Dec 03 13:31:56 crc kubenswrapper[4986]: I1203 13:31:56.551497 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-n9vwn"] Dec 03 13:31:56 crc kubenswrapper[4986]: I1203 13:31:56.641744 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c21eccd-73c5-4d10-9bfe-ff9530e7627b-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-n9vwn\" (UID: \"8c21eccd-73c5-4d10-9bfe-ff9530e7627b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n9vwn" Dec 03 13:31:56 crc kubenswrapper[4986]: I1203 13:31:56.641878 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8c21eccd-73c5-4d10-9bfe-ff9530e7627b-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-n9vwn\" (UID: \"8c21eccd-73c5-4d10-9bfe-ff9530e7627b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n9vwn" Dec 03 13:31:56 crc kubenswrapper[4986]: I1203 13:31:56.641935 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62h9q\" (UniqueName: \"kubernetes.io/projected/8c21eccd-73c5-4d10-9bfe-ff9530e7627b-kube-api-access-62h9q\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-n9vwn\" (UID: \"8c21eccd-73c5-4d10-9bfe-ff9530e7627b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n9vwn" Dec 03 13:31:56 crc kubenswrapper[4986]: I1203 13:31:56.647536 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8c21eccd-73c5-4d10-9bfe-ff9530e7627b-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-n9vwn\" (UID: \"8c21eccd-73c5-4d10-9bfe-ff9530e7627b\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n9vwn" Dec 03 13:31:56 crc kubenswrapper[4986]: I1203 13:31:56.647734 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c21eccd-73c5-4d10-9bfe-ff9530e7627b-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-n9vwn\" (UID: \"8c21eccd-73c5-4d10-9bfe-ff9530e7627b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n9vwn" Dec 03 13:31:56 crc kubenswrapper[4986]: I1203 13:31:56.657104 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62h9q\" (UniqueName: \"kubernetes.io/projected/8c21eccd-73c5-4d10-9bfe-ff9530e7627b-kube-api-access-62h9q\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-n9vwn\" (UID: \"8c21eccd-73c5-4d10-9bfe-ff9530e7627b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n9vwn" Dec 03 13:31:56 crc kubenswrapper[4986]: I1203 13:31:56.728167 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n9vwn" Dec 03 13:31:57 crc kubenswrapper[4986]: I1203 13:31:57.213695 4986 scope.go:117] "RemoveContainer" containerID="3e7f0d3cda8cd5ce16a068b6deedc145ea1343ab88db1faae1d4c55643dc2d08" Dec 03 13:31:57 crc kubenswrapper[4986]: I1203 13:31:57.305683 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-n9vwn"] Dec 03 13:31:57 crc kubenswrapper[4986]: W1203 13:31:57.320257 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c21eccd_73c5_4d10_9bfe_ff9530e7627b.slice/crio-725f6ccca26f95b57416e012cd1f1a818a2dc7db46f52696225ec7717e842418 WatchSource:0}: Error finding container 725f6ccca26f95b57416e012cd1f1a818a2dc7db46f52696225ec7717e842418: Status 404 returned error can't find the container with id 725f6ccca26f95b57416e012cd1f1a818a2dc7db46f52696225ec7717e842418 Dec 03 13:31:57 crc kubenswrapper[4986]: I1203 13:31:57.327129 4986 generic.go:334] "Generic (PLEG): container finished" podID="b10bf2da-2a56-4c79-936b-9247d85391c0" containerID="dacbe82e77672bc139d7f4e87a00907069fa92fa8558d8b69333a10b2e2ba875" exitCode=0 Dec 03 13:31:57 crc kubenswrapper[4986]: I1203 13:31:57.327215 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qftdt" event={"ID":"b10bf2da-2a56-4c79-936b-9247d85391c0","Type":"ContainerDied","Data":"dacbe82e77672bc139d7f4e87a00907069fa92fa8558d8b69333a10b2e2ba875"} Dec 03 13:31:58 crc kubenswrapper[4986]: I1203 13:31:58.338234 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n9vwn" event={"ID":"8c21eccd-73c5-4d10-9bfe-ff9530e7627b","Type":"ContainerStarted","Data":"725f6ccca26f95b57416e012cd1f1a818a2dc7db46f52696225ec7717e842418"} Dec 03 13:31:59 crc kubenswrapper[4986]: I1203 13:31:59.350157 4986 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n9vwn" event={"ID":"8c21eccd-73c5-4d10-9bfe-ff9530e7627b","Type":"ContainerStarted","Data":"0bf001dee4f62dece945fdd382177e802acda7fd7cabbeec19b40266c00de7f9"} Dec 03 13:31:59 crc kubenswrapper[4986]: I1203 13:31:59.352909 4986 generic.go:334] "Generic (PLEG): container finished" podID="b10bf2da-2a56-4c79-936b-9247d85391c0" containerID="de6264b44327b4b247aa09f84035134f42671596cdbbb1d7f7ee31e65cef67c3" exitCode=0 Dec 03 13:31:59 crc kubenswrapper[4986]: I1203 13:31:59.352970 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qftdt" event={"ID":"b10bf2da-2a56-4c79-936b-9247d85391c0","Type":"ContainerDied","Data":"de6264b44327b4b247aa09f84035134f42671596cdbbb1d7f7ee31e65cef67c3"} Dec 03 13:31:59 crc kubenswrapper[4986]: I1203 13:31:59.385084 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n9vwn" podStartSLOduration=2.462266434 podStartE2EDuration="3.3850634s" podCreationTimestamp="2025-12-03 13:31:56 +0000 UTC" firstStartedPulling="2025-12-03 13:31:57.32319747 +0000 UTC m=+2176.789628671" lastFinishedPulling="2025-12-03 13:31:58.245994406 +0000 UTC m=+2177.712425637" observedRunningTime="2025-12-03 13:31:59.377400582 +0000 UTC m=+2178.843831813" watchObservedRunningTime="2025-12-03 13:31:59.3850634 +0000 UTC m=+2178.851494601" Dec 03 13:32:00 crc kubenswrapper[4986]: I1203 13:32:00.365452 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qftdt" event={"ID":"b10bf2da-2a56-4c79-936b-9247d85391c0","Type":"ContainerStarted","Data":"2d1e5a3ca0ea343ccf25d329620f3a7f649ed7f98615a5a620d9b4f7205e7dce"} Dec 03 13:32:05 crc kubenswrapper[4986]: I1203 13:32:05.719818 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qftdt" Dec 03 13:32:05 crc 
kubenswrapper[4986]: I1203 13:32:05.720443 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qftdt" Dec 03 13:32:05 crc kubenswrapper[4986]: I1203 13:32:05.781969 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qftdt" Dec 03 13:32:05 crc kubenswrapper[4986]: I1203 13:32:05.808228 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qftdt" podStartSLOduration=8.316863771 podStartE2EDuration="10.808207023s" podCreationTimestamp="2025-12-03 13:31:55 +0000 UTC" firstStartedPulling="2025-12-03 13:31:57.328640788 +0000 UTC m=+2176.795072019" lastFinishedPulling="2025-12-03 13:31:59.81998408 +0000 UTC m=+2179.286415271" observedRunningTime="2025-12-03 13:32:00.396792579 +0000 UTC m=+2179.863223790" watchObservedRunningTime="2025-12-03 13:32:05.808207023 +0000 UTC m=+2185.274638224" Dec 03 13:32:06 crc kubenswrapper[4986]: I1203 13:32:06.454903 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qftdt" Dec 03 13:32:06 crc kubenswrapper[4986]: I1203 13:32:06.507906 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qftdt"] Dec 03 13:32:07 crc kubenswrapper[4986]: I1203 13:32:07.419017 4986 generic.go:334] "Generic (PLEG): container finished" podID="8c21eccd-73c5-4d10-9bfe-ff9530e7627b" containerID="0bf001dee4f62dece945fdd382177e802acda7fd7cabbeec19b40266c00de7f9" exitCode=0 Dec 03 13:32:07 crc kubenswrapper[4986]: I1203 13:32:07.419096 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n9vwn" event={"ID":"8c21eccd-73c5-4d10-9bfe-ff9530e7627b","Type":"ContainerDied","Data":"0bf001dee4f62dece945fdd382177e802acda7fd7cabbeec19b40266c00de7f9"} Dec 03 13:32:08 crc kubenswrapper[4986]: I1203 
13:32:08.428156 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qftdt" podUID="b10bf2da-2a56-4c79-936b-9247d85391c0" containerName="registry-server" containerID="cri-o://2d1e5a3ca0ea343ccf25d329620f3a7f649ed7f98615a5a620d9b4f7205e7dce" gracePeriod=2 Dec 03 13:32:08 crc kubenswrapper[4986]: I1203 13:32:08.975713 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n9vwn" Dec 03 13:32:08 crc kubenswrapper[4986]: I1203 13:32:08.990507 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qftdt" Dec 03 13:32:09 crc kubenswrapper[4986]: I1203 13:32:09.179676 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c21eccd-73c5-4d10-9bfe-ff9530e7627b-inventory\") pod \"8c21eccd-73c5-4d10-9bfe-ff9530e7627b\" (UID: \"8c21eccd-73c5-4d10-9bfe-ff9530e7627b\") " Dec 03 13:32:09 crc kubenswrapper[4986]: I1203 13:32:09.179760 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b10bf2da-2a56-4c79-936b-9247d85391c0-utilities\") pod \"b10bf2da-2a56-4c79-936b-9247d85391c0\" (UID: \"b10bf2da-2a56-4c79-936b-9247d85391c0\") " Dec 03 13:32:09 crc kubenswrapper[4986]: I1203 13:32:09.179790 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gg8z2\" (UniqueName: \"kubernetes.io/projected/b10bf2da-2a56-4c79-936b-9247d85391c0-kube-api-access-gg8z2\") pod \"b10bf2da-2a56-4c79-936b-9247d85391c0\" (UID: \"b10bf2da-2a56-4c79-936b-9247d85391c0\") " Dec 03 13:32:09 crc kubenswrapper[4986]: I1203 13:32:09.179946 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/8c21eccd-73c5-4d10-9bfe-ff9530e7627b-ssh-key\") pod \"8c21eccd-73c5-4d10-9bfe-ff9530e7627b\" (UID: \"8c21eccd-73c5-4d10-9bfe-ff9530e7627b\") " Dec 03 13:32:09 crc kubenswrapper[4986]: I1203 13:32:09.180261 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b10bf2da-2a56-4c79-936b-9247d85391c0-catalog-content\") pod \"b10bf2da-2a56-4c79-936b-9247d85391c0\" (UID: \"b10bf2da-2a56-4c79-936b-9247d85391c0\") " Dec 03 13:32:09 crc kubenswrapper[4986]: I1203 13:32:09.180592 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62h9q\" (UniqueName: \"kubernetes.io/projected/8c21eccd-73c5-4d10-9bfe-ff9530e7627b-kube-api-access-62h9q\") pod \"8c21eccd-73c5-4d10-9bfe-ff9530e7627b\" (UID: \"8c21eccd-73c5-4d10-9bfe-ff9530e7627b\") " Dec 03 13:32:09 crc kubenswrapper[4986]: I1203 13:32:09.181893 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b10bf2da-2a56-4c79-936b-9247d85391c0-utilities" (OuterVolumeSpecName: "utilities") pod "b10bf2da-2a56-4c79-936b-9247d85391c0" (UID: "b10bf2da-2a56-4c79-936b-9247d85391c0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:32:09 crc kubenswrapper[4986]: I1203 13:32:09.182270 4986 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b10bf2da-2a56-4c79-936b-9247d85391c0-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 13:32:09 crc kubenswrapper[4986]: I1203 13:32:09.187052 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b10bf2da-2a56-4c79-936b-9247d85391c0-kube-api-access-gg8z2" (OuterVolumeSpecName: "kube-api-access-gg8z2") pod "b10bf2da-2a56-4c79-936b-9247d85391c0" (UID: "b10bf2da-2a56-4c79-936b-9247d85391c0"). InnerVolumeSpecName "kube-api-access-gg8z2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:32:09 crc kubenswrapper[4986]: I1203 13:32:09.187163 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c21eccd-73c5-4d10-9bfe-ff9530e7627b-kube-api-access-62h9q" (OuterVolumeSpecName: "kube-api-access-62h9q") pod "8c21eccd-73c5-4d10-9bfe-ff9530e7627b" (UID: "8c21eccd-73c5-4d10-9bfe-ff9530e7627b"). InnerVolumeSpecName "kube-api-access-62h9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:32:09 crc kubenswrapper[4986]: I1203 13:32:09.213211 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c21eccd-73c5-4d10-9bfe-ff9530e7627b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "8c21eccd-73c5-4d10-9bfe-ff9530e7627b" (UID: "8c21eccd-73c5-4d10-9bfe-ff9530e7627b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:32:09 crc kubenswrapper[4986]: I1203 13:32:09.217070 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c21eccd-73c5-4d10-9bfe-ff9530e7627b-inventory" (OuterVolumeSpecName: "inventory") pod "8c21eccd-73c5-4d10-9bfe-ff9530e7627b" (UID: "8c21eccd-73c5-4d10-9bfe-ff9530e7627b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:32:09 crc kubenswrapper[4986]: I1203 13:32:09.239938 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b10bf2da-2a56-4c79-936b-9247d85391c0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b10bf2da-2a56-4c79-936b-9247d85391c0" (UID: "b10bf2da-2a56-4c79-936b-9247d85391c0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:32:09 crc kubenswrapper[4986]: I1203 13:32:09.285675 4986 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c21eccd-73c5-4d10-9bfe-ff9530e7627b-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 13:32:09 crc kubenswrapper[4986]: I1203 13:32:09.285717 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gg8z2\" (UniqueName: \"kubernetes.io/projected/b10bf2da-2a56-4c79-936b-9247d85391c0-kube-api-access-gg8z2\") on node \"crc\" DevicePath \"\"" Dec 03 13:32:09 crc kubenswrapper[4986]: I1203 13:32:09.285729 4986 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8c21eccd-73c5-4d10-9bfe-ff9530e7627b-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 13:32:09 crc kubenswrapper[4986]: I1203 13:32:09.285741 4986 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b10bf2da-2a56-4c79-936b-9247d85391c0-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 13:32:09 crc kubenswrapper[4986]: I1203 13:32:09.285752 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62h9q\" (UniqueName: \"kubernetes.io/projected/8c21eccd-73c5-4d10-9bfe-ff9530e7627b-kube-api-access-62h9q\") on node \"crc\" DevicePath \"\"" Dec 03 13:32:09 crc kubenswrapper[4986]: I1203 13:32:09.438185 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n9vwn" event={"ID":"8c21eccd-73c5-4d10-9bfe-ff9530e7627b","Type":"ContainerDied","Data":"725f6ccca26f95b57416e012cd1f1a818a2dc7db46f52696225ec7717e842418"} Dec 03 13:32:09 crc kubenswrapper[4986]: I1203 13:32:09.438549 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="725f6ccca26f95b57416e012cd1f1a818a2dc7db46f52696225ec7717e842418" Dec 03 13:32:09 crc kubenswrapper[4986]: I1203 
13:32:09.438325 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n9vwn" Dec 03 13:32:09 crc kubenswrapper[4986]: I1203 13:32:09.440969 4986 generic.go:334] "Generic (PLEG): container finished" podID="b10bf2da-2a56-4c79-936b-9247d85391c0" containerID="2d1e5a3ca0ea343ccf25d329620f3a7f649ed7f98615a5a620d9b4f7205e7dce" exitCode=0 Dec 03 13:32:09 crc kubenswrapper[4986]: I1203 13:32:09.441002 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qftdt" event={"ID":"b10bf2da-2a56-4c79-936b-9247d85391c0","Type":"ContainerDied","Data":"2d1e5a3ca0ea343ccf25d329620f3a7f649ed7f98615a5a620d9b4f7205e7dce"} Dec 03 13:32:09 crc kubenswrapper[4986]: I1203 13:32:09.441025 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qftdt" event={"ID":"b10bf2da-2a56-4c79-936b-9247d85391c0","Type":"ContainerDied","Data":"f8aa6cbdf7f95d2e79aa11f3cbbfff28f9dafc7ad60bec8c074682395dba5b1b"} Dec 03 13:32:09 crc kubenswrapper[4986]: I1203 13:32:09.441043 4986 scope.go:117] "RemoveContainer" containerID="2d1e5a3ca0ea343ccf25d329620f3a7f649ed7f98615a5a620d9b4f7205e7dce" Dec 03 13:32:09 crc kubenswrapper[4986]: I1203 13:32:09.441100 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qftdt" Dec 03 13:32:09 crc kubenswrapper[4986]: I1203 13:32:09.472997 4986 scope.go:117] "RemoveContainer" containerID="de6264b44327b4b247aa09f84035134f42671596cdbbb1d7f7ee31e65cef67c3" Dec 03 13:32:09 crc kubenswrapper[4986]: I1203 13:32:09.509486 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qftdt"] Dec 03 13:32:09 crc kubenswrapper[4986]: I1203 13:32:09.518357 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qftdt"] Dec 03 13:32:09 crc kubenswrapper[4986]: I1203 13:32:09.521707 4986 scope.go:117] "RemoveContainer" containerID="dacbe82e77672bc139d7f4e87a00907069fa92fa8558d8b69333a10b2e2ba875" Dec 03 13:32:09 crc kubenswrapper[4986]: I1203 13:32:09.555542 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qjdgt"] Dec 03 13:32:09 crc kubenswrapper[4986]: E1203 13:32:09.556067 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b10bf2da-2a56-4c79-936b-9247d85391c0" containerName="registry-server" Dec 03 13:32:09 crc kubenswrapper[4986]: I1203 13:32:09.556090 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="b10bf2da-2a56-4c79-936b-9247d85391c0" containerName="registry-server" Dec 03 13:32:09 crc kubenswrapper[4986]: E1203 13:32:09.556112 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b10bf2da-2a56-4c79-936b-9247d85391c0" containerName="extract-utilities" Dec 03 13:32:09 crc kubenswrapper[4986]: I1203 13:32:09.556122 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="b10bf2da-2a56-4c79-936b-9247d85391c0" containerName="extract-utilities" Dec 03 13:32:09 crc kubenswrapper[4986]: E1203 13:32:09.556156 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b10bf2da-2a56-4c79-936b-9247d85391c0" containerName="extract-content" Dec 03 13:32:09 crc 
kubenswrapper[4986]: I1203 13:32:09.556164 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="b10bf2da-2a56-4c79-936b-9247d85391c0" containerName="extract-content" Dec 03 13:32:09 crc kubenswrapper[4986]: E1203 13:32:09.556188 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c21eccd-73c5-4d10-9bfe-ff9530e7627b" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 03 13:32:09 crc kubenswrapper[4986]: I1203 13:32:09.556197 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c21eccd-73c5-4d10-9bfe-ff9530e7627b" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 03 13:32:09 crc kubenswrapper[4986]: I1203 13:32:09.556417 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="b10bf2da-2a56-4c79-936b-9247d85391c0" containerName="registry-server" Dec 03 13:32:09 crc kubenswrapper[4986]: I1203 13:32:09.556445 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c21eccd-73c5-4d10-9bfe-ff9530e7627b" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 03 13:32:09 crc kubenswrapper[4986]: I1203 13:32:09.557263 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qjdgt" Dec 03 13:32:09 crc kubenswrapper[4986]: I1203 13:32:09.562725 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 13:32:09 crc kubenswrapper[4986]: I1203 13:32:09.562912 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 13:32:09 crc kubenswrapper[4986]: I1203 13:32:09.563067 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 13:32:09 crc kubenswrapper[4986]: I1203 13:32:09.566589 4986 scope.go:117] "RemoveContainer" containerID="2d1e5a3ca0ea343ccf25d329620f3a7f649ed7f98615a5a620d9b4f7205e7dce" Dec 03 13:32:09 crc kubenswrapper[4986]: I1203 13:32:09.566867 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jr75v" Dec 03 13:32:09 crc kubenswrapper[4986]: E1203 13:32:09.573187 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d1e5a3ca0ea343ccf25d329620f3a7f649ed7f98615a5a620d9b4f7205e7dce\": container with ID starting with 2d1e5a3ca0ea343ccf25d329620f3a7f649ed7f98615a5a620d9b4f7205e7dce not found: ID does not exist" containerID="2d1e5a3ca0ea343ccf25d329620f3a7f649ed7f98615a5a620d9b4f7205e7dce" Dec 03 13:32:09 crc kubenswrapper[4986]: I1203 13:32:09.573233 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d1e5a3ca0ea343ccf25d329620f3a7f649ed7f98615a5a620d9b4f7205e7dce"} err="failed to get container status \"2d1e5a3ca0ea343ccf25d329620f3a7f649ed7f98615a5a620d9b4f7205e7dce\": rpc error: code = NotFound desc = could not find container \"2d1e5a3ca0ea343ccf25d329620f3a7f649ed7f98615a5a620d9b4f7205e7dce\": container with ID starting with 
2d1e5a3ca0ea343ccf25d329620f3a7f649ed7f98615a5a620d9b4f7205e7dce not found: ID does not exist" Dec 03 13:32:09 crc kubenswrapper[4986]: I1203 13:32:09.573263 4986 scope.go:117] "RemoveContainer" containerID="de6264b44327b4b247aa09f84035134f42671596cdbbb1d7f7ee31e65cef67c3" Dec 03 13:32:09 crc kubenswrapper[4986]: I1203 13:32:09.576305 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qjdgt"] Dec 03 13:32:09 crc kubenswrapper[4986]: E1203 13:32:09.579594 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de6264b44327b4b247aa09f84035134f42671596cdbbb1d7f7ee31e65cef67c3\": container with ID starting with de6264b44327b4b247aa09f84035134f42671596cdbbb1d7f7ee31e65cef67c3 not found: ID does not exist" containerID="de6264b44327b4b247aa09f84035134f42671596cdbbb1d7f7ee31e65cef67c3" Dec 03 13:32:09 crc kubenswrapper[4986]: I1203 13:32:09.579636 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de6264b44327b4b247aa09f84035134f42671596cdbbb1d7f7ee31e65cef67c3"} err="failed to get container status \"de6264b44327b4b247aa09f84035134f42671596cdbbb1d7f7ee31e65cef67c3\": rpc error: code = NotFound desc = could not find container \"de6264b44327b4b247aa09f84035134f42671596cdbbb1d7f7ee31e65cef67c3\": container with ID starting with de6264b44327b4b247aa09f84035134f42671596cdbbb1d7f7ee31e65cef67c3 not found: ID does not exist" Dec 03 13:32:09 crc kubenswrapper[4986]: I1203 13:32:09.579660 4986 scope.go:117] "RemoveContainer" containerID="dacbe82e77672bc139d7f4e87a00907069fa92fa8558d8b69333a10b2e2ba875" Dec 03 13:32:09 crc kubenswrapper[4986]: E1203 13:32:09.580124 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dacbe82e77672bc139d7f4e87a00907069fa92fa8558d8b69333a10b2e2ba875\": container with ID starting with 
dacbe82e77672bc139d7f4e87a00907069fa92fa8558d8b69333a10b2e2ba875 not found: ID does not exist" containerID="dacbe82e77672bc139d7f4e87a00907069fa92fa8558d8b69333a10b2e2ba875" Dec 03 13:32:09 crc kubenswrapper[4986]: I1203 13:32:09.580172 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dacbe82e77672bc139d7f4e87a00907069fa92fa8558d8b69333a10b2e2ba875"} err="failed to get container status \"dacbe82e77672bc139d7f4e87a00907069fa92fa8558d8b69333a10b2e2ba875\": rpc error: code = NotFound desc = could not find container \"dacbe82e77672bc139d7f4e87a00907069fa92fa8558d8b69333a10b2e2ba875\": container with ID starting with dacbe82e77672bc139d7f4e87a00907069fa92fa8558d8b69333a10b2e2ba875 not found: ID does not exist" Dec 03 13:32:09 crc kubenswrapper[4986]: I1203 13:32:09.694747 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/859dd2e9-8a4b-4b51-8718-9d8b5837d098-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qjdgt\" (UID: \"859dd2e9-8a4b-4b51-8718-9d8b5837d098\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qjdgt" Dec 03 13:32:09 crc kubenswrapper[4986]: I1203 13:32:09.694813 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/859dd2e9-8a4b-4b51-8718-9d8b5837d098-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qjdgt\" (UID: \"859dd2e9-8a4b-4b51-8718-9d8b5837d098\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qjdgt" Dec 03 13:32:09 crc kubenswrapper[4986]: I1203 13:32:09.694890 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28qn5\" (UniqueName: \"kubernetes.io/projected/859dd2e9-8a4b-4b51-8718-9d8b5837d098-kube-api-access-28qn5\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qjdgt\" 
(UID: \"859dd2e9-8a4b-4b51-8718-9d8b5837d098\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qjdgt" Dec 03 13:32:09 crc kubenswrapper[4986]: I1203 13:32:09.796230 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/859dd2e9-8a4b-4b51-8718-9d8b5837d098-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qjdgt\" (UID: \"859dd2e9-8a4b-4b51-8718-9d8b5837d098\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qjdgt" Dec 03 13:32:09 crc kubenswrapper[4986]: I1203 13:32:09.796386 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28qn5\" (UniqueName: \"kubernetes.io/projected/859dd2e9-8a4b-4b51-8718-9d8b5837d098-kube-api-access-28qn5\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qjdgt\" (UID: \"859dd2e9-8a4b-4b51-8718-9d8b5837d098\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qjdgt" Dec 03 13:32:09 crc kubenswrapper[4986]: I1203 13:32:09.796524 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/859dd2e9-8a4b-4b51-8718-9d8b5837d098-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qjdgt\" (UID: \"859dd2e9-8a4b-4b51-8718-9d8b5837d098\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qjdgt" Dec 03 13:32:09 crc kubenswrapper[4986]: I1203 13:32:09.801094 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/859dd2e9-8a4b-4b51-8718-9d8b5837d098-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qjdgt\" (UID: \"859dd2e9-8a4b-4b51-8718-9d8b5837d098\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qjdgt" Dec 03 13:32:09 crc kubenswrapper[4986]: I1203 13:32:09.801771 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/859dd2e9-8a4b-4b51-8718-9d8b5837d098-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qjdgt\" (UID: \"859dd2e9-8a4b-4b51-8718-9d8b5837d098\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qjdgt" Dec 03 13:32:09 crc kubenswrapper[4986]: I1203 13:32:09.813752 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28qn5\" (UniqueName: \"kubernetes.io/projected/859dd2e9-8a4b-4b51-8718-9d8b5837d098-kube-api-access-28qn5\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qjdgt\" (UID: \"859dd2e9-8a4b-4b51-8718-9d8b5837d098\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qjdgt" Dec 03 13:32:09 crc kubenswrapper[4986]: I1203 13:32:09.911091 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qjdgt" Dec 03 13:32:10 crc kubenswrapper[4986]: I1203 13:32:10.495839 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qjdgt"] Dec 03 13:32:10 crc kubenswrapper[4986]: I1203 13:32:10.953755 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b10bf2da-2a56-4c79-936b-9247d85391c0" path="/var/lib/kubelet/pods/b10bf2da-2a56-4c79-936b-9247d85391c0/volumes" Dec 03 13:32:11 crc kubenswrapper[4986]: I1203 13:32:11.459174 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qjdgt" event={"ID":"859dd2e9-8a4b-4b51-8718-9d8b5837d098","Type":"ContainerStarted","Data":"e71ca7cff8a01c6e23f2faa98718f01210993e11a50f6006a625d21b2fabca58"} Dec 03 13:32:11 crc kubenswrapper[4986]: I1203 13:32:11.459521 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qjdgt" 
event={"ID":"859dd2e9-8a4b-4b51-8718-9d8b5837d098","Type":"ContainerStarted","Data":"1712d759fd895f0d99d1264da3d25c16ecfbbfeb8bebfcbe4286b0aea802104d"} Dec 03 13:32:11 crc kubenswrapper[4986]: I1203 13:32:11.486177 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qjdgt" podStartSLOduration=2.001908022 podStartE2EDuration="2.48615853s" podCreationTimestamp="2025-12-03 13:32:09 +0000 UTC" firstStartedPulling="2025-12-03 13:32:10.500641752 +0000 UTC m=+2189.967072943" lastFinishedPulling="2025-12-03 13:32:10.98489226 +0000 UTC m=+2190.451323451" observedRunningTime="2025-12-03 13:32:11.478363998 +0000 UTC m=+2190.944795199" watchObservedRunningTime="2025-12-03 13:32:11.48615853 +0000 UTC m=+2190.952589731" Dec 03 13:32:21 crc kubenswrapper[4986]: I1203 13:32:21.601158 4986 generic.go:334] "Generic (PLEG): container finished" podID="859dd2e9-8a4b-4b51-8718-9d8b5837d098" containerID="e71ca7cff8a01c6e23f2faa98718f01210993e11a50f6006a625d21b2fabca58" exitCode=0 Dec 03 13:32:21 crc kubenswrapper[4986]: I1203 13:32:21.601327 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qjdgt" event={"ID":"859dd2e9-8a4b-4b51-8718-9d8b5837d098","Type":"ContainerDied","Data":"e71ca7cff8a01c6e23f2faa98718f01210993e11a50f6006a625d21b2fabca58"} Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.093217 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qjdgt" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.117763 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/859dd2e9-8a4b-4b51-8718-9d8b5837d098-ssh-key\") pod \"859dd2e9-8a4b-4b51-8718-9d8b5837d098\" (UID: \"859dd2e9-8a4b-4b51-8718-9d8b5837d098\") " Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.117889 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/859dd2e9-8a4b-4b51-8718-9d8b5837d098-inventory\") pod \"859dd2e9-8a4b-4b51-8718-9d8b5837d098\" (UID: \"859dd2e9-8a4b-4b51-8718-9d8b5837d098\") " Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.118026 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28qn5\" (UniqueName: \"kubernetes.io/projected/859dd2e9-8a4b-4b51-8718-9d8b5837d098-kube-api-access-28qn5\") pod \"859dd2e9-8a4b-4b51-8718-9d8b5837d098\" (UID: \"859dd2e9-8a4b-4b51-8718-9d8b5837d098\") " Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.124573 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/859dd2e9-8a4b-4b51-8718-9d8b5837d098-kube-api-access-28qn5" (OuterVolumeSpecName: "kube-api-access-28qn5") pod "859dd2e9-8a4b-4b51-8718-9d8b5837d098" (UID: "859dd2e9-8a4b-4b51-8718-9d8b5837d098"). InnerVolumeSpecName "kube-api-access-28qn5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.149541 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/859dd2e9-8a4b-4b51-8718-9d8b5837d098-inventory" (OuterVolumeSpecName: "inventory") pod "859dd2e9-8a4b-4b51-8718-9d8b5837d098" (UID: "859dd2e9-8a4b-4b51-8718-9d8b5837d098"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.161573 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/859dd2e9-8a4b-4b51-8718-9d8b5837d098-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "859dd2e9-8a4b-4b51-8718-9d8b5837d098" (UID: "859dd2e9-8a4b-4b51-8718-9d8b5837d098"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.219948 4986 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/859dd2e9-8a4b-4b51-8718-9d8b5837d098-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.220013 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28qn5\" (UniqueName: \"kubernetes.io/projected/859dd2e9-8a4b-4b51-8718-9d8b5837d098-kube-api-access-28qn5\") on node \"crc\" DevicePath \"\"" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.220034 4986 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/859dd2e9-8a4b-4b51-8718-9d8b5837d098-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.618975 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qjdgt" event={"ID":"859dd2e9-8a4b-4b51-8718-9d8b5837d098","Type":"ContainerDied","Data":"1712d759fd895f0d99d1264da3d25c16ecfbbfeb8bebfcbe4286b0aea802104d"} Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.619274 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1712d759fd895f0d99d1264da3d25c16ecfbbfeb8bebfcbe4286b0aea802104d" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.619014 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qjdgt" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.740274 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l29sb"] Dec 03 13:32:23 crc kubenswrapper[4986]: E1203 13:32:23.740676 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="859dd2e9-8a4b-4b51-8718-9d8b5837d098" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.740692 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="859dd2e9-8a4b-4b51-8718-9d8b5837d098" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.740865 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="859dd2e9-8a4b-4b51-8718-9d8b5837d098" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.741476 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l29sb" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.745018 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.745038 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.745537 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.745588 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.748903 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.748951 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jr75v" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.749331 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.749401 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.767116 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l29sb"] Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.831588 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7e34b649-2740-4d76-9aff-598b66d301b7-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l29sb\" (UID: \"7e34b649-2740-4d76-9aff-598b66d301b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l29sb" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.831643 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e34b649-2740-4d76-9aff-598b66d301b7-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l29sb\" (UID: \"7e34b649-2740-4d76-9aff-598b66d301b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l29sb" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.831664 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7e34b649-2740-4d76-9aff-598b66d301b7-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l29sb\" (UID: \"7e34b649-2740-4d76-9aff-598b66d301b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l29sb" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.831686 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7e34b649-2740-4d76-9aff-598b66d301b7-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l29sb\" (UID: \"7e34b649-2740-4d76-9aff-598b66d301b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l29sb" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.831714 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e34b649-2740-4d76-9aff-598b66d301b7-neutron-metadata-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-l29sb\" (UID: \"7e34b649-2740-4d76-9aff-598b66d301b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l29sb" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.831751 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7e34b649-2740-4d76-9aff-598b66d301b7-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l29sb\" (UID: \"7e34b649-2740-4d76-9aff-598b66d301b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l29sb" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.831775 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e34b649-2740-4d76-9aff-598b66d301b7-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l29sb\" (UID: \"7e34b649-2740-4d76-9aff-598b66d301b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l29sb" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.831812 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7e34b649-2740-4d76-9aff-598b66d301b7-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l29sb\" (UID: \"7e34b649-2740-4d76-9aff-598b66d301b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l29sb" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.831850 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7e34b649-2740-4d76-9aff-598b66d301b7-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l29sb\" (UID: \"7e34b649-2740-4d76-9aff-598b66d301b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l29sb" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.831887 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7e34b649-2740-4d76-9aff-598b66d301b7-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l29sb\" (UID: \"7e34b649-2740-4d76-9aff-598b66d301b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l29sb" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.831919 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7e34b649-2740-4d76-9aff-598b66d301b7-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l29sb\" (UID: \"7e34b649-2740-4d76-9aff-598b66d301b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l29sb" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.831985 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e34b649-2740-4d76-9aff-598b66d301b7-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l29sb\" (UID: \"7e34b649-2740-4d76-9aff-598b66d301b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l29sb" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.832016 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/7e34b649-2740-4d76-9aff-598b66d301b7-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l29sb\" (UID: \"7e34b649-2740-4d76-9aff-598b66d301b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l29sb" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.832065 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgqgt\" (UniqueName: \"kubernetes.io/projected/7e34b649-2740-4d76-9aff-598b66d301b7-kube-api-access-dgqgt\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l29sb\" (UID: \"7e34b649-2740-4d76-9aff-598b66d301b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l29sb" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.933122 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7e34b649-2740-4d76-9aff-598b66d301b7-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l29sb\" (UID: \"7e34b649-2740-4d76-9aff-598b66d301b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l29sb" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.933171 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e34b649-2740-4d76-9aff-598b66d301b7-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l29sb\" (UID: \"7e34b649-2740-4d76-9aff-598b66d301b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l29sb" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.933198 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/7e34b649-2740-4d76-9aff-598b66d301b7-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l29sb\" (UID: \"7e34b649-2740-4d76-9aff-598b66d301b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l29sb" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.933231 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e34b649-2740-4d76-9aff-598b66d301b7-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l29sb\" (UID: \"7e34b649-2740-4d76-9aff-598b66d301b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l29sb" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.933257 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7e34b649-2740-4d76-9aff-598b66d301b7-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l29sb\" (UID: \"7e34b649-2740-4d76-9aff-598b66d301b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l29sb" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.933296 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7e34b649-2740-4d76-9aff-598b66d301b7-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l29sb\" (UID: \"7e34b649-2740-4d76-9aff-598b66d301b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l29sb" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.933345 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7e34b649-2740-4d76-9aff-598b66d301b7-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l29sb\" (UID: \"7e34b649-2740-4d76-9aff-598b66d301b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l29sb" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.933369 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e34b649-2740-4d76-9aff-598b66d301b7-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l29sb\" (UID: \"7e34b649-2740-4d76-9aff-598b66d301b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l29sb" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.933391 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgqgt\" (UniqueName: \"kubernetes.io/projected/7e34b649-2740-4d76-9aff-598b66d301b7-kube-api-access-dgqgt\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l29sb\" (UID: \"7e34b649-2740-4d76-9aff-598b66d301b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l29sb" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.933436 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e34b649-2740-4d76-9aff-598b66d301b7-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l29sb\" (UID: \"7e34b649-2740-4d76-9aff-598b66d301b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l29sb" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.933458 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e34b649-2740-4d76-9aff-598b66d301b7-telemetry-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-l29sb\" (UID: \"7e34b649-2740-4d76-9aff-598b66d301b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l29sb" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.933478 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7e34b649-2740-4d76-9aff-598b66d301b7-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l29sb\" (UID: \"7e34b649-2740-4d76-9aff-598b66d301b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l29sb" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.933495 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7e34b649-2740-4d76-9aff-598b66d301b7-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l29sb\" (UID: \"7e34b649-2740-4d76-9aff-598b66d301b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l29sb" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.933519 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e34b649-2740-4d76-9aff-598b66d301b7-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l29sb\" (UID: \"7e34b649-2740-4d76-9aff-598b66d301b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l29sb" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.938876 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7e34b649-2740-4d76-9aff-598b66d301b7-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l29sb\" (UID: \"7e34b649-2740-4d76-9aff-598b66d301b7\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l29sb" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.938920 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7e34b649-2740-4d76-9aff-598b66d301b7-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l29sb\" (UID: \"7e34b649-2740-4d76-9aff-598b66d301b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l29sb" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.939949 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e34b649-2740-4d76-9aff-598b66d301b7-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l29sb\" (UID: \"7e34b649-2740-4d76-9aff-598b66d301b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l29sb" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.940453 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7e34b649-2740-4d76-9aff-598b66d301b7-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l29sb\" (UID: \"7e34b649-2740-4d76-9aff-598b66d301b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l29sb" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.940527 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7e34b649-2740-4d76-9aff-598b66d301b7-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l29sb\" (UID: \"7e34b649-2740-4d76-9aff-598b66d301b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l29sb" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 
13:32:23.940621 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e34b649-2740-4d76-9aff-598b66d301b7-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l29sb\" (UID: \"7e34b649-2740-4d76-9aff-598b66d301b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l29sb" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.940723 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7e34b649-2740-4d76-9aff-598b66d301b7-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l29sb\" (UID: \"7e34b649-2740-4d76-9aff-598b66d301b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l29sb" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.941813 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e34b649-2740-4d76-9aff-598b66d301b7-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l29sb\" (UID: \"7e34b649-2740-4d76-9aff-598b66d301b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l29sb" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.942099 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7e34b649-2740-4d76-9aff-598b66d301b7-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l29sb\" (UID: \"7e34b649-2740-4d76-9aff-598b66d301b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l29sb" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.942461 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7e34b649-2740-4d76-9aff-598b66d301b7-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l29sb\" (UID: \"7e34b649-2740-4d76-9aff-598b66d301b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l29sb" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.943221 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e34b649-2740-4d76-9aff-598b66d301b7-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l29sb\" (UID: \"7e34b649-2740-4d76-9aff-598b66d301b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l29sb" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.943735 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e34b649-2740-4d76-9aff-598b66d301b7-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l29sb\" (UID: \"7e34b649-2740-4d76-9aff-598b66d301b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l29sb" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.949424 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e34b649-2740-4d76-9aff-598b66d301b7-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l29sb\" (UID: \"7e34b649-2740-4d76-9aff-598b66d301b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l29sb" Dec 03 13:32:23 crc kubenswrapper[4986]: I1203 13:32:23.950173 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgqgt\" (UniqueName: \"kubernetes.io/projected/7e34b649-2740-4d76-9aff-598b66d301b7-kube-api-access-dgqgt\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l29sb\" (UID: 
\"7e34b649-2740-4d76-9aff-598b66d301b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l29sb" Dec 03 13:32:24 crc kubenswrapper[4986]: I1203 13:32:24.056255 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l29sb" Dec 03 13:32:24 crc kubenswrapper[4986]: I1203 13:32:24.609217 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l29sb"] Dec 03 13:32:24 crc kubenswrapper[4986]: I1203 13:32:24.646700 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l29sb" event={"ID":"7e34b649-2740-4d76-9aff-598b66d301b7","Type":"ContainerStarted","Data":"9b961847fd3fbe5fe7d08f86e2f5e49092dfc9b8f9566ee9625dc9c149d3d70d"} Dec 03 13:32:26 crc kubenswrapper[4986]: I1203 13:32:26.667605 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l29sb" event={"ID":"7e34b649-2740-4d76-9aff-598b66d301b7","Type":"ContainerStarted","Data":"c4f8f4d0457d2ec6c442c1392765d4a22d228ade613fafff656976e8a3c4fab9"} Dec 03 13:32:26 crc kubenswrapper[4986]: I1203 13:32:26.698599 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l29sb" podStartSLOduration=2.673773968 podStartE2EDuration="3.698583662s" podCreationTimestamp="2025-12-03 13:32:23 +0000 UTC" firstStartedPulling="2025-12-03 13:32:24.605814684 +0000 UTC m=+2204.072245875" lastFinishedPulling="2025-12-03 13:32:25.630624368 +0000 UTC m=+2205.097055569" observedRunningTime="2025-12-03 13:32:26.697874442 +0000 UTC m=+2206.164305663" watchObservedRunningTime="2025-12-03 13:32:26.698583662 +0000 UTC m=+2206.165014853" Dec 03 13:32:33 crc kubenswrapper[4986]: I1203 13:32:33.490980 4986 patch_prober.go:28] interesting pod/machine-config-daemon-xggpj 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 13:32:33 crc kubenswrapper[4986]: I1203 13:32:33.491642 4986 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 13:33:03 crc kubenswrapper[4986]: I1203 13:33:03.491931 4986 patch_prober.go:28] interesting pod/machine-config-daemon-xggpj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 13:33:03 crc kubenswrapper[4986]: I1203 13:33:03.493055 4986 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 13:33:07 crc kubenswrapper[4986]: I1203 13:33:07.059973 4986 generic.go:334] "Generic (PLEG): container finished" podID="7e34b649-2740-4d76-9aff-598b66d301b7" containerID="c4f8f4d0457d2ec6c442c1392765d4a22d228ade613fafff656976e8a3c4fab9" exitCode=0 Dec 03 13:33:07 crc kubenswrapper[4986]: I1203 13:33:07.060042 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l29sb" event={"ID":"7e34b649-2740-4d76-9aff-598b66d301b7","Type":"ContainerDied","Data":"c4f8f4d0457d2ec6c442c1392765d4a22d228ade613fafff656976e8a3c4fab9"} Dec 03 13:33:08 crc kubenswrapper[4986]: 
I1203 13:33:08.576494 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l29sb" Dec 03 13:33:08 crc kubenswrapper[4986]: I1203 13:33:08.675737 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7e34b649-2740-4d76-9aff-598b66d301b7-inventory\") pod \"7e34b649-2740-4d76-9aff-598b66d301b7\" (UID: \"7e34b649-2740-4d76-9aff-598b66d301b7\") " Dec 03 13:33:08 crc kubenswrapper[4986]: I1203 13:33:08.675800 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e34b649-2740-4d76-9aff-598b66d301b7-nova-combined-ca-bundle\") pod \"7e34b649-2740-4d76-9aff-598b66d301b7\" (UID: \"7e34b649-2740-4d76-9aff-598b66d301b7\") " Dec 03 13:33:08 crc kubenswrapper[4986]: I1203 13:33:08.675870 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7e34b649-2740-4d76-9aff-598b66d301b7-ssh-key\") pod \"7e34b649-2740-4d76-9aff-598b66d301b7\" (UID: \"7e34b649-2740-4d76-9aff-598b66d301b7\") " Dec 03 13:33:08 crc kubenswrapper[4986]: I1203 13:33:08.675908 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7e34b649-2740-4d76-9aff-598b66d301b7-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"7e34b649-2740-4d76-9aff-598b66d301b7\" (UID: \"7e34b649-2740-4d76-9aff-598b66d301b7\") " Dec 03 13:33:08 crc kubenswrapper[4986]: I1203 13:33:08.675954 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e34b649-2740-4d76-9aff-598b66d301b7-libvirt-combined-ca-bundle\") pod \"7e34b649-2740-4d76-9aff-598b66d301b7\" (UID: 
\"7e34b649-2740-4d76-9aff-598b66d301b7\") " Dec 03 13:33:08 crc kubenswrapper[4986]: I1203 13:33:08.675993 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e34b649-2740-4d76-9aff-598b66d301b7-neutron-metadata-combined-ca-bundle\") pod \"7e34b649-2740-4d76-9aff-598b66d301b7\" (UID: \"7e34b649-2740-4d76-9aff-598b66d301b7\") " Dec 03 13:33:08 crc kubenswrapper[4986]: I1203 13:33:08.676017 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e34b649-2740-4d76-9aff-598b66d301b7-bootstrap-combined-ca-bundle\") pod \"7e34b649-2740-4d76-9aff-598b66d301b7\" (UID: \"7e34b649-2740-4d76-9aff-598b66d301b7\") " Dec 03 13:33:08 crc kubenswrapper[4986]: I1203 13:33:08.676050 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7e34b649-2740-4d76-9aff-598b66d301b7-openstack-edpm-ipam-ovn-default-certs-0\") pod \"7e34b649-2740-4d76-9aff-598b66d301b7\" (UID: \"7e34b649-2740-4d76-9aff-598b66d301b7\") " Dec 03 13:33:08 crc kubenswrapper[4986]: I1203 13:33:08.676075 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7e34b649-2740-4d76-9aff-598b66d301b7-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"7e34b649-2740-4d76-9aff-598b66d301b7\" (UID: \"7e34b649-2740-4d76-9aff-598b66d301b7\") " Dec 03 13:33:08 crc kubenswrapper[4986]: I1203 13:33:08.676101 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7e34b649-2740-4d76-9aff-598b66d301b7-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod 
\"7e34b649-2740-4d76-9aff-598b66d301b7\" (UID: \"7e34b649-2740-4d76-9aff-598b66d301b7\") " Dec 03 13:33:08 crc kubenswrapper[4986]: I1203 13:33:08.676128 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e34b649-2740-4d76-9aff-598b66d301b7-telemetry-combined-ca-bundle\") pod \"7e34b649-2740-4d76-9aff-598b66d301b7\" (UID: \"7e34b649-2740-4d76-9aff-598b66d301b7\") " Dec 03 13:33:08 crc kubenswrapper[4986]: I1203 13:33:08.676169 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e34b649-2740-4d76-9aff-598b66d301b7-repo-setup-combined-ca-bundle\") pod \"7e34b649-2740-4d76-9aff-598b66d301b7\" (UID: \"7e34b649-2740-4d76-9aff-598b66d301b7\") " Dec 03 13:33:08 crc kubenswrapper[4986]: I1203 13:33:08.676203 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgqgt\" (UniqueName: \"kubernetes.io/projected/7e34b649-2740-4d76-9aff-598b66d301b7-kube-api-access-dgqgt\") pod \"7e34b649-2740-4d76-9aff-598b66d301b7\" (UID: \"7e34b649-2740-4d76-9aff-598b66d301b7\") " Dec 03 13:33:08 crc kubenswrapper[4986]: I1203 13:33:08.676224 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e34b649-2740-4d76-9aff-598b66d301b7-ovn-combined-ca-bundle\") pod \"7e34b649-2740-4d76-9aff-598b66d301b7\" (UID: \"7e34b649-2740-4d76-9aff-598b66d301b7\") " Dec 03 13:33:08 crc kubenswrapper[4986]: I1203 13:33:08.682199 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e34b649-2740-4d76-9aff-598b66d301b7-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "7e34b649-2740-4d76-9aff-598b66d301b7" (UID: "7e34b649-2740-4d76-9aff-598b66d301b7"). 
InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:33:08 crc kubenswrapper[4986]: I1203 13:33:08.683030 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e34b649-2740-4d76-9aff-598b66d301b7-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "7e34b649-2740-4d76-9aff-598b66d301b7" (UID: "7e34b649-2740-4d76-9aff-598b66d301b7"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:33:08 crc kubenswrapper[4986]: I1203 13:33:08.683717 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e34b649-2740-4d76-9aff-598b66d301b7-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "7e34b649-2740-4d76-9aff-598b66d301b7" (UID: "7e34b649-2740-4d76-9aff-598b66d301b7"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:33:08 crc kubenswrapper[4986]: I1203 13:33:08.684318 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e34b649-2740-4d76-9aff-598b66d301b7-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "7e34b649-2740-4d76-9aff-598b66d301b7" (UID: "7e34b649-2740-4d76-9aff-598b66d301b7"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:33:08 crc kubenswrapper[4986]: I1203 13:33:08.684671 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e34b649-2740-4d76-9aff-598b66d301b7-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "7e34b649-2740-4d76-9aff-598b66d301b7" (UID: "7e34b649-2740-4d76-9aff-598b66d301b7"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:33:08 crc kubenswrapper[4986]: I1203 13:33:08.684699 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e34b649-2740-4d76-9aff-598b66d301b7-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "7e34b649-2740-4d76-9aff-598b66d301b7" (UID: "7e34b649-2740-4d76-9aff-598b66d301b7"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:33:08 crc kubenswrapper[4986]: I1203 13:33:08.685399 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e34b649-2740-4d76-9aff-598b66d301b7-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "7e34b649-2740-4d76-9aff-598b66d301b7" (UID: "7e34b649-2740-4d76-9aff-598b66d301b7"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:33:08 crc kubenswrapper[4986]: I1203 13:33:08.685539 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e34b649-2740-4d76-9aff-598b66d301b7-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "7e34b649-2740-4d76-9aff-598b66d301b7" (UID: "7e34b649-2740-4d76-9aff-598b66d301b7"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:33:08 crc kubenswrapper[4986]: I1203 13:33:08.688100 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e34b649-2740-4d76-9aff-598b66d301b7-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "7e34b649-2740-4d76-9aff-598b66d301b7" (UID: "7e34b649-2740-4d76-9aff-598b66d301b7"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:33:08 crc kubenswrapper[4986]: I1203 13:33:08.688203 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e34b649-2740-4d76-9aff-598b66d301b7-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "7e34b649-2740-4d76-9aff-598b66d301b7" (UID: "7e34b649-2740-4d76-9aff-598b66d301b7"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:33:08 crc kubenswrapper[4986]: I1203 13:33:08.688705 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e34b649-2740-4d76-9aff-598b66d301b7-kube-api-access-dgqgt" (OuterVolumeSpecName: "kube-api-access-dgqgt") pod "7e34b649-2740-4d76-9aff-598b66d301b7" (UID: "7e34b649-2740-4d76-9aff-598b66d301b7"). InnerVolumeSpecName "kube-api-access-dgqgt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:33:08 crc kubenswrapper[4986]: I1203 13:33:08.701096 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e34b649-2740-4d76-9aff-598b66d301b7-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "7e34b649-2740-4d76-9aff-598b66d301b7" (UID: "7e34b649-2740-4d76-9aff-598b66d301b7"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:33:08 crc kubenswrapper[4986]: I1203 13:33:08.708374 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e34b649-2740-4d76-9aff-598b66d301b7-inventory" (OuterVolumeSpecName: "inventory") pod "7e34b649-2740-4d76-9aff-598b66d301b7" (UID: "7e34b649-2740-4d76-9aff-598b66d301b7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:33:08 crc kubenswrapper[4986]: I1203 13:33:08.722262 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e34b649-2740-4d76-9aff-598b66d301b7-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "7e34b649-2740-4d76-9aff-598b66d301b7" (UID: "7e34b649-2740-4d76-9aff-598b66d301b7"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:33:08 crc kubenswrapper[4986]: I1203 13:33:08.779917 4986 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e34b649-2740-4d76-9aff-598b66d301b7-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 13:33:08 crc kubenswrapper[4986]: I1203 13:33:08.779966 4986 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e34b649-2740-4d76-9aff-598b66d301b7-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 13:33:08 crc kubenswrapper[4986]: I1203 13:33:08.779989 4986 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7e34b649-2740-4d76-9aff-598b66d301b7-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 03 13:33:08 crc kubenswrapper[4986]: I1203 13:33:08.780009 4986 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/7e34b649-2740-4d76-9aff-598b66d301b7-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 03 13:33:08 crc kubenswrapper[4986]: I1203 13:33:08.780029 4986 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7e34b649-2740-4d76-9aff-598b66d301b7-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 03 13:33:08 crc kubenswrapper[4986]: I1203 13:33:08.780050 4986 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e34b649-2740-4d76-9aff-598b66d301b7-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 13:33:08 crc kubenswrapper[4986]: I1203 13:33:08.780070 4986 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e34b649-2740-4d76-9aff-598b66d301b7-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 13:33:08 crc kubenswrapper[4986]: I1203 13:33:08.780089 4986 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e34b649-2740-4d76-9aff-598b66d301b7-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 13:33:08 crc kubenswrapper[4986]: I1203 13:33:08.780107 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgqgt\" (UniqueName: \"kubernetes.io/projected/7e34b649-2740-4d76-9aff-598b66d301b7-kube-api-access-dgqgt\") on node \"crc\" DevicePath \"\"" Dec 03 13:33:08 crc kubenswrapper[4986]: I1203 13:33:08.780127 4986 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7e34b649-2740-4d76-9aff-598b66d301b7-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 13:33:08 crc kubenswrapper[4986]: I1203 13:33:08.780143 4986 reconciler_common.go:293] "Volume 
detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e34b649-2740-4d76-9aff-598b66d301b7-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 13:33:08 crc kubenswrapper[4986]: I1203 13:33:08.780159 4986 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7e34b649-2740-4d76-9aff-598b66d301b7-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 13:33:08 crc kubenswrapper[4986]: I1203 13:33:08.780176 4986 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7e34b649-2740-4d76-9aff-598b66d301b7-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 03 13:33:08 crc kubenswrapper[4986]: I1203 13:33:08.780193 4986 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e34b649-2740-4d76-9aff-598b66d301b7-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 13:33:09 crc kubenswrapper[4986]: I1203 13:33:09.088839 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l29sb" event={"ID":"7e34b649-2740-4d76-9aff-598b66d301b7","Type":"ContainerDied","Data":"9b961847fd3fbe5fe7d08f86e2f5e49092dfc9b8f9566ee9625dc9c149d3d70d"} Dec 03 13:33:09 crc kubenswrapper[4986]: I1203 13:33:09.089171 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b961847fd3fbe5fe7d08f86e2f5e49092dfc9b8f9566ee9625dc9c149d3d70d" Dec 03 13:33:09 crc kubenswrapper[4986]: I1203 13:33:09.088968 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l29sb" Dec 03 13:33:09 crc kubenswrapper[4986]: I1203 13:33:09.189571 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-c9nd5"] Dec 03 13:33:09 crc kubenswrapper[4986]: E1203 13:33:09.190002 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e34b649-2740-4d76-9aff-598b66d301b7" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 03 13:33:09 crc kubenswrapper[4986]: I1203 13:33:09.190017 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e34b649-2740-4d76-9aff-598b66d301b7" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 03 13:33:09 crc kubenswrapper[4986]: I1203 13:33:09.190223 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e34b649-2740-4d76-9aff-598b66d301b7" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 03 13:33:09 crc kubenswrapper[4986]: I1203 13:33:09.190939 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-c9nd5" Dec 03 13:33:09 crc kubenswrapper[4986]: I1203 13:33:09.193442 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 13:33:09 crc kubenswrapper[4986]: I1203 13:33:09.193646 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 13:33:09 crc kubenswrapper[4986]: I1203 13:33:09.194936 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Dec 03 13:33:09 crc kubenswrapper[4986]: I1203 13:33:09.195094 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jr75v" Dec 03 13:33:09 crc kubenswrapper[4986]: I1203 13:33:09.199428 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-c9nd5"] Dec 03 13:33:09 crc kubenswrapper[4986]: I1203 13:33:09.211178 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 13:33:09 crc kubenswrapper[4986]: I1203 13:33:09.291415 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff750414-499f-4652-9627-3e45a82b6cf3-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-c9nd5\" (UID: \"ff750414-499f-4652-9627-3e45a82b6cf3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-c9nd5" Dec 03 13:33:09 crc kubenswrapper[4986]: I1203 13:33:09.291476 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdphq\" (UniqueName: \"kubernetes.io/projected/ff750414-499f-4652-9627-3e45a82b6cf3-kube-api-access-vdphq\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-c9nd5\" (UID: 
\"ff750414-499f-4652-9627-3e45a82b6cf3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-c9nd5" Dec 03 13:33:09 crc kubenswrapper[4986]: I1203 13:33:09.291498 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ff750414-499f-4652-9627-3e45a82b6cf3-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-c9nd5\" (UID: \"ff750414-499f-4652-9627-3e45a82b6cf3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-c9nd5" Dec 03 13:33:09 crc kubenswrapper[4986]: I1203 13:33:09.291545 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff750414-499f-4652-9627-3e45a82b6cf3-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-c9nd5\" (UID: \"ff750414-499f-4652-9627-3e45a82b6cf3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-c9nd5" Dec 03 13:33:09 crc kubenswrapper[4986]: I1203 13:33:09.291603 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ff750414-499f-4652-9627-3e45a82b6cf3-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-c9nd5\" (UID: \"ff750414-499f-4652-9627-3e45a82b6cf3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-c9nd5" Dec 03 13:33:09 crc kubenswrapper[4986]: I1203 13:33:09.393881 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff750414-499f-4652-9627-3e45a82b6cf3-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-c9nd5\" (UID: \"ff750414-499f-4652-9627-3e45a82b6cf3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-c9nd5" Dec 03 13:33:09 crc kubenswrapper[4986]: I1203 13:33:09.393966 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-vdphq\" (UniqueName: \"kubernetes.io/projected/ff750414-499f-4652-9627-3e45a82b6cf3-kube-api-access-vdphq\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-c9nd5\" (UID: \"ff750414-499f-4652-9627-3e45a82b6cf3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-c9nd5" Dec 03 13:33:09 crc kubenswrapper[4986]: I1203 13:33:09.393994 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ff750414-499f-4652-9627-3e45a82b6cf3-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-c9nd5\" (UID: \"ff750414-499f-4652-9627-3e45a82b6cf3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-c9nd5" Dec 03 13:33:09 crc kubenswrapper[4986]: I1203 13:33:09.394060 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff750414-499f-4652-9627-3e45a82b6cf3-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-c9nd5\" (UID: \"ff750414-499f-4652-9627-3e45a82b6cf3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-c9nd5" Dec 03 13:33:09 crc kubenswrapper[4986]: I1203 13:33:09.394129 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ff750414-499f-4652-9627-3e45a82b6cf3-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-c9nd5\" (UID: \"ff750414-499f-4652-9627-3e45a82b6cf3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-c9nd5" Dec 03 13:33:09 crc kubenswrapper[4986]: I1203 13:33:09.395717 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ff750414-499f-4652-9627-3e45a82b6cf3-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-c9nd5\" (UID: \"ff750414-499f-4652-9627-3e45a82b6cf3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-c9nd5" Dec 03 13:33:09 crc 
kubenswrapper[4986]: I1203 13:33:09.399963 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ff750414-499f-4652-9627-3e45a82b6cf3-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-c9nd5\" (UID: \"ff750414-499f-4652-9627-3e45a82b6cf3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-c9nd5" Dec 03 13:33:09 crc kubenswrapper[4986]: I1203 13:33:09.400105 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff750414-499f-4652-9627-3e45a82b6cf3-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-c9nd5\" (UID: \"ff750414-499f-4652-9627-3e45a82b6cf3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-c9nd5" Dec 03 13:33:09 crc kubenswrapper[4986]: I1203 13:33:09.400194 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff750414-499f-4652-9627-3e45a82b6cf3-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-c9nd5\" (UID: \"ff750414-499f-4652-9627-3e45a82b6cf3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-c9nd5" Dec 03 13:33:09 crc kubenswrapper[4986]: I1203 13:33:09.415238 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdphq\" (UniqueName: \"kubernetes.io/projected/ff750414-499f-4652-9627-3e45a82b6cf3-kube-api-access-vdphq\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-c9nd5\" (UID: \"ff750414-499f-4652-9627-3e45a82b6cf3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-c9nd5" Dec 03 13:33:09 crc kubenswrapper[4986]: I1203 13:33:09.509922 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-c9nd5" Dec 03 13:33:10 crc kubenswrapper[4986]: I1203 13:33:10.097132 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-c9nd5"] Dec 03 13:33:11 crc kubenswrapper[4986]: I1203 13:33:11.108420 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-c9nd5" event={"ID":"ff750414-499f-4652-9627-3e45a82b6cf3","Type":"ContainerStarted","Data":"8a8bfe8520ffb28bdb78b72f04de9baf8ac5ae93f0979968acc63e36546dc96d"} Dec 03 13:33:12 crc kubenswrapper[4986]: I1203 13:33:12.118415 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-c9nd5" event={"ID":"ff750414-499f-4652-9627-3e45a82b6cf3","Type":"ContainerStarted","Data":"533c27661331253aa916c73686595249681642bce68b5ea1aa15c06aef94b579"} Dec 03 13:33:12 crc kubenswrapper[4986]: I1203 13:33:12.134233 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-c9nd5" podStartSLOduration=2.201462409 podStartE2EDuration="3.134213537s" podCreationTimestamp="2025-12-03 13:33:09 +0000 UTC" firstStartedPulling="2025-12-03 13:33:10.10710146 +0000 UTC m=+2249.573532651" lastFinishedPulling="2025-12-03 13:33:11.039852548 +0000 UTC m=+2250.506283779" observedRunningTime="2025-12-03 13:33:12.1317198 +0000 UTC m=+2251.598150991" watchObservedRunningTime="2025-12-03 13:33:12.134213537 +0000 UTC m=+2251.600644728" Dec 03 13:33:33 crc kubenswrapper[4986]: I1203 13:33:33.491720 4986 patch_prober.go:28] interesting pod/machine-config-daemon-xggpj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 13:33:33 crc kubenswrapper[4986]: I1203 13:33:33.492705 4986 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 13:33:33 crc kubenswrapper[4986]: I1203 13:33:33.492796 4986 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" Dec 03 13:33:33 crc kubenswrapper[4986]: I1203 13:33:33.494175 4986 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4069b648601fa33b27bda022b1a72851b773bff566b310c1ebd23beaecd8cde7"} pod="openshift-machine-config-operator/machine-config-daemon-xggpj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 13:33:33 crc kubenswrapper[4986]: I1203 13:33:33.494317 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" containerID="cri-o://4069b648601fa33b27bda022b1a72851b773bff566b310c1ebd23beaecd8cde7" gracePeriod=600 Dec 03 13:33:33 crc kubenswrapper[4986]: E1203 13:33:33.666056 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:33:34 crc kubenswrapper[4986]: I1203 13:33:34.348940 4986 generic.go:334] "Generic (PLEG): container finished" 
podID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerID="4069b648601fa33b27bda022b1a72851b773bff566b310c1ebd23beaecd8cde7" exitCode=0 Dec 03 13:33:34 crc kubenswrapper[4986]: I1203 13:33:34.348994 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" event={"ID":"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c","Type":"ContainerDied","Data":"4069b648601fa33b27bda022b1a72851b773bff566b310c1ebd23beaecd8cde7"} Dec 03 13:33:34 crc kubenswrapper[4986]: I1203 13:33:34.349093 4986 scope.go:117] "RemoveContainer" containerID="8e39db31bf59f71d1f97b2921b1e19e08a20fea403196bf76d65bd1aeb3c113e" Dec 03 13:33:34 crc kubenswrapper[4986]: I1203 13:33:34.349896 4986 scope.go:117] "RemoveContainer" containerID="4069b648601fa33b27bda022b1a72851b773bff566b310c1ebd23beaecd8cde7" Dec 03 13:33:34 crc kubenswrapper[4986]: E1203 13:33:34.350355 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:33:43 crc kubenswrapper[4986]: I1203 13:33:43.270175 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fpwdf"] Dec 03 13:33:43 crc kubenswrapper[4986]: I1203 13:33:43.272886 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fpwdf" Dec 03 13:33:43 crc kubenswrapper[4986]: I1203 13:33:43.281367 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fpwdf"] Dec 03 13:33:43 crc kubenswrapper[4986]: I1203 13:33:43.458932 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/220d7631-1041-43a0-8db1-d59392c4bb87-catalog-content\") pod \"redhat-operators-fpwdf\" (UID: \"220d7631-1041-43a0-8db1-d59392c4bb87\") " pod="openshift-marketplace/redhat-operators-fpwdf" Dec 03 13:33:43 crc kubenswrapper[4986]: I1203 13:33:43.459376 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wddt\" (UniqueName: \"kubernetes.io/projected/220d7631-1041-43a0-8db1-d59392c4bb87-kube-api-access-7wddt\") pod \"redhat-operators-fpwdf\" (UID: \"220d7631-1041-43a0-8db1-d59392c4bb87\") " pod="openshift-marketplace/redhat-operators-fpwdf" Dec 03 13:33:43 crc kubenswrapper[4986]: I1203 13:33:43.459633 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/220d7631-1041-43a0-8db1-d59392c4bb87-utilities\") pod \"redhat-operators-fpwdf\" (UID: \"220d7631-1041-43a0-8db1-d59392c4bb87\") " pod="openshift-marketplace/redhat-operators-fpwdf" Dec 03 13:33:43 crc kubenswrapper[4986]: I1203 13:33:43.561535 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/220d7631-1041-43a0-8db1-d59392c4bb87-utilities\") pod \"redhat-operators-fpwdf\" (UID: \"220d7631-1041-43a0-8db1-d59392c4bb87\") " pod="openshift-marketplace/redhat-operators-fpwdf" Dec 03 13:33:43 crc kubenswrapper[4986]: I1203 13:33:43.561684 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/220d7631-1041-43a0-8db1-d59392c4bb87-catalog-content\") pod \"redhat-operators-fpwdf\" (UID: \"220d7631-1041-43a0-8db1-d59392c4bb87\") " pod="openshift-marketplace/redhat-operators-fpwdf" Dec 03 13:33:43 crc kubenswrapper[4986]: I1203 13:33:43.561719 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wddt\" (UniqueName: \"kubernetes.io/projected/220d7631-1041-43a0-8db1-d59392c4bb87-kube-api-access-7wddt\") pod \"redhat-operators-fpwdf\" (UID: \"220d7631-1041-43a0-8db1-d59392c4bb87\") " pod="openshift-marketplace/redhat-operators-fpwdf" Dec 03 13:33:43 crc kubenswrapper[4986]: I1203 13:33:43.562101 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/220d7631-1041-43a0-8db1-d59392c4bb87-utilities\") pod \"redhat-operators-fpwdf\" (UID: \"220d7631-1041-43a0-8db1-d59392c4bb87\") " pod="openshift-marketplace/redhat-operators-fpwdf" Dec 03 13:33:43 crc kubenswrapper[4986]: I1203 13:33:43.562246 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/220d7631-1041-43a0-8db1-d59392c4bb87-catalog-content\") pod \"redhat-operators-fpwdf\" (UID: \"220d7631-1041-43a0-8db1-d59392c4bb87\") " pod="openshift-marketplace/redhat-operators-fpwdf" Dec 03 13:33:43 crc kubenswrapper[4986]: I1203 13:33:43.585916 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wddt\" (UniqueName: \"kubernetes.io/projected/220d7631-1041-43a0-8db1-d59392c4bb87-kube-api-access-7wddt\") pod \"redhat-operators-fpwdf\" (UID: \"220d7631-1041-43a0-8db1-d59392c4bb87\") " pod="openshift-marketplace/redhat-operators-fpwdf" Dec 03 13:33:43 crc kubenswrapper[4986]: I1203 13:33:43.599435 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fpwdf" Dec 03 13:33:44 crc kubenswrapper[4986]: I1203 13:33:44.097418 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fpwdf"] Dec 03 13:33:44 crc kubenswrapper[4986]: I1203 13:33:44.456229 4986 generic.go:334] "Generic (PLEG): container finished" podID="220d7631-1041-43a0-8db1-d59392c4bb87" containerID="a7b2ef9f9ebe9ae8548f133b31b83d92ce1ebf7d00b6a081da9f6ba27ee71801" exitCode=0 Dec 03 13:33:44 crc kubenswrapper[4986]: I1203 13:33:44.456342 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fpwdf" event={"ID":"220d7631-1041-43a0-8db1-d59392c4bb87","Type":"ContainerDied","Data":"a7b2ef9f9ebe9ae8548f133b31b83d92ce1ebf7d00b6a081da9f6ba27ee71801"} Dec 03 13:33:44 crc kubenswrapper[4986]: I1203 13:33:44.457687 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fpwdf" event={"ID":"220d7631-1041-43a0-8db1-d59392c4bb87","Type":"ContainerStarted","Data":"3d0a84e525a6ab8b3954ddc70bdc3fe3f3bb3f247ddb631beea60239c9e8b426"} Dec 03 13:33:46 crc kubenswrapper[4986]: I1203 13:33:46.479549 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fpwdf" event={"ID":"220d7631-1041-43a0-8db1-d59392c4bb87","Type":"ContainerStarted","Data":"5300312291ba843761a3ec9b67cda08a04689bb2de43149267d903b0b921d7e4"} Dec 03 13:33:48 crc kubenswrapper[4986]: I1203 13:33:48.944149 4986 scope.go:117] "RemoveContainer" containerID="4069b648601fa33b27bda022b1a72851b773bff566b310c1ebd23beaecd8cde7" Dec 03 13:33:48 crc kubenswrapper[4986]: E1203 13:33:48.944636 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:33:49 crc kubenswrapper[4986]: I1203 13:33:49.512025 4986 generic.go:334] "Generic (PLEG): container finished" podID="220d7631-1041-43a0-8db1-d59392c4bb87" containerID="5300312291ba843761a3ec9b67cda08a04689bb2de43149267d903b0b921d7e4" exitCode=0 Dec 03 13:33:49 crc kubenswrapper[4986]: I1203 13:33:49.512088 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fpwdf" event={"ID":"220d7631-1041-43a0-8db1-d59392c4bb87","Type":"ContainerDied","Data":"5300312291ba843761a3ec9b67cda08a04689bb2de43149267d903b0b921d7e4"} Dec 03 13:33:50 crc kubenswrapper[4986]: I1203 13:33:50.524605 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fpwdf" event={"ID":"220d7631-1041-43a0-8db1-d59392c4bb87","Type":"ContainerStarted","Data":"c6a483a3a80c376a31800cabbcc96035f2afc0250614488947bfc6e79df200a7"} Dec 03 13:33:50 crc kubenswrapper[4986]: I1203 13:33:50.549803 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fpwdf" podStartSLOduration=1.83631915 podStartE2EDuration="7.549779291s" podCreationTimestamp="2025-12-03 13:33:43 +0000 UTC" firstStartedPulling="2025-12-03 13:33:44.457869593 +0000 UTC m=+2283.924300784" lastFinishedPulling="2025-12-03 13:33:50.171329724 +0000 UTC m=+2289.637760925" observedRunningTime="2025-12-03 13:33:50.54127802 +0000 UTC m=+2290.007709211" watchObservedRunningTime="2025-12-03 13:33:50.549779291 +0000 UTC m=+2290.016210522" Dec 03 13:33:53 crc kubenswrapper[4986]: I1203 13:33:53.599885 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fpwdf" Dec 03 13:33:53 crc kubenswrapper[4986]: I1203 13:33:53.600261 4986 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fpwdf" Dec 03 13:33:54 crc kubenswrapper[4986]: I1203 13:33:54.659344 4986 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fpwdf" podUID="220d7631-1041-43a0-8db1-d59392c4bb87" containerName="registry-server" probeResult="failure" output=< Dec 03 13:33:54 crc kubenswrapper[4986]: timeout: failed to connect service ":50051" within 1s Dec 03 13:33:54 crc kubenswrapper[4986]: > Dec 03 13:34:00 crc kubenswrapper[4986]: I1203 13:34:00.950538 4986 scope.go:117] "RemoveContainer" containerID="4069b648601fa33b27bda022b1a72851b773bff566b310c1ebd23beaecd8cde7" Dec 03 13:34:00 crc kubenswrapper[4986]: E1203 13:34:00.951602 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:34:03 crc kubenswrapper[4986]: I1203 13:34:03.650396 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fpwdf" Dec 03 13:34:03 crc kubenswrapper[4986]: I1203 13:34:03.711194 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fpwdf" Dec 03 13:34:03 crc kubenswrapper[4986]: I1203 13:34:03.885876 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fpwdf"] Dec 03 13:34:05 crc kubenswrapper[4986]: I1203 13:34:05.660394 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fpwdf" podUID="220d7631-1041-43a0-8db1-d59392c4bb87" 
containerName="registry-server" containerID="cri-o://c6a483a3a80c376a31800cabbcc96035f2afc0250614488947bfc6e79df200a7" gracePeriod=2 Dec 03 13:34:05 crc kubenswrapper[4986]: E1203 13:34:05.926762 4986 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod220d7631_1041_43a0_8db1_d59392c4bb87.slice/crio-c6a483a3a80c376a31800cabbcc96035f2afc0250614488947bfc6e79df200a7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod220d7631_1041_43a0_8db1_d59392c4bb87.slice/crio-conmon-c6a483a3a80c376a31800cabbcc96035f2afc0250614488947bfc6e79df200a7.scope\": RecentStats: unable to find data in memory cache]" Dec 03 13:34:06 crc kubenswrapper[4986]: I1203 13:34:06.086954 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fpwdf" Dec 03 13:34:06 crc kubenswrapper[4986]: I1203 13:34:06.192993 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/220d7631-1041-43a0-8db1-d59392c4bb87-catalog-content\") pod \"220d7631-1041-43a0-8db1-d59392c4bb87\" (UID: \"220d7631-1041-43a0-8db1-d59392c4bb87\") " Dec 03 13:34:06 crc kubenswrapper[4986]: I1203 13:34:06.193189 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wddt\" (UniqueName: \"kubernetes.io/projected/220d7631-1041-43a0-8db1-d59392c4bb87-kube-api-access-7wddt\") pod \"220d7631-1041-43a0-8db1-d59392c4bb87\" (UID: \"220d7631-1041-43a0-8db1-d59392c4bb87\") " Dec 03 13:34:06 crc kubenswrapper[4986]: I1203 13:34:06.193380 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/220d7631-1041-43a0-8db1-d59392c4bb87-utilities\") pod 
\"220d7631-1041-43a0-8db1-d59392c4bb87\" (UID: \"220d7631-1041-43a0-8db1-d59392c4bb87\") " Dec 03 13:34:06 crc kubenswrapper[4986]: I1203 13:34:06.194027 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/220d7631-1041-43a0-8db1-d59392c4bb87-utilities" (OuterVolumeSpecName: "utilities") pod "220d7631-1041-43a0-8db1-d59392c4bb87" (UID: "220d7631-1041-43a0-8db1-d59392c4bb87"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:34:06 crc kubenswrapper[4986]: I1203 13:34:06.198539 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/220d7631-1041-43a0-8db1-d59392c4bb87-kube-api-access-7wddt" (OuterVolumeSpecName: "kube-api-access-7wddt") pod "220d7631-1041-43a0-8db1-d59392c4bb87" (UID: "220d7631-1041-43a0-8db1-d59392c4bb87"). InnerVolumeSpecName "kube-api-access-7wddt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:34:06 crc kubenswrapper[4986]: I1203 13:34:06.296181 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wddt\" (UniqueName: \"kubernetes.io/projected/220d7631-1041-43a0-8db1-d59392c4bb87-kube-api-access-7wddt\") on node \"crc\" DevicePath \"\"" Dec 03 13:34:06 crc kubenswrapper[4986]: I1203 13:34:06.296221 4986 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/220d7631-1041-43a0-8db1-d59392c4bb87-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 13:34:06 crc kubenswrapper[4986]: I1203 13:34:06.301023 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/220d7631-1041-43a0-8db1-d59392c4bb87-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "220d7631-1041-43a0-8db1-d59392c4bb87" (UID: "220d7631-1041-43a0-8db1-d59392c4bb87"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:34:06 crc kubenswrapper[4986]: I1203 13:34:06.397638 4986 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/220d7631-1041-43a0-8db1-d59392c4bb87-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 13:34:06 crc kubenswrapper[4986]: I1203 13:34:06.669982 4986 generic.go:334] "Generic (PLEG): container finished" podID="220d7631-1041-43a0-8db1-d59392c4bb87" containerID="c6a483a3a80c376a31800cabbcc96035f2afc0250614488947bfc6e79df200a7" exitCode=0 Dec 03 13:34:06 crc kubenswrapper[4986]: I1203 13:34:06.670018 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fpwdf" event={"ID":"220d7631-1041-43a0-8db1-d59392c4bb87","Type":"ContainerDied","Data":"c6a483a3a80c376a31800cabbcc96035f2afc0250614488947bfc6e79df200a7"} Dec 03 13:34:06 crc kubenswrapper[4986]: I1203 13:34:06.671358 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fpwdf" event={"ID":"220d7631-1041-43a0-8db1-d59392c4bb87","Type":"ContainerDied","Data":"3d0a84e525a6ab8b3954ddc70bdc3fe3f3bb3f247ddb631beea60239c9e8b426"} Dec 03 13:34:06 crc kubenswrapper[4986]: I1203 13:34:06.671384 4986 scope.go:117] "RemoveContainer" containerID="c6a483a3a80c376a31800cabbcc96035f2afc0250614488947bfc6e79df200a7" Dec 03 13:34:06 crc kubenswrapper[4986]: I1203 13:34:06.670060 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fpwdf" Dec 03 13:34:06 crc kubenswrapper[4986]: I1203 13:34:06.691907 4986 scope.go:117] "RemoveContainer" containerID="5300312291ba843761a3ec9b67cda08a04689bb2de43149267d903b0b921d7e4" Dec 03 13:34:06 crc kubenswrapper[4986]: I1203 13:34:06.704210 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fpwdf"] Dec 03 13:34:06 crc kubenswrapper[4986]: I1203 13:34:06.712271 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fpwdf"] Dec 03 13:34:06 crc kubenswrapper[4986]: I1203 13:34:06.736309 4986 scope.go:117] "RemoveContainer" containerID="a7b2ef9f9ebe9ae8548f133b31b83d92ce1ebf7d00b6a081da9f6ba27ee71801" Dec 03 13:34:06 crc kubenswrapper[4986]: I1203 13:34:06.759394 4986 scope.go:117] "RemoveContainer" containerID="c6a483a3a80c376a31800cabbcc96035f2afc0250614488947bfc6e79df200a7" Dec 03 13:34:06 crc kubenswrapper[4986]: E1203 13:34:06.759762 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6a483a3a80c376a31800cabbcc96035f2afc0250614488947bfc6e79df200a7\": container with ID starting with c6a483a3a80c376a31800cabbcc96035f2afc0250614488947bfc6e79df200a7 not found: ID does not exist" containerID="c6a483a3a80c376a31800cabbcc96035f2afc0250614488947bfc6e79df200a7" Dec 03 13:34:06 crc kubenswrapper[4986]: I1203 13:34:06.759811 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6a483a3a80c376a31800cabbcc96035f2afc0250614488947bfc6e79df200a7"} err="failed to get container status \"c6a483a3a80c376a31800cabbcc96035f2afc0250614488947bfc6e79df200a7\": rpc error: code = NotFound desc = could not find container \"c6a483a3a80c376a31800cabbcc96035f2afc0250614488947bfc6e79df200a7\": container with ID starting with c6a483a3a80c376a31800cabbcc96035f2afc0250614488947bfc6e79df200a7 not found: ID does 
not exist" Dec 03 13:34:06 crc kubenswrapper[4986]: I1203 13:34:06.759841 4986 scope.go:117] "RemoveContainer" containerID="5300312291ba843761a3ec9b67cda08a04689bb2de43149267d903b0b921d7e4" Dec 03 13:34:06 crc kubenswrapper[4986]: E1203 13:34:06.760140 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5300312291ba843761a3ec9b67cda08a04689bb2de43149267d903b0b921d7e4\": container with ID starting with 5300312291ba843761a3ec9b67cda08a04689bb2de43149267d903b0b921d7e4 not found: ID does not exist" containerID="5300312291ba843761a3ec9b67cda08a04689bb2de43149267d903b0b921d7e4" Dec 03 13:34:06 crc kubenswrapper[4986]: I1203 13:34:06.760165 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5300312291ba843761a3ec9b67cda08a04689bb2de43149267d903b0b921d7e4"} err="failed to get container status \"5300312291ba843761a3ec9b67cda08a04689bb2de43149267d903b0b921d7e4\": rpc error: code = NotFound desc = could not find container \"5300312291ba843761a3ec9b67cda08a04689bb2de43149267d903b0b921d7e4\": container with ID starting with 5300312291ba843761a3ec9b67cda08a04689bb2de43149267d903b0b921d7e4 not found: ID does not exist" Dec 03 13:34:06 crc kubenswrapper[4986]: I1203 13:34:06.760179 4986 scope.go:117] "RemoveContainer" containerID="a7b2ef9f9ebe9ae8548f133b31b83d92ce1ebf7d00b6a081da9f6ba27ee71801" Dec 03 13:34:06 crc kubenswrapper[4986]: E1203 13:34:06.760420 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7b2ef9f9ebe9ae8548f133b31b83d92ce1ebf7d00b6a081da9f6ba27ee71801\": container with ID starting with a7b2ef9f9ebe9ae8548f133b31b83d92ce1ebf7d00b6a081da9f6ba27ee71801 not found: ID does not exist" containerID="a7b2ef9f9ebe9ae8548f133b31b83d92ce1ebf7d00b6a081da9f6ba27ee71801" Dec 03 13:34:06 crc kubenswrapper[4986]: I1203 13:34:06.760456 4986 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7b2ef9f9ebe9ae8548f133b31b83d92ce1ebf7d00b6a081da9f6ba27ee71801"} err="failed to get container status \"a7b2ef9f9ebe9ae8548f133b31b83d92ce1ebf7d00b6a081da9f6ba27ee71801\": rpc error: code = NotFound desc = could not find container \"a7b2ef9f9ebe9ae8548f133b31b83d92ce1ebf7d00b6a081da9f6ba27ee71801\": container with ID starting with a7b2ef9f9ebe9ae8548f133b31b83d92ce1ebf7d00b6a081da9f6ba27ee71801 not found: ID does not exist" Dec 03 13:34:06 crc kubenswrapper[4986]: I1203 13:34:06.955385 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="220d7631-1041-43a0-8db1-d59392c4bb87" path="/var/lib/kubelet/pods/220d7631-1041-43a0-8db1-d59392c4bb87/volumes" Dec 03 13:34:15 crc kubenswrapper[4986]: I1203 13:34:15.751398 4986 generic.go:334] "Generic (PLEG): container finished" podID="ff750414-499f-4652-9627-3e45a82b6cf3" containerID="533c27661331253aa916c73686595249681642bce68b5ea1aa15c06aef94b579" exitCode=0 Dec 03 13:34:15 crc kubenswrapper[4986]: I1203 13:34:15.751479 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-c9nd5" event={"ID":"ff750414-499f-4652-9627-3e45a82b6cf3","Type":"ContainerDied","Data":"533c27661331253aa916c73686595249681642bce68b5ea1aa15c06aef94b579"} Dec 03 13:34:15 crc kubenswrapper[4986]: I1203 13:34:15.943747 4986 scope.go:117] "RemoveContainer" containerID="4069b648601fa33b27bda022b1a72851b773bff566b310c1ebd23beaecd8cde7" Dec 03 13:34:15 crc kubenswrapper[4986]: E1203 13:34:15.943997 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" 
podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:34:17 crc kubenswrapper[4986]: I1203 13:34:17.202773 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-c9nd5" Dec 03 13:34:17 crc kubenswrapper[4986]: I1203 13:34:17.310562 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff750414-499f-4652-9627-3e45a82b6cf3-inventory\") pod \"ff750414-499f-4652-9627-3e45a82b6cf3\" (UID: \"ff750414-499f-4652-9627-3e45a82b6cf3\") " Dec 03 13:34:17 crc kubenswrapper[4986]: I1203 13:34:17.310668 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ff750414-499f-4652-9627-3e45a82b6cf3-ssh-key\") pod \"ff750414-499f-4652-9627-3e45a82b6cf3\" (UID: \"ff750414-499f-4652-9627-3e45a82b6cf3\") " Dec 03 13:34:17 crc kubenswrapper[4986]: I1203 13:34:17.310694 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff750414-499f-4652-9627-3e45a82b6cf3-ovn-combined-ca-bundle\") pod \"ff750414-499f-4652-9627-3e45a82b6cf3\" (UID: \"ff750414-499f-4652-9627-3e45a82b6cf3\") " Dec 03 13:34:17 crc kubenswrapper[4986]: I1203 13:34:17.310768 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ff750414-499f-4652-9627-3e45a82b6cf3-ovncontroller-config-0\") pod \"ff750414-499f-4652-9627-3e45a82b6cf3\" (UID: \"ff750414-499f-4652-9627-3e45a82b6cf3\") " Dec 03 13:34:17 crc kubenswrapper[4986]: I1203 13:34:17.310851 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdphq\" (UniqueName: \"kubernetes.io/projected/ff750414-499f-4652-9627-3e45a82b6cf3-kube-api-access-vdphq\") pod \"ff750414-499f-4652-9627-3e45a82b6cf3\" (UID: 
\"ff750414-499f-4652-9627-3e45a82b6cf3\") " Dec 03 13:34:17 crc kubenswrapper[4986]: I1203 13:34:17.319904 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff750414-499f-4652-9627-3e45a82b6cf3-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "ff750414-499f-4652-9627-3e45a82b6cf3" (UID: "ff750414-499f-4652-9627-3e45a82b6cf3"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:34:17 crc kubenswrapper[4986]: I1203 13:34:17.333027 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff750414-499f-4652-9627-3e45a82b6cf3-kube-api-access-vdphq" (OuterVolumeSpecName: "kube-api-access-vdphq") pod "ff750414-499f-4652-9627-3e45a82b6cf3" (UID: "ff750414-499f-4652-9627-3e45a82b6cf3"). InnerVolumeSpecName "kube-api-access-vdphq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:34:17 crc kubenswrapper[4986]: I1203 13:34:17.345827 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff750414-499f-4652-9627-3e45a82b6cf3-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "ff750414-499f-4652-9627-3e45a82b6cf3" (UID: "ff750414-499f-4652-9627-3e45a82b6cf3"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:34:17 crc kubenswrapper[4986]: I1203 13:34:17.350484 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff750414-499f-4652-9627-3e45a82b6cf3-inventory" (OuterVolumeSpecName: "inventory") pod "ff750414-499f-4652-9627-3e45a82b6cf3" (UID: "ff750414-499f-4652-9627-3e45a82b6cf3"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:34:17 crc kubenswrapper[4986]: I1203 13:34:17.375139 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff750414-499f-4652-9627-3e45a82b6cf3-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ff750414-499f-4652-9627-3e45a82b6cf3" (UID: "ff750414-499f-4652-9627-3e45a82b6cf3"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:34:17 crc kubenswrapper[4986]: I1203 13:34:17.413659 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdphq\" (UniqueName: \"kubernetes.io/projected/ff750414-499f-4652-9627-3e45a82b6cf3-kube-api-access-vdphq\") on node \"crc\" DevicePath \"\"" Dec 03 13:34:17 crc kubenswrapper[4986]: I1203 13:34:17.413867 4986 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff750414-499f-4652-9627-3e45a82b6cf3-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 13:34:17 crc kubenswrapper[4986]: I1203 13:34:17.414000 4986 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ff750414-499f-4652-9627-3e45a82b6cf3-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 13:34:17 crc kubenswrapper[4986]: I1203 13:34:17.414087 4986 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff750414-499f-4652-9627-3e45a82b6cf3-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 13:34:17 crc kubenswrapper[4986]: I1203 13:34:17.414161 4986 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ff750414-499f-4652-9627-3e45a82b6cf3-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Dec 03 13:34:17 crc kubenswrapper[4986]: I1203 13:34:17.772340 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-c9nd5" event={"ID":"ff750414-499f-4652-9627-3e45a82b6cf3","Type":"ContainerDied","Data":"8a8bfe8520ffb28bdb78b72f04de9baf8ac5ae93f0979968acc63e36546dc96d"} Dec 03 13:34:17 crc kubenswrapper[4986]: I1203 13:34:17.772387 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a8bfe8520ffb28bdb78b72f04de9baf8ac5ae93f0979968acc63e36546dc96d" Dec 03 13:34:17 crc kubenswrapper[4986]: I1203 13:34:17.772473 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-c9nd5" Dec 03 13:34:17 crc kubenswrapper[4986]: I1203 13:34:17.888690 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-s5hc5"] Dec 03 13:34:17 crc kubenswrapper[4986]: E1203 13:34:17.889066 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="220d7631-1041-43a0-8db1-d59392c4bb87" containerName="extract-utilities" Dec 03 13:34:17 crc kubenswrapper[4986]: I1203 13:34:17.889082 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="220d7631-1041-43a0-8db1-d59392c4bb87" containerName="extract-utilities" Dec 03 13:34:17 crc kubenswrapper[4986]: E1203 13:34:17.889093 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="220d7631-1041-43a0-8db1-d59392c4bb87" containerName="registry-server" Dec 03 13:34:17 crc kubenswrapper[4986]: I1203 13:34:17.889099 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="220d7631-1041-43a0-8db1-d59392c4bb87" containerName="registry-server" Dec 03 13:34:17 crc kubenswrapper[4986]: E1203 13:34:17.889107 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="220d7631-1041-43a0-8db1-d59392c4bb87" containerName="extract-content" Dec 03 13:34:17 crc kubenswrapper[4986]: I1203 13:34:17.889113 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="220d7631-1041-43a0-8db1-d59392c4bb87" 
containerName="extract-content" Dec 03 13:34:17 crc kubenswrapper[4986]: E1203 13:34:17.889138 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff750414-499f-4652-9627-3e45a82b6cf3" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 03 13:34:17 crc kubenswrapper[4986]: I1203 13:34:17.889144 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff750414-499f-4652-9627-3e45a82b6cf3" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 03 13:34:17 crc kubenswrapper[4986]: I1203 13:34:17.905434 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="220d7631-1041-43a0-8db1-d59392c4bb87" containerName="registry-server" Dec 03 13:34:17 crc kubenswrapper[4986]: I1203 13:34:17.905475 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff750414-499f-4652-9627-3e45a82b6cf3" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 03 13:34:17 crc kubenswrapper[4986]: I1203 13:34:17.906337 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-s5hc5" Dec 03 13:34:17 crc kubenswrapper[4986]: I1203 13:34:17.910081 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Dec 03 13:34:17 crc kubenswrapper[4986]: I1203 13:34:17.910350 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 13:34:17 crc kubenswrapper[4986]: I1203 13:34:17.910520 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jr75v" Dec 03 13:34:17 crc kubenswrapper[4986]: I1203 13:34:17.910628 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 13:34:17 crc kubenswrapper[4986]: I1203 13:34:17.910735 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Dec 03 13:34:17 crc kubenswrapper[4986]: I1203 13:34:17.910853 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 13:34:17 crc kubenswrapper[4986]: I1203 13:34:17.926945 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-s5hc5"] Dec 03 13:34:18 crc kubenswrapper[4986]: I1203 13:34:18.027804 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2a01e184-500d-44fe-9561-d971fb030c77-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-s5hc5\" (UID: \"2a01e184-500d-44fe-9561-d971fb030c77\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-s5hc5" Dec 03 13:34:18 crc kubenswrapper[4986]: I1203 13:34:18.027888 4986 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2j76\" (UniqueName: \"kubernetes.io/projected/2a01e184-500d-44fe-9561-d971fb030c77-kube-api-access-v2j76\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-s5hc5\" (UID: \"2a01e184-500d-44fe-9561-d971fb030c77\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-s5hc5" Dec 03 13:34:18 crc kubenswrapper[4986]: I1203 13:34:18.027936 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2a01e184-500d-44fe-9561-d971fb030c77-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-s5hc5\" (UID: \"2a01e184-500d-44fe-9561-d971fb030c77\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-s5hc5" Dec 03 13:34:18 crc kubenswrapper[4986]: I1203 13:34:18.028038 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2a01e184-500d-44fe-9561-d971fb030c77-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-s5hc5\" (UID: \"2a01e184-500d-44fe-9561-d971fb030c77\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-s5hc5" Dec 03 13:34:18 crc kubenswrapper[4986]: I1203 13:34:18.028070 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2a01e184-500d-44fe-9561-d971fb030c77-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-s5hc5\" (UID: \"2a01e184-500d-44fe-9561-d971fb030c77\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-s5hc5" Dec 03 13:34:18 crc kubenswrapper[4986]: I1203 13:34:18.028318 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/2a01e184-500d-44fe-9561-d971fb030c77-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-s5hc5\" (UID: \"2a01e184-500d-44fe-9561-d971fb030c77\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-s5hc5" Dec 03 13:34:18 crc kubenswrapper[4986]: I1203 13:34:18.129989 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2a01e184-500d-44fe-9561-d971fb030c77-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-s5hc5\" (UID: \"2a01e184-500d-44fe-9561-d971fb030c77\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-s5hc5" Dec 03 13:34:18 crc kubenswrapper[4986]: I1203 13:34:18.130396 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2j76\" (UniqueName: \"kubernetes.io/projected/2a01e184-500d-44fe-9561-d971fb030c77-kube-api-access-v2j76\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-s5hc5\" (UID: \"2a01e184-500d-44fe-9561-d971fb030c77\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-s5hc5" Dec 03 13:34:18 crc kubenswrapper[4986]: I1203 13:34:18.130428 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2a01e184-500d-44fe-9561-d971fb030c77-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-s5hc5\" (UID: \"2a01e184-500d-44fe-9561-d971fb030c77\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-s5hc5" Dec 03 13:34:18 crc kubenswrapper[4986]: I1203 13:34:18.130492 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2a01e184-500d-44fe-9561-d971fb030c77-ssh-key\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-s5hc5\" (UID: \"2a01e184-500d-44fe-9561-d971fb030c77\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-s5hc5" Dec 03 13:34:18 crc kubenswrapper[4986]: I1203 13:34:18.130524 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2a01e184-500d-44fe-9561-d971fb030c77-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-s5hc5\" (UID: \"2a01e184-500d-44fe-9561-d971fb030c77\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-s5hc5" Dec 03 13:34:18 crc kubenswrapper[4986]: I1203 13:34:18.130581 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a01e184-500d-44fe-9561-d971fb030c77-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-s5hc5\" (UID: \"2a01e184-500d-44fe-9561-d971fb030c77\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-s5hc5" Dec 03 13:34:18 crc kubenswrapper[4986]: I1203 13:34:18.136151 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2a01e184-500d-44fe-9561-d971fb030c77-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-s5hc5\" (UID: \"2a01e184-500d-44fe-9561-d971fb030c77\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-s5hc5" Dec 03 13:34:18 crc kubenswrapper[4986]: I1203 13:34:18.136211 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2a01e184-500d-44fe-9561-d971fb030c77-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-s5hc5\" (UID: 
\"2a01e184-500d-44fe-9561-d971fb030c77\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-s5hc5" Dec 03 13:34:18 crc kubenswrapper[4986]: I1203 13:34:18.136447 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2a01e184-500d-44fe-9561-d971fb030c77-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-s5hc5\" (UID: \"2a01e184-500d-44fe-9561-d971fb030c77\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-s5hc5" Dec 03 13:34:18 crc kubenswrapper[4986]: I1203 13:34:18.136974 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a01e184-500d-44fe-9561-d971fb030c77-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-s5hc5\" (UID: \"2a01e184-500d-44fe-9561-d971fb030c77\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-s5hc5" Dec 03 13:34:18 crc kubenswrapper[4986]: I1203 13:34:18.137253 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2a01e184-500d-44fe-9561-d971fb030c77-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-s5hc5\" (UID: \"2a01e184-500d-44fe-9561-d971fb030c77\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-s5hc5" Dec 03 13:34:18 crc kubenswrapper[4986]: I1203 13:34:18.152122 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2j76\" (UniqueName: \"kubernetes.io/projected/2a01e184-500d-44fe-9561-d971fb030c77-kube-api-access-v2j76\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-s5hc5\" (UID: \"2a01e184-500d-44fe-9561-d971fb030c77\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-s5hc5" Dec 03 13:34:18 crc kubenswrapper[4986]: I1203 
13:34:18.226486 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-s5hc5" Dec 03 13:34:18 crc kubenswrapper[4986]: I1203 13:34:18.781561 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-s5hc5"] Dec 03 13:34:19 crc kubenswrapper[4986]: I1203 13:34:19.792952 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-s5hc5" event={"ID":"2a01e184-500d-44fe-9561-d971fb030c77","Type":"ContainerStarted","Data":"4c1bde11eee56a0b2f2658034af3268c2424ffa2803e48225f70b2aa4cdba553"} Dec 03 13:34:19 crc kubenswrapper[4986]: I1203 13:34:19.793210 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-s5hc5" event={"ID":"2a01e184-500d-44fe-9561-d971fb030c77","Type":"ContainerStarted","Data":"00d04596bd7028870216506acf87199321c948f1f4afa217cc745b0f2e185237"} Dec 03 13:34:19 crc kubenswrapper[4986]: I1203 13:34:19.812169 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-s5hc5" podStartSLOduration=2.264828507 podStartE2EDuration="2.812149626s" podCreationTimestamp="2025-12-03 13:34:17 +0000 UTC" firstStartedPulling="2025-12-03 13:34:18.791104944 +0000 UTC m=+2318.257536135" lastFinishedPulling="2025-12-03 13:34:19.338426063 +0000 UTC m=+2318.804857254" observedRunningTime="2025-12-03 13:34:19.806583715 +0000 UTC m=+2319.273014916" watchObservedRunningTime="2025-12-03 13:34:19.812149626 +0000 UTC m=+2319.278580827" Dec 03 13:34:29 crc kubenswrapper[4986]: I1203 13:34:29.943721 4986 scope.go:117] "RemoveContainer" containerID="4069b648601fa33b27bda022b1a72851b773bff566b310c1ebd23beaecd8cde7" Dec 03 13:34:29 crc kubenswrapper[4986]: E1203 13:34:29.944432 4986 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:34:43 crc kubenswrapper[4986]: I1203 13:34:43.944566 4986 scope.go:117] "RemoveContainer" containerID="4069b648601fa33b27bda022b1a72851b773bff566b310c1ebd23beaecd8cde7" Dec 03 13:34:43 crc kubenswrapper[4986]: E1203 13:34:43.945867 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:34:54 crc kubenswrapper[4986]: I1203 13:34:54.944269 4986 scope.go:117] "RemoveContainer" containerID="4069b648601fa33b27bda022b1a72851b773bff566b310c1ebd23beaecd8cde7" Dec 03 13:34:54 crc kubenswrapper[4986]: E1203 13:34:54.945023 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:35:05 crc kubenswrapper[4986]: I1203 13:35:05.943912 4986 scope.go:117] "RemoveContainer" containerID="4069b648601fa33b27bda022b1a72851b773bff566b310c1ebd23beaecd8cde7" Dec 03 13:35:05 crc kubenswrapper[4986]: E1203 13:35:05.945369 4986 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:35:15 crc kubenswrapper[4986]: I1203 13:35:15.349272 4986 generic.go:334] "Generic (PLEG): container finished" podID="2a01e184-500d-44fe-9561-d971fb030c77" containerID="4c1bde11eee56a0b2f2658034af3268c2424ffa2803e48225f70b2aa4cdba553" exitCode=0 Dec 03 13:35:15 crc kubenswrapper[4986]: I1203 13:35:15.349348 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-s5hc5" event={"ID":"2a01e184-500d-44fe-9561-d971fb030c77","Type":"ContainerDied","Data":"4c1bde11eee56a0b2f2658034af3268c2424ffa2803e48225f70b2aa4cdba553"} Dec 03 13:35:16 crc kubenswrapper[4986]: I1203 13:35:16.918009 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-s5hc5" Dec 03 13:35:17 crc kubenswrapper[4986]: I1203 13:35:17.015103 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2a01e184-500d-44fe-9561-d971fb030c77-nova-metadata-neutron-config-0\") pod \"2a01e184-500d-44fe-9561-d971fb030c77\" (UID: \"2a01e184-500d-44fe-9561-d971fb030c77\") " Dec 03 13:35:17 crc kubenswrapper[4986]: I1203 13:35:17.015159 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2a01e184-500d-44fe-9561-d971fb030c77-ssh-key\") pod \"2a01e184-500d-44fe-9561-d971fb030c77\" (UID: \"2a01e184-500d-44fe-9561-d971fb030c77\") " Dec 03 13:35:17 crc kubenswrapper[4986]: I1203 13:35:17.015194 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2j76\" (UniqueName: \"kubernetes.io/projected/2a01e184-500d-44fe-9561-d971fb030c77-kube-api-access-v2j76\") pod \"2a01e184-500d-44fe-9561-d971fb030c77\" (UID: \"2a01e184-500d-44fe-9561-d971fb030c77\") " Dec 03 13:35:17 crc kubenswrapper[4986]: I1203 13:35:17.015243 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a01e184-500d-44fe-9561-d971fb030c77-neutron-metadata-combined-ca-bundle\") pod \"2a01e184-500d-44fe-9561-d971fb030c77\" (UID: \"2a01e184-500d-44fe-9561-d971fb030c77\") " Dec 03 13:35:17 crc kubenswrapper[4986]: I1203 13:35:17.015308 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2a01e184-500d-44fe-9561-d971fb030c77-inventory\") pod \"2a01e184-500d-44fe-9561-d971fb030c77\" (UID: \"2a01e184-500d-44fe-9561-d971fb030c77\") " Dec 03 13:35:17 crc kubenswrapper[4986]: I1203 13:35:17.015365 4986 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2a01e184-500d-44fe-9561-d971fb030c77-neutron-ovn-metadata-agent-neutron-config-0\") pod \"2a01e184-500d-44fe-9561-d971fb030c77\" (UID: \"2a01e184-500d-44fe-9561-d971fb030c77\") " Dec 03 13:35:17 crc kubenswrapper[4986]: I1203 13:35:17.022666 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a01e184-500d-44fe-9561-d971fb030c77-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "2a01e184-500d-44fe-9561-d971fb030c77" (UID: "2a01e184-500d-44fe-9561-d971fb030c77"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:35:17 crc kubenswrapper[4986]: I1203 13:35:17.027623 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a01e184-500d-44fe-9561-d971fb030c77-kube-api-access-v2j76" (OuterVolumeSpecName: "kube-api-access-v2j76") pod "2a01e184-500d-44fe-9561-d971fb030c77" (UID: "2a01e184-500d-44fe-9561-d971fb030c77"). InnerVolumeSpecName "kube-api-access-v2j76". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:35:17 crc kubenswrapper[4986]: I1203 13:35:17.044343 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a01e184-500d-44fe-9561-d971fb030c77-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2a01e184-500d-44fe-9561-d971fb030c77" (UID: "2a01e184-500d-44fe-9561-d971fb030c77"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:35:17 crc kubenswrapper[4986]: I1203 13:35:17.047847 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a01e184-500d-44fe-9561-d971fb030c77-inventory" (OuterVolumeSpecName: "inventory") pod "2a01e184-500d-44fe-9561-d971fb030c77" (UID: "2a01e184-500d-44fe-9561-d971fb030c77"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:35:17 crc kubenswrapper[4986]: I1203 13:35:17.062924 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a01e184-500d-44fe-9561-d971fb030c77-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "2a01e184-500d-44fe-9561-d971fb030c77" (UID: "2a01e184-500d-44fe-9561-d971fb030c77"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:35:17 crc kubenswrapper[4986]: I1203 13:35:17.066572 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a01e184-500d-44fe-9561-d971fb030c77-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "2a01e184-500d-44fe-9561-d971fb030c77" (UID: "2a01e184-500d-44fe-9561-d971fb030c77"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:35:17 crc kubenswrapper[4986]: I1203 13:35:17.118420 4986 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2a01e184-500d-44fe-9561-d971fb030c77-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 03 13:35:17 crc kubenswrapper[4986]: I1203 13:35:17.118471 4986 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2a01e184-500d-44fe-9561-d971fb030c77-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 13:35:17 crc kubenswrapper[4986]: I1203 13:35:17.118482 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2j76\" (UniqueName: \"kubernetes.io/projected/2a01e184-500d-44fe-9561-d971fb030c77-kube-api-access-v2j76\") on node \"crc\" DevicePath \"\"" Dec 03 13:35:17 crc kubenswrapper[4986]: I1203 13:35:17.118497 4986 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a01e184-500d-44fe-9561-d971fb030c77-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 13:35:17 crc kubenswrapper[4986]: I1203 13:35:17.118508 4986 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2a01e184-500d-44fe-9561-d971fb030c77-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 13:35:17 crc kubenswrapper[4986]: I1203 13:35:17.118524 4986 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2a01e184-500d-44fe-9561-d971fb030c77-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 03 13:35:17 crc kubenswrapper[4986]: I1203 13:35:17.374883 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-s5hc5" 
event={"ID":"2a01e184-500d-44fe-9561-d971fb030c77","Type":"ContainerDied","Data":"00d04596bd7028870216506acf87199321c948f1f4afa217cc745b0f2e185237"} Dec 03 13:35:17 crc kubenswrapper[4986]: I1203 13:35:17.375500 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00d04596bd7028870216506acf87199321c948f1f4afa217cc745b0f2e185237" Dec 03 13:35:17 crc kubenswrapper[4986]: I1203 13:35:17.375125 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-s5hc5" Dec 03 13:35:17 crc kubenswrapper[4986]: I1203 13:35:17.498687 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fdb9l"] Dec 03 13:35:17 crc kubenswrapper[4986]: E1203 13:35:17.499069 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a01e184-500d-44fe-9561-d971fb030c77" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 03 13:35:17 crc kubenswrapper[4986]: I1203 13:35:17.499085 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a01e184-500d-44fe-9561-d971fb030c77" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 03 13:35:17 crc kubenswrapper[4986]: I1203 13:35:17.499306 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a01e184-500d-44fe-9561-d971fb030c77" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 03 13:35:17 crc kubenswrapper[4986]: I1203 13:35:17.499892 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fdb9l" Dec 03 13:35:17 crc kubenswrapper[4986]: I1203 13:35:17.501792 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 13:35:17 crc kubenswrapper[4986]: I1203 13:35:17.503555 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jr75v" Dec 03 13:35:17 crc kubenswrapper[4986]: I1203 13:35:17.503704 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 13:35:17 crc kubenswrapper[4986]: I1203 13:35:17.503802 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 13:35:17 crc kubenswrapper[4986]: I1203 13:35:17.503869 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Dec 03 13:35:17 crc kubenswrapper[4986]: I1203 13:35:17.518110 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fdb9l"] Dec 03 13:35:17 crc kubenswrapper[4986]: I1203 13:35:17.625980 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38870d92-6fb1-40ac-8763-a8c8bfbbdd77-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fdb9l\" (UID: \"38870d92-6fb1-40ac-8763-a8c8bfbbdd77\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fdb9l" Dec 03 13:35:17 crc kubenswrapper[4986]: I1203 13:35:17.626079 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/38870d92-6fb1-40ac-8763-a8c8bfbbdd77-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fdb9l\" (UID: \"38870d92-6fb1-40ac-8763-a8c8bfbbdd77\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fdb9l" Dec 03 13:35:17 crc kubenswrapper[4986]: I1203 13:35:17.626169 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/38870d92-6fb1-40ac-8763-a8c8bfbbdd77-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fdb9l\" (UID: \"38870d92-6fb1-40ac-8763-a8c8bfbbdd77\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fdb9l" Dec 03 13:35:17 crc kubenswrapper[4986]: I1203 13:35:17.626239 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38870d92-6fb1-40ac-8763-a8c8bfbbdd77-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fdb9l\" (UID: \"38870d92-6fb1-40ac-8763-a8c8bfbbdd77\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fdb9l" Dec 03 13:35:17 crc kubenswrapper[4986]: I1203 13:35:17.626312 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nt5r\" (UniqueName: \"kubernetes.io/projected/38870d92-6fb1-40ac-8763-a8c8bfbbdd77-kube-api-access-2nt5r\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fdb9l\" (UID: \"38870d92-6fb1-40ac-8763-a8c8bfbbdd77\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fdb9l" Dec 03 13:35:17 crc kubenswrapper[4986]: I1203 13:35:17.727764 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/38870d92-6fb1-40ac-8763-a8c8bfbbdd77-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fdb9l\" (UID: \"38870d92-6fb1-40ac-8763-a8c8bfbbdd77\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fdb9l" Dec 03 13:35:17 crc kubenswrapper[4986]: I1203 13:35:17.727849 4986 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38870d92-6fb1-40ac-8763-a8c8bfbbdd77-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fdb9l\" (UID: \"38870d92-6fb1-40ac-8763-a8c8bfbbdd77\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fdb9l" Dec 03 13:35:17 crc kubenswrapper[4986]: I1203 13:35:17.727884 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nt5r\" (UniqueName: \"kubernetes.io/projected/38870d92-6fb1-40ac-8763-a8c8bfbbdd77-kube-api-access-2nt5r\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fdb9l\" (UID: \"38870d92-6fb1-40ac-8763-a8c8bfbbdd77\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fdb9l" Dec 03 13:35:17 crc kubenswrapper[4986]: I1203 13:35:17.727951 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38870d92-6fb1-40ac-8763-a8c8bfbbdd77-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fdb9l\" (UID: \"38870d92-6fb1-40ac-8763-a8c8bfbbdd77\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fdb9l" Dec 03 13:35:17 crc kubenswrapper[4986]: I1203 13:35:17.727999 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/38870d92-6fb1-40ac-8763-a8c8bfbbdd77-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fdb9l\" (UID: \"38870d92-6fb1-40ac-8763-a8c8bfbbdd77\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fdb9l" Dec 03 13:35:17 crc kubenswrapper[4986]: I1203 13:35:17.734052 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38870d92-6fb1-40ac-8763-a8c8bfbbdd77-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fdb9l\" (UID: \"38870d92-6fb1-40ac-8763-a8c8bfbbdd77\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fdb9l" Dec 03 13:35:17 crc kubenswrapper[4986]: I1203 13:35:17.734897 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38870d92-6fb1-40ac-8763-a8c8bfbbdd77-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fdb9l\" (UID: \"38870d92-6fb1-40ac-8763-a8c8bfbbdd77\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fdb9l" Dec 03 13:35:17 crc kubenswrapper[4986]: I1203 13:35:17.735084 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/38870d92-6fb1-40ac-8763-a8c8bfbbdd77-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fdb9l\" (UID: \"38870d92-6fb1-40ac-8763-a8c8bfbbdd77\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fdb9l" Dec 03 13:35:17 crc kubenswrapper[4986]: I1203 13:35:17.735673 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/38870d92-6fb1-40ac-8763-a8c8bfbbdd77-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fdb9l\" (UID: \"38870d92-6fb1-40ac-8763-a8c8bfbbdd77\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fdb9l" Dec 03 13:35:17 crc kubenswrapper[4986]: I1203 13:35:17.749619 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nt5r\" (UniqueName: \"kubernetes.io/projected/38870d92-6fb1-40ac-8763-a8c8bfbbdd77-kube-api-access-2nt5r\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fdb9l\" (UID: \"38870d92-6fb1-40ac-8763-a8c8bfbbdd77\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fdb9l" Dec 03 13:35:17 crc kubenswrapper[4986]: I1203 13:35:17.831143 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fdb9l" Dec 03 13:35:18 crc kubenswrapper[4986]: I1203 13:35:18.354547 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fdb9l"] Dec 03 13:35:18 crc kubenswrapper[4986]: I1203 13:35:18.382778 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fdb9l" event={"ID":"38870d92-6fb1-40ac-8763-a8c8bfbbdd77","Type":"ContainerStarted","Data":"82ab5d5776e56d08559cd6e487a8fb09ccc4d38070f9b252b8191ebb0b9e941d"} Dec 03 13:35:19 crc kubenswrapper[4986]: I1203 13:35:19.394270 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fdb9l" event={"ID":"38870d92-6fb1-40ac-8763-a8c8bfbbdd77","Type":"ContainerStarted","Data":"9312bacb66a1d63db07dbc3184c5a87ab8e8b7c50e6123b080eaf69553bfdda7"} Dec 03 13:35:20 crc kubenswrapper[4986]: I1203 13:35:20.950426 4986 scope.go:117] "RemoveContainer" containerID="4069b648601fa33b27bda022b1a72851b773bff566b310c1ebd23beaecd8cde7" Dec 03 13:35:20 crc kubenswrapper[4986]: E1203 13:35:20.950944 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:35:35 crc kubenswrapper[4986]: I1203 13:35:35.943745 4986 scope.go:117] "RemoveContainer" containerID="4069b648601fa33b27bda022b1a72851b773bff566b310c1ebd23beaecd8cde7" Dec 03 13:35:35 crc kubenswrapper[4986]: E1203 13:35:35.945789 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:35:47 crc kubenswrapper[4986]: I1203 13:35:47.943394 4986 scope.go:117] "RemoveContainer" containerID="4069b648601fa33b27bda022b1a72851b773bff566b310c1ebd23beaecd8cde7" Dec 03 13:35:47 crc kubenswrapper[4986]: E1203 13:35:47.944017 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:35:58 crc kubenswrapper[4986]: I1203 13:35:58.944152 4986 scope.go:117] "RemoveContainer" containerID="4069b648601fa33b27bda022b1a72851b773bff566b310c1ebd23beaecd8cde7" Dec 03 13:35:58 crc kubenswrapper[4986]: E1203 13:35:58.945215 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:36:11 crc kubenswrapper[4986]: I1203 13:36:11.943646 4986 scope.go:117] "RemoveContainer" containerID="4069b648601fa33b27bda022b1a72851b773bff566b310c1ebd23beaecd8cde7" Dec 03 13:36:11 crc kubenswrapper[4986]: E1203 13:36:11.944474 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:36:23 crc kubenswrapper[4986]: I1203 13:36:23.943463 4986 scope.go:117] "RemoveContainer" containerID="4069b648601fa33b27bda022b1a72851b773bff566b310c1ebd23beaecd8cde7" Dec 03 13:36:23 crc kubenswrapper[4986]: E1203 13:36:23.944148 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:36:36 crc kubenswrapper[4986]: I1203 13:36:36.944204 4986 scope.go:117] "RemoveContainer" containerID="4069b648601fa33b27bda022b1a72851b773bff566b310c1ebd23beaecd8cde7" Dec 03 13:36:36 crc kubenswrapper[4986]: E1203 13:36:36.945412 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:36:49 crc kubenswrapper[4986]: I1203 13:36:49.943082 4986 scope.go:117] "RemoveContainer" containerID="4069b648601fa33b27bda022b1a72851b773bff566b310c1ebd23beaecd8cde7" Dec 03 13:36:49 crc kubenswrapper[4986]: E1203 13:36:49.944200 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:37:00 crc kubenswrapper[4986]: I1203 13:37:00.951801 4986 scope.go:117] "RemoveContainer" containerID="4069b648601fa33b27bda022b1a72851b773bff566b310c1ebd23beaecd8cde7" Dec 03 13:37:00 crc kubenswrapper[4986]: E1203 13:37:00.952514 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:37:14 crc kubenswrapper[4986]: I1203 13:37:14.943841 4986 scope.go:117] "RemoveContainer" containerID="4069b648601fa33b27bda022b1a72851b773bff566b310c1ebd23beaecd8cde7" Dec 03 13:37:14 crc kubenswrapper[4986]: E1203 13:37:14.945562 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:37:25 crc kubenswrapper[4986]: I1203 13:37:25.943532 4986 scope.go:117] "RemoveContainer" containerID="4069b648601fa33b27bda022b1a72851b773bff566b310c1ebd23beaecd8cde7" Dec 03 13:37:25 crc kubenswrapper[4986]: E1203 13:37:25.944255 4986 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:37:39 crc kubenswrapper[4986]: I1203 13:37:39.943529 4986 scope.go:117] "RemoveContainer" containerID="4069b648601fa33b27bda022b1a72851b773bff566b310c1ebd23beaecd8cde7" Dec 03 13:37:39 crc kubenswrapper[4986]: E1203 13:37:39.944320 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:37:52 crc kubenswrapper[4986]: I1203 13:37:52.943660 4986 scope.go:117] "RemoveContainer" containerID="4069b648601fa33b27bda022b1a72851b773bff566b310c1ebd23beaecd8cde7" Dec 03 13:37:52 crc kubenswrapper[4986]: E1203 13:37:52.944482 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:38:06 crc kubenswrapper[4986]: I1203 13:38:06.943852 4986 scope.go:117] "RemoveContainer" containerID="4069b648601fa33b27bda022b1a72851b773bff566b310c1ebd23beaecd8cde7" Dec 03 13:38:06 crc kubenswrapper[4986]: E1203 13:38:06.945186 4986 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:38:19 crc kubenswrapper[4986]: I1203 13:38:19.943476 4986 scope.go:117] "RemoveContainer" containerID="4069b648601fa33b27bda022b1a72851b773bff566b310c1ebd23beaecd8cde7" Dec 03 13:38:19 crc kubenswrapper[4986]: E1203 13:38:19.945703 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:38:34 crc kubenswrapper[4986]: I1203 13:38:34.944191 4986 scope.go:117] "RemoveContainer" containerID="4069b648601fa33b27bda022b1a72851b773bff566b310c1ebd23beaecd8cde7" Dec 03 13:38:35 crc kubenswrapper[4986]: I1203 13:38:35.306295 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" event={"ID":"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c","Type":"ContainerStarted","Data":"0bddd0a15eacde8641e29df77fc8459eb902921d6b45739e6df93c4ee9eb0dda"} Dec 03 13:38:35 crc kubenswrapper[4986]: I1203 13:38:35.325655 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fdb9l" podStartSLOduration=197.721247365 podStartE2EDuration="3m18.325629503s" podCreationTimestamp="2025-12-03 13:35:17 +0000 UTC" firstStartedPulling="2025-12-03 13:35:18.361252665 +0000 UTC m=+2377.827683856" 
lastFinishedPulling="2025-12-03 13:35:18.965634793 +0000 UTC m=+2378.432065994" observedRunningTime="2025-12-03 13:35:19.411033967 +0000 UTC m=+2378.877465168" watchObservedRunningTime="2025-12-03 13:38:35.325629503 +0000 UTC m=+2574.792060734" Dec 03 13:39:57 crc kubenswrapper[4986]: I1203 13:39:57.077019 4986 generic.go:334] "Generic (PLEG): container finished" podID="38870d92-6fb1-40ac-8763-a8c8bfbbdd77" containerID="9312bacb66a1d63db07dbc3184c5a87ab8e8b7c50e6123b080eaf69553bfdda7" exitCode=0 Dec 03 13:39:57 crc kubenswrapper[4986]: I1203 13:39:57.077151 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fdb9l" event={"ID":"38870d92-6fb1-40ac-8763-a8c8bfbbdd77","Type":"ContainerDied","Data":"9312bacb66a1d63db07dbc3184c5a87ab8e8b7c50e6123b080eaf69553bfdda7"} Dec 03 13:39:58 crc kubenswrapper[4986]: I1203 13:39:58.498011 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fdb9l" Dec 03 13:39:58 crc kubenswrapper[4986]: I1203 13:39:58.601889 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nt5r\" (UniqueName: \"kubernetes.io/projected/38870d92-6fb1-40ac-8763-a8c8bfbbdd77-kube-api-access-2nt5r\") pod \"38870d92-6fb1-40ac-8763-a8c8bfbbdd77\" (UID: \"38870d92-6fb1-40ac-8763-a8c8bfbbdd77\") " Dec 03 13:39:58 crc kubenswrapper[4986]: I1203 13:39:58.601996 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/38870d92-6fb1-40ac-8763-a8c8bfbbdd77-libvirt-secret-0\") pod \"38870d92-6fb1-40ac-8763-a8c8bfbbdd77\" (UID: \"38870d92-6fb1-40ac-8763-a8c8bfbbdd77\") " Dec 03 13:39:58 crc kubenswrapper[4986]: I1203 13:39:58.602072 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/38870d92-6fb1-40ac-8763-a8c8bfbbdd77-ssh-key\") pod \"38870d92-6fb1-40ac-8763-a8c8bfbbdd77\" (UID: \"38870d92-6fb1-40ac-8763-a8c8bfbbdd77\") " Dec 03 13:39:58 crc kubenswrapper[4986]: I1203 13:39:58.602098 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38870d92-6fb1-40ac-8763-a8c8bfbbdd77-inventory\") pod \"38870d92-6fb1-40ac-8763-a8c8bfbbdd77\" (UID: \"38870d92-6fb1-40ac-8763-a8c8bfbbdd77\") " Dec 03 13:39:58 crc kubenswrapper[4986]: I1203 13:39:58.602123 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38870d92-6fb1-40ac-8763-a8c8bfbbdd77-libvirt-combined-ca-bundle\") pod \"38870d92-6fb1-40ac-8763-a8c8bfbbdd77\" (UID: \"38870d92-6fb1-40ac-8763-a8c8bfbbdd77\") " Dec 03 13:39:58 crc kubenswrapper[4986]: I1203 13:39:58.608496 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38870d92-6fb1-40ac-8763-a8c8bfbbdd77-kube-api-access-2nt5r" (OuterVolumeSpecName: "kube-api-access-2nt5r") pod "38870d92-6fb1-40ac-8763-a8c8bfbbdd77" (UID: "38870d92-6fb1-40ac-8763-a8c8bfbbdd77"). InnerVolumeSpecName "kube-api-access-2nt5r". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:39:58 crc kubenswrapper[4986]: I1203 13:39:58.610508 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38870d92-6fb1-40ac-8763-a8c8bfbbdd77-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "38870d92-6fb1-40ac-8763-a8c8bfbbdd77" (UID: "38870d92-6fb1-40ac-8763-a8c8bfbbdd77"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:39:58 crc kubenswrapper[4986]: I1203 13:39:58.632728 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38870d92-6fb1-40ac-8763-a8c8bfbbdd77-inventory" (OuterVolumeSpecName: "inventory") pod "38870d92-6fb1-40ac-8763-a8c8bfbbdd77" (UID: "38870d92-6fb1-40ac-8763-a8c8bfbbdd77"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:39:58 crc kubenswrapper[4986]: I1203 13:39:58.637499 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38870d92-6fb1-40ac-8763-a8c8bfbbdd77-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "38870d92-6fb1-40ac-8763-a8c8bfbbdd77" (UID: "38870d92-6fb1-40ac-8763-a8c8bfbbdd77"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:39:58 crc kubenswrapper[4986]: I1203 13:39:58.647509 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38870d92-6fb1-40ac-8763-a8c8bfbbdd77-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "38870d92-6fb1-40ac-8763-a8c8bfbbdd77" (UID: "38870d92-6fb1-40ac-8763-a8c8bfbbdd77"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:39:58 crc kubenswrapper[4986]: I1203 13:39:58.704232 4986 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/38870d92-6fb1-40ac-8763-a8c8bfbbdd77-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Dec 03 13:39:58 crc kubenswrapper[4986]: I1203 13:39:58.704264 4986 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/38870d92-6fb1-40ac-8763-a8c8bfbbdd77-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 13:39:58 crc kubenswrapper[4986]: I1203 13:39:58.704274 4986 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38870d92-6fb1-40ac-8763-a8c8bfbbdd77-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 13:39:58 crc kubenswrapper[4986]: I1203 13:39:58.704297 4986 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38870d92-6fb1-40ac-8763-a8c8bfbbdd77-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 13:39:58 crc kubenswrapper[4986]: I1203 13:39:58.704308 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nt5r\" (UniqueName: \"kubernetes.io/projected/38870d92-6fb1-40ac-8763-a8c8bfbbdd77-kube-api-access-2nt5r\") on node \"crc\" DevicePath \"\"" Dec 03 13:39:59 crc kubenswrapper[4986]: I1203 13:39:59.102823 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fdb9l" event={"ID":"38870d92-6fb1-40ac-8763-a8c8bfbbdd77","Type":"ContainerDied","Data":"82ab5d5776e56d08559cd6e487a8fb09ccc4d38070f9b252b8191ebb0b9e941d"} Dec 03 13:39:59 crc kubenswrapper[4986]: I1203 13:39:59.102903 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82ab5d5776e56d08559cd6e487a8fb09ccc4d38070f9b252b8191ebb0b9e941d" Dec 03 13:39:59 crc kubenswrapper[4986]: I1203 
13:39:59.103012 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fdb9l" Dec 03 13:39:59 crc kubenswrapper[4986]: I1203 13:39:59.197003 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-g6sdc"] Dec 03 13:39:59 crc kubenswrapper[4986]: E1203 13:39:59.197412 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38870d92-6fb1-40ac-8763-a8c8bfbbdd77" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 03 13:39:59 crc kubenswrapper[4986]: I1203 13:39:59.197428 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="38870d92-6fb1-40ac-8763-a8c8bfbbdd77" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 03 13:39:59 crc kubenswrapper[4986]: I1203 13:39:59.197590 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="38870d92-6fb1-40ac-8763-a8c8bfbbdd77" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 03 13:39:59 crc kubenswrapper[4986]: I1203 13:39:59.198173 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6sdc" Dec 03 13:39:59 crc kubenswrapper[4986]: I1203 13:39:59.201587 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 13:39:59 crc kubenswrapper[4986]: I1203 13:39:59.201680 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Dec 03 13:39:59 crc kubenswrapper[4986]: I1203 13:39:59.201670 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jr75v" Dec 03 13:39:59 crc kubenswrapper[4986]: I1203 13:39:59.203036 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 13:39:59 crc kubenswrapper[4986]: I1203 13:39:59.203131 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 13:39:59 crc kubenswrapper[4986]: I1203 13:39:59.203234 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Dec 03 13:39:59 crc kubenswrapper[4986]: I1203 13:39:59.206361 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Dec 03 13:39:59 crc kubenswrapper[4986]: I1203 13:39:59.214535 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-g6sdc"] Dec 03 13:39:59 crc kubenswrapper[4986]: I1203 13:39:59.316158 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2d43935a-d3d4-4e5e-b92a-dacb88b12f26-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-g6sdc\" (UID: \"2d43935a-d3d4-4e5e-b92a-dacb88b12f26\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6sdc" Dec 03 13:39:59 crc kubenswrapper[4986]: I1203 
13:39:59.316403 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2d43935a-d3d4-4e5e-b92a-dacb88b12f26-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-g6sdc\" (UID: \"2d43935a-d3d4-4e5e-b92a-dacb88b12f26\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6sdc" Dec 03 13:39:59 crc kubenswrapper[4986]: I1203 13:39:59.316453 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d43935a-d3d4-4e5e-b92a-dacb88b12f26-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-g6sdc\" (UID: \"2d43935a-d3d4-4e5e-b92a-dacb88b12f26\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6sdc" Dec 03 13:39:59 crc kubenswrapper[4986]: I1203 13:39:59.316532 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2d43935a-d3d4-4e5e-b92a-dacb88b12f26-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-g6sdc\" (UID: \"2d43935a-d3d4-4e5e-b92a-dacb88b12f26\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6sdc" Dec 03 13:39:59 crc kubenswrapper[4986]: I1203 13:39:59.316588 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgpv8\" (UniqueName: \"kubernetes.io/projected/2d43935a-d3d4-4e5e-b92a-dacb88b12f26-kube-api-access-kgpv8\") pod \"nova-edpm-deployment-openstack-edpm-ipam-g6sdc\" (UID: \"2d43935a-d3d4-4e5e-b92a-dacb88b12f26\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6sdc" Dec 03 13:39:59 crc kubenswrapper[4986]: I1203 13:39:59.316623 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: 
\"kubernetes.io/secret/2d43935a-d3d4-4e5e-b92a-dacb88b12f26-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-g6sdc\" (UID: \"2d43935a-d3d4-4e5e-b92a-dacb88b12f26\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6sdc" Dec 03 13:39:59 crc kubenswrapper[4986]: I1203 13:39:59.316677 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d43935a-d3d4-4e5e-b92a-dacb88b12f26-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-g6sdc\" (UID: \"2d43935a-d3d4-4e5e-b92a-dacb88b12f26\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6sdc" Dec 03 13:39:59 crc kubenswrapper[4986]: I1203 13:39:59.316725 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/2d43935a-d3d4-4e5e-b92a-dacb88b12f26-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-g6sdc\" (UID: \"2d43935a-d3d4-4e5e-b92a-dacb88b12f26\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6sdc" Dec 03 13:39:59 crc kubenswrapper[4986]: I1203 13:39:59.316961 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2d43935a-d3d4-4e5e-b92a-dacb88b12f26-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-g6sdc\" (UID: \"2d43935a-d3d4-4e5e-b92a-dacb88b12f26\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6sdc" Dec 03 13:39:59 crc kubenswrapper[4986]: I1203 13:39:59.418206 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2d43935a-d3d4-4e5e-b92a-dacb88b12f26-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-g6sdc\" (UID: \"2d43935a-d3d4-4e5e-b92a-dacb88b12f26\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6sdc" Dec 03 13:39:59 crc kubenswrapper[4986]: I1203 13:39:59.418583 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d43935a-d3d4-4e5e-b92a-dacb88b12f26-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-g6sdc\" (UID: \"2d43935a-d3d4-4e5e-b92a-dacb88b12f26\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6sdc" Dec 03 13:39:59 crc kubenswrapper[4986]: I1203 13:39:59.418638 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2d43935a-d3d4-4e5e-b92a-dacb88b12f26-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-g6sdc\" (UID: \"2d43935a-d3d4-4e5e-b92a-dacb88b12f26\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6sdc" Dec 03 13:39:59 crc kubenswrapper[4986]: I1203 13:39:59.418686 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgpv8\" (UniqueName: \"kubernetes.io/projected/2d43935a-d3d4-4e5e-b92a-dacb88b12f26-kube-api-access-kgpv8\") pod \"nova-edpm-deployment-openstack-edpm-ipam-g6sdc\" (UID: \"2d43935a-d3d4-4e5e-b92a-dacb88b12f26\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6sdc" Dec 03 13:39:59 crc kubenswrapper[4986]: I1203 13:39:59.418711 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2d43935a-d3d4-4e5e-b92a-dacb88b12f26-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-g6sdc\" (UID: \"2d43935a-d3d4-4e5e-b92a-dacb88b12f26\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6sdc" Dec 03 13:39:59 crc kubenswrapper[4986]: I1203 13:39:59.418756 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2d43935a-d3d4-4e5e-b92a-dacb88b12f26-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-g6sdc\" (UID: \"2d43935a-d3d4-4e5e-b92a-dacb88b12f26\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6sdc" Dec 03 13:39:59 crc kubenswrapper[4986]: I1203 13:39:59.418788 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/2d43935a-d3d4-4e5e-b92a-dacb88b12f26-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-g6sdc\" (UID: \"2d43935a-d3d4-4e5e-b92a-dacb88b12f26\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6sdc" Dec 03 13:39:59 crc kubenswrapper[4986]: I1203 13:39:59.418828 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2d43935a-d3d4-4e5e-b92a-dacb88b12f26-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-g6sdc\" (UID: \"2d43935a-d3d4-4e5e-b92a-dacb88b12f26\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6sdc" Dec 03 13:39:59 crc kubenswrapper[4986]: I1203 13:39:59.418890 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2d43935a-d3d4-4e5e-b92a-dacb88b12f26-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-g6sdc\" (UID: \"2d43935a-d3d4-4e5e-b92a-dacb88b12f26\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6sdc" Dec 03 13:39:59 crc kubenswrapper[4986]: I1203 13:39:59.419606 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/2d43935a-d3d4-4e5e-b92a-dacb88b12f26-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-g6sdc\" (UID: \"2d43935a-d3d4-4e5e-b92a-dacb88b12f26\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6sdc" Dec 03 13:39:59 crc 
kubenswrapper[4986]: I1203 13:39:59.423459 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2d43935a-d3d4-4e5e-b92a-dacb88b12f26-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-g6sdc\" (UID: \"2d43935a-d3d4-4e5e-b92a-dacb88b12f26\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6sdc" Dec 03 13:39:59 crc kubenswrapper[4986]: I1203 13:39:59.424184 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2d43935a-d3d4-4e5e-b92a-dacb88b12f26-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-g6sdc\" (UID: \"2d43935a-d3d4-4e5e-b92a-dacb88b12f26\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6sdc" Dec 03 13:39:59 crc kubenswrapper[4986]: I1203 13:39:59.426565 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2d43935a-d3d4-4e5e-b92a-dacb88b12f26-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-g6sdc\" (UID: \"2d43935a-d3d4-4e5e-b92a-dacb88b12f26\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6sdc" Dec 03 13:39:59 crc kubenswrapper[4986]: I1203 13:39:59.429213 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2d43935a-d3d4-4e5e-b92a-dacb88b12f26-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-g6sdc\" (UID: \"2d43935a-d3d4-4e5e-b92a-dacb88b12f26\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6sdc" Dec 03 13:39:59 crc kubenswrapper[4986]: I1203 13:39:59.430071 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2d43935a-d3d4-4e5e-b92a-dacb88b12f26-nova-cell1-compute-config-1\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-g6sdc\" (UID: \"2d43935a-d3d4-4e5e-b92a-dacb88b12f26\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6sdc" Dec 03 13:39:59 crc kubenswrapper[4986]: I1203 13:39:59.430476 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d43935a-d3d4-4e5e-b92a-dacb88b12f26-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-g6sdc\" (UID: \"2d43935a-d3d4-4e5e-b92a-dacb88b12f26\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6sdc" Dec 03 13:39:59 crc kubenswrapper[4986]: I1203 13:39:59.430781 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d43935a-d3d4-4e5e-b92a-dacb88b12f26-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-g6sdc\" (UID: \"2d43935a-d3d4-4e5e-b92a-dacb88b12f26\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6sdc" Dec 03 13:39:59 crc kubenswrapper[4986]: I1203 13:39:59.434774 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgpv8\" (UniqueName: \"kubernetes.io/projected/2d43935a-d3d4-4e5e-b92a-dacb88b12f26-kube-api-access-kgpv8\") pod \"nova-edpm-deployment-openstack-edpm-ipam-g6sdc\" (UID: \"2d43935a-d3d4-4e5e-b92a-dacb88b12f26\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6sdc" Dec 03 13:39:59 crc kubenswrapper[4986]: I1203 13:39:59.513911 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6sdc" Dec 03 13:40:00 crc kubenswrapper[4986]: I1203 13:40:00.045297 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-g6sdc"] Dec 03 13:40:00 crc kubenswrapper[4986]: I1203 13:40:00.055703 4986 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 13:40:00 crc kubenswrapper[4986]: I1203 13:40:00.113852 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6sdc" event={"ID":"2d43935a-d3d4-4e5e-b92a-dacb88b12f26","Type":"ContainerStarted","Data":"878521f290906887f01698c9d03243b6939380aa15d5d6afbb6477fbf3bd8dee"} Dec 03 13:40:02 crc kubenswrapper[4986]: I1203 13:40:02.136611 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6sdc" event={"ID":"2d43935a-d3d4-4e5e-b92a-dacb88b12f26","Type":"ContainerStarted","Data":"89814b4d4325e8d6a2d425887733f8f8053d3ac5346a3ef3e71a714a663069cb"} Dec 03 13:40:02 crc kubenswrapper[4986]: I1203 13:40:02.157767 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6sdc" podStartSLOduration=2.333184497 podStartE2EDuration="3.157748718s" podCreationTimestamp="2025-12-03 13:39:59 +0000 UTC" firstStartedPulling="2025-12-03 13:40:00.055439538 +0000 UTC m=+2659.521870739" lastFinishedPulling="2025-12-03 13:40:00.880003769 +0000 UTC m=+2660.346434960" observedRunningTime="2025-12-03 13:40:02.153887263 +0000 UTC m=+2661.620318454" watchObservedRunningTime="2025-12-03 13:40:02.157748718 +0000 UTC m=+2661.624179909" Dec 03 13:41:03 crc kubenswrapper[4986]: I1203 13:41:03.490560 4986 patch_prober.go:28] interesting pod/machine-config-daemon-xggpj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 13:41:03 crc kubenswrapper[4986]: I1203 13:41:03.491075 4986 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 13:41:33 crc kubenswrapper[4986]: I1203 13:41:33.490605 4986 patch_prober.go:28] interesting pod/machine-config-daemon-xggpj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 13:41:33 crc kubenswrapper[4986]: I1203 13:41:33.491117 4986 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 13:42:03 crc kubenswrapper[4986]: I1203 13:42:03.491067 4986 patch_prober.go:28] interesting pod/machine-config-daemon-xggpj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 13:42:03 crc kubenswrapper[4986]: I1203 13:42:03.491695 4986 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 
13:42:03 crc kubenswrapper[4986]: I1203 13:42:03.491747 4986 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" Dec 03 13:42:03 crc kubenswrapper[4986]: I1203 13:42:03.492578 4986 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0bddd0a15eacde8641e29df77fc8459eb902921d6b45739e6df93c4ee9eb0dda"} pod="openshift-machine-config-operator/machine-config-daemon-xggpj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 13:42:03 crc kubenswrapper[4986]: I1203 13:42:03.492650 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" containerID="cri-o://0bddd0a15eacde8641e29df77fc8459eb902921d6b45739e6df93c4ee9eb0dda" gracePeriod=600 Dec 03 13:42:04 crc kubenswrapper[4986]: I1203 13:42:04.311019 4986 generic.go:334] "Generic (PLEG): container finished" podID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerID="0bddd0a15eacde8641e29df77fc8459eb902921d6b45739e6df93c4ee9eb0dda" exitCode=0 Dec 03 13:42:04 crc kubenswrapper[4986]: I1203 13:42:04.311094 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" event={"ID":"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c","Type":"ContainerDied","Data":"0bddd0a15eacde8641e29df77fc8459eb902921d6b45739e6df93c4ee9eb0dda"} Dec 03 13:42:04 crc kubenswrapper[4986]: I1203 13:42:04.311635 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" event={"ID":"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c","Type":"ContainerStarted","Data":"bc3d6d2a861df402c588ed3c0b6e28555a58365f9870f007a996075068d0e641"} Dec 03 13:42:04 crc kubenswrapper[4986]: 
I1203 13:42:04.311655 4986 scope.go:117] "RemoveContainer" containerID="4069b648601fa33b27bda022b1a72851b773bff566b310c1ebd23beaecd8cde7" Dec 03 13:42:27 crc kubenswrapper[4986]: I1203 13:42:27.225646 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-stbr9"] Dec 03 13:42:27 crc kubenswrapper[4986]: I1203 13:42:27.229031 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-stbr9" Dec 03 13:42:27 crc kubenswrapper[4986]: I1203 13:42:27.265075 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-stbr9"] Dec 03 13:42:27 crc kubenswrapper[4986]: I1203 13:42:27.411873 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xlfq\" (UniqueName: \"kubernetes.io/projected/c657e6c7-4b41-4328-8ce8-6a0daa612919-kube-api-access-6xlfq\") pod \"redhat-marketplace-stbr9\" (UID: \"c657e6c7-4b41-4328-8ce8-6a0daa612919\") " pod="openshift-marketplace/redhat-marketplace-stbr9" Dec 03 13:42:27 crc kubenswrapper[4986]: I1203 13:42:27.411989 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c657e6c7-4b41-4328-8ce8-6a0daa612919-utilities\") pod \"redhat-marketplace-stbr9\" (UID: \"c657e6c7-4b41-4328-8ce8-6a0daa612919\") " pod="openshift-marketplace/redhat-marketplace-stbr9" Dec 03 13:42:27 crc kubenswrapper[4986]: I1203 13:42:27.412168 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c657e6c7-4b41-4328-8ce8-6a0daa612919-catalog-content\") pod \"redhat-marketplace-stbr9\" (UID: \"c657e6c7-4b41-4328-8ce8-6a0daa612919\") " pod="openshift-marketplace/redhat-marketplace-stbr9" Dec 03 13:42:27 crc kubenswrapper[4986]: I1203 13:42:27.514047 4986 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c657e6c7-4b41-4328-8ce8-6a0daa612919-catalog-content\") pod \"redhat-marketplace-stbr9\" (UID: \"c657e6c7-4b41-4328-8ce8-6a0daa612919\") " pod="openshift-marketplace/redhat-marketplace-stbr9" Dec 03 13:42:27 crc kubenswrapper[4986]: I1203 13:42:27.514132 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xlfq\" (UniqueName: \"kubernetes.io/projected/c657e6c7-4b41-4328-8ce8-6a0daa612919-kube-api-access-6xlfq\") pod \"redhat-marketplace-stbr9\" (UID: \"c657e6c7-4b41-4328-8ce8-6a0daa612919\") " pod="openshift-marketplace/redhat-marketplace-stbr9" Dec 03 13:42:27 crc kubenswrapper[4986]: I1203 13:42:27.514207 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c657e6c7-4b41-4328-8ce8-6a0daa612919-utilities\") pod \"redhat-marketplace-stbr9\" (UID: \"c657e6c7-4b41-4328-8ce8-6a0daa612919\") " pod="openshift-marketplace/redhat-marketplace-stbr9" Dec 03 13:42:27 crc kubenswrapper[4986]: I1203 13:42:27.514693 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c657e6c7-4b41-4328-8ce8-6a0daa612919-catalog-content\") pod \"redhat-marketplace-stbr9\" (UID: \"c657e6c7-4b41-4328-8ce8-6a0daa612919\") " pod="openshift-marketplace/redhat-marketplace-stbr9" Dec 03 13:42:27 crc kubenswrapper[4986]: I1203 13:42:27.514940 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c657e6c7-4b41-4328-8ce8-6a0daa612919-utilities\") pod \"redhat-marketplace-stbr9\" (UID: \"c657e6c7-4b41-4328-8ce8-6a0daa612919\") " pod="openshift-marketplace/redhat-marketplace-stbr9" Dec 03 13:42:27 crc kubenswrapper[4986]: I1203 13:42:27.546661 4986 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-6xlfq\" (UniqueName: \"kubernetes.io/projected/c657e6c7-4b41-4328-8ce8-6a0daa612919-kube-api-access-6xlfq\") pod \"redhat-marketplace-stbr9\" (UID: \"c657e6c7-4b41-4328-8ce8-6a0daa612919\") " pod="openshift-marketplace/redhat-marketplace-stbr9" Dec 03 13:42:27 crc kubenswrapper[4986]: I1203 13:42:27.563038 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-stbr9" Dec 03 13:42:28 crc kubenswrapper[4986]: I1203 13:42:28.066920 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-stbr9"] Dec 03 13:42:28 crc kubenswrapper[4986]: I1203 13:42:28.558038 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-stbr9" event={"ID":"c657e6c7-4b41-4328-8ce8-6a0daa612919","Type":"ContainerStarted","Data":"02a1e52144a8a2bfbd1401885fd017dec561d90091aca8b4c4524ff10843dd4e"} Dec 03 13:42:29 crc kubenswrapper[4986]: I1203 13:42:29.568988 4986 generic.go:334] "Generic (PLEG): container finished" podID="c657e6c7-4b41-4328-8ce8-6a0daa612919" containerID="d58a27d5c20c3ae5291ec7dd439163fb33bb3fccec2c423f5a3a960baeea6c08" exitCode=0 Dec 03 13:42:29 crc kubenswrapper[4986]: I1203 13:42:29.569039 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-stbr9" event={"ID":"c657e6c7-4b41-4328-8ce8-6a0daa612919","Type":"ContainerDied","Data":"d58a27d5c20c3ae5291ec7dd439163fb33bb3fccec2c423f5a3a960baeea6c08"} Dec 03 13:42:31 crc kubenswrapper[4986]: I1203 13:42:31.589636 4986 generic.go:334] "Generic (PLEG): container finished" podID="c657e6c7-4b41-4328-8ce8-6a0daa612919" containerID="7bcc92eee2beb700a5baa009ec4da4d675de6723568c519f24923ecec9106dbc" exitCode=0 Dec 03 13:42:31 crc kubenswrapper[4986]: I1203 13:42:31.589706 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-stbr9" 
event={"ID":"c657e6c7-4b41-4328-8ce8-6a0daa612919","Type":"ContainerDied","Data":"7bcc92eee2beb700a5baa009ec4da4d675de6723568c519f24923ecec9106dbc"} Dec 03 13:42:31 crc kubenswrapper[4986]: I1203 13:42:31.606953 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gsm5c"] Dec 03 13:42:31 crc kubenswrapper[4986]: I1203 13:42:31.612145 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gsm5c" Dec 03 13:42:31 crc kubenswrapper[4986]: I1203 13:42:31.620948 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gsm5c"] Dec 03 13:42:31 crc kubenswrapper[4986]: I1203 13:42:31.709419 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd7kk\" (UniqueName: \"kubernetes.io/projected/6dddcec0-cd26-489c-88d3-9e132f621145-kube-api-access-zd7kk\") pod \"community-operators-gsm5c\" (UID: \"6dddcec0-cd26-489c-88d3-9e132f621145\") " pod="openshift-marketplace/community-operators-gsm5c" Dec 03 13:42:31 crc kubenswrapper[4986]: I1203 13:42:31.709579 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dddcec0-cd26-489c-88d3-9e132f621145-utilities\") pod \"community-operators-gsm5c\" (UID: \"6dddcec0-cd26-489c-88d3-9e132f621145\") " pod="openshift-marketplace/community-operators-gsm5c" Dec 03 13:42:31 crc kubenswrapper[4986]: I1203 13:42:31.709840 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dddcec0-cd26-489c-88d3-9e132f621145-catalog-content\") pod \"community-operators-gsm5c\" (UID: \"6dddcec0-cd26-489c-88d3-9e132f621145\") " pod="openshift-marketplace/community-operators-gsm5c" Dec 03 13:42:31 crc kubenswrapper[4986]: I1203 13:42:31.811810 
4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dddcec0-cd26-489c-88d3-9e132f621145-utilities\") pod \"community-operators-gsm5c\" (UID: \"6dddcec0-cd26-489c-88d3-9e132f621145\") " pod="openshift-marketplace/community-operators-gsm5c" Dec 03 13:42:31 crc kubenswrapper[4986]: I1203 13:42:31.811892 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dddcec0-cd26-489c-88d3-9e132f621145-catalog-content\") pod \"community-operators-gsm5c\" (UID: \"6dddcec0-cd26-489c-88d3-9e132f621145\") " pod="openshift-marketplace/community-operators-gsm5c" Dec 03 13:42:31 crc kubenswrapper[4986]: I1203 13:42:31.811984 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd7kk\" (UniqueName: \"kubernetes.io/projected/6dddcec0-cd26-489c-88d3-9e132f621145-kube-api-access-zd7kk\") pod \"community-operators-gsm5c\" (UID: \"6dddcec0-cd26-489c-88d3-9e132f621145\") " pod="openshift-marketplace/community-operators-gsm5c" Dec 03 13:42:31 crc kubenswrapper[4986]: I1203 13:42:31.812417 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dddcec0-cd26-489c-88d3-9e132f621145-utilities\") pod \"community-operators-gsm5c\" (UID: \"6dddcec0-cd26-489c-88d3-9e132f621145\") " pod="openshift-marketplace/community-operators-gsm5c" Dec 03 13:42:31 crc kubenswrapper[4986]: I1203 13:42:31.812500 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dddcec0-cd26-489c-88d3-9e132f621145-catalog-content\") pod \"community-operators-gsm5c\" (UID: \"6dddcec0-cd26-489c-88d3-9e132f621145\") " pod="openshift-marketplace/community-operators-gsm5c" Dec 03 13:42:31 crc kubenswrapper[4986]: I1203 13:42:31.838622 4986 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zd7kk\" (UniqueName: \"kubernetes.io/projected/6dddcec0-cd26-489c-88d3-9e132f621145-kube-api-access-zd7kk\") pod \"community-operators-gsm5c\" (UID: \"6dddcec0-cd26-489c-88d3-9e132f621145\") " pod="openshift-marketplace/community-operators-gsm5c" Dec 03 13:42:31 crc kubenswrapper[4986]: I1203 13:42:31.943427 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gsm5c" Dec 03 13:42:32 crc kubenswrapper[4986]: I1203 13:42:32.499053 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gsm5c"] Dec 03 13:42:32 crc kubenswrapper[4986]: I1203 13:42:32.599355 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gsm5c" event={"ID":"6dddcec0-cd26-489c-88d3-9e132f621145","Type":"ContainerStarted","Data":"b294bf5ee82bf976d4bc7d463a5d8f57f2901e25baba53f9294ba298e1e0a138"} Dec 03 13:42:33 crc kubenswrapper[4986]: I1203 13:42:33.609594 4986 generic.go:334] "Generic (PLEG): container finished" podID="6dddcec0-cd26-489c-88d3-9e132f621145" containerID="b4a8ac70c8f14be563edbb5fa0995a79b29fd23e8ebc1c11e7ce710548086c6a" exitCode=0 Dec 03 13:42:33 crc kubenswrapper[4986]: I1203 13:42:33.609684 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gsm5c" event={"ID":"6dddcec0-cd26-489c-88d3-9e132f621145","Type":"ContainerDied","Data":"b4a8ac70c8f14be563edbb5fa0995a79b29fd23e8ebc1c11e7ce710548086c6a"} Dec 03 13:42:33 crc kubenswrapper[4986]: I1203 13:42:33.619407 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-stbr9" event={"ID":"c657e6c7-4b41-4328-8ce8-6a0daa612919","Type":"ContainerStarted","Data":"dabd59c17241b007aa3725b01e70dd128b956d71209a5e265b7e369f51bad6c7"} Dec 03 13:42:33 crc kubenswrapper[4986]: I1203 13:42:33.648264 4986 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-stbr9" podStartSLOduration=4.072013851 podStartE2EDuration="6.64824326s" podCreationTimestamp="2025-12-03 13:42:27 +0000 UTC" firstStartedPulling="2025-12-03 13:42:29.571241785 +0000 UTC m=+2809.037673106" lastFinishedPulling="2025-12-03 13:42:32.147471324 +0000 UTC m=+2811.613902515" observedRunningTime="2025-12-03 13:42:33.644643833 +0000 UTC m=+2813.111075024" watchObservedRunningTime="2025-12-03 13:42:33.64824326 +0000 UTC m=+2813.114674461" Dec 03 13:42:34 crc kubenswrapper[4986]: I1203 13:42:34.629647 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gsm5c" event={"ID":"6dddcec0-cd26-489c-88d3-9e132f621145","Type":"ContainerStarted","Data":"4a8bf8d89a4a538400616ffc16a2c1e10fa2de2cfbdd73b2ca0256f45eae4b65"} Dec 03 13:42:35 crc kubenswrapper[4986]: I1203 13:42:35.639134 4986 generic.go:334] "Generic (PLEG): container finished" podID="6dddcec0-cd26-489c-88d3-9e132f621145" containerID="4a8bf8d89a4a538400616ffc16a2c1e10fa2de2cfbdd73b2ca0256f45eae4b65" exitCode=0 Dec 03 13:42:35 crc kubenswrapper[4986]: I1203 13:42:35.639177 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gsm5c" event={"ID":"6dddcec0-cd26-489c-88d3-9e132f621145","Type":"ContainerDied","Data":"4a8bf8d89a4a538400616ffc16a2c1e10fa2de2cfbdd73b2ca0256f45eae4b65"} Dec 03 13:42:37 crc kubenswrapper[4986]: I1203 13:42:37.564916 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-stbr9" Dec 03 13:42:37 crc kubenswrapper[4986]: I1203 13:42:37.565441 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-stbr9" Dec 03 13:42:37 crc kubenswrapper[4986]: I1203 13:42:37.628542 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-stbr9" Dec 
03 13:42:37 crc kubenswrapper[4986]: I1203 13:42:37.700058 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-stbr9" Dec 03 13:42:38 crc kubenswrapper[4986]: I1203 13:42:38.664752 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gsm5c" event={"ID":"6dddcec0-cd26-489c-88d3-9e132f621145","Type":"ContainerStarted","Data":"c81b41faf3d92b46c7f4a8134e34440da9b73fa0e7f084b346e81b00dc32c9cb"} Dec 03 13:42:38 crc kubenswrapper[4986]: I1203 13:42:38.687822 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gsm5c" podStartSLOduration=3.221857448 podStartE2EDuration="7.687804052s" podCreationTimestamp="2025-12-03 13:42:31 +0000 UTC" firstStartedPulling="2025-12-03 13:42:33.611835904 +0000 UTC m=+2813.078267095" lastFinishedPulling="2025-12-03 13:42:38.077782508 +0000 UTC m=+2817.544213699" observedRunningTime="2025-12-03 13:42:38.682086426 +0000 UTC m=+2818.148517617" watchObservedRunningTime="2025-12-03 13:42:38.687804052 +0000 UTC m=+2818.154235243" Dec 03 13:42:38 crc kubenswrapper[4986]: I1203 13:42:38.795159 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-stbr9"] Dec 03 13:42:39 crc kubenswrapper[4986]: I1203 13:42:39.676840 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-stbr9" podUID="c657e6c7-4b41-4328-8ce8-6a0daa612919" containerName="registry-server" containerID="cri-o://dabd59c17241b007aa3725b01e70dd128b956d71209a5e265b7e369f51bad6c7" gracePeriod=2 Dec 03 13:42:40 crc kubenswrapper[4986]: I1203 13:42:40.689643 4986 generic.go:334] "Generic (PLEG): container finished" podID="c657e6c7-4b41-4328-8ce8-6a0daa612919" containerID="dabd59c17241b007aa3725b01e70dd128b956d71209a5e265b7e369f51bad6c7" exitCode=0 Dec 03 13:42:40 crc kubenswrapper[4986]: I1203 
13:42:40.689829 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-stbr9" event={"ID":"c657e6c7-4b41-4328-8ce8-6a0daa612919","Type":"ContainerDied","Data":"dabd59c17241b007aa3725b01e70dd128b956d71209a5e265b7e369f51bad6c7"} Dec 03 13:42:41 crc kubenswrapper[4986]: I1203 13:42:41.259894 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-stbr9" Dec 03 13:42:41 crc kubenswrapper[4986]: I1203 13:42:41.393590 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c657e6c7-4b41-4328-8ce8-6a0daa612919-utilities\") pod \"c657e6c7-4b41-4328-8ce8-6a0daa612919\" (UID: \"c657e6c7-4b41-4328-8ce8-6a0daa612919\") " Dec 03 13:42:41 crc kubenswrapper[4986]: I1203 13:42:41.393712 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c657e6c7-4b41-4328-8ce8-6a0daa612919-catalog-content\") pod \"c657e6c7-4b41-4328-8ce8-6a0daa612919\" (UID: \"c657e6c7-4b41-4328-8ce8-6a0daa612919\") " Dec 03 13:42:41 crc kubenswrapper[4986]: I1203 13:42:41.393937 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xlfq\" (UniqueName: \"kubernetes.io/projected/c657e6c7-4b41-4328-8ce8-6a0daa612919-kube-api-access-6xlfq\") pod \"c657e6c7-4b41-4328-8ce8-6a0daa612919\" (UID: \"c657e6c7-4b41-4328-8ce8-6a0daa612919\") " Dec 03 13:42:41 crc kubenswrapper[4986]: I1203 13:42:41.395514 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c657e6c7-4b41-4328-8ce8-6a0daa612919-utilities" (OuterVolumeSpecName: "utilities") pod "c657e6c7-4b41-4328-8ce8-6a0daa612919" (UID: "c657e6c7-4b41-4328-8ce8-6a0daa612919"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:42:41 crc kubenswrapper[4986]: I1203 13:42:41.406262 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c657e6c7-4b41-4328-8ce8-6a0daa612919-kube-api-access-6xlfq" (OuterVolumeSpecName: "kube-api-access-6xlfq") pod "c657e6c7-4b41-4328-8ce8-6a0daa612919" (UID: "c657e6c7-4b41-4328-8ce8-6a0daa612919"). InnerVolumeSpecName "kube-api-access-6xlfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:42:41 crc kubenswrapper[4986]: I1203 13:42:41.413849 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c657e6c7-4b41-4328-8ce8-6a0daa612919-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c657e6c7-4b41-4328-8ce8-6a0daa612919" (UID: "c657e6c7-4b41-4328-8ce8-6a0daa612919"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:42:41 crc kubenswrapper[4986]: I1203 13:42:41.496381 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xlfq\" (UniqueName: \"kubernetes.io/projected/c657e6c7-4b41-4328-8ce8-6a0daa612919-kube-api-access-6xlfq\") on node \"crc\" DevicePath \"\"" Dec 03 13:42:41 crc kubenswrapper[4986]: I1203 13:42:41.496419 4986 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c657e6c7-4b41-4328-8ce8-6a0daa612919-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 13:42:41 crc kubenswrapper[4986]: I1203 13:42:41.496429 4986 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c657e6c7-4b41-4328-8ce8-6a0daa612919-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 13:42:41 crc kubenswrapper[4986]: I1203 13:42:41.701631 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-stbr9" 
event={"ID":"c657e6c7-4b41-4328-8ce8-6a0daa612919","Type":"ContainerDied","Data":"02a1e52144a8a2bfbd1401885fd017dec561d90091aca8b4c4524ff10843dd4e"} Dec 03 13:42:41 crc kubenswrapper[4986]: I1203 13:42:41.701682 4986 scope.go:117] "RemoveContainer" containerID="dabd59c17241b007aa3725b01e70dd128b956d71209a5e265b7e369f51bad6c7" Dec 03 13:42:41 crc kubenswrapper[4986]: I1203 13:42:41.701734 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-stbr9" Dec 03 13:42:41 crc kubenswrapper[4986]: I1203 13:42:41.735158 4986 scope.go:117] "RemoveContainer" containerID="7bcc92eee2beb700a5baa009ec4da4d675de6723568c519f24923ecec9106dbc" Dec 03 13:42:41 crc kubenswrapper[4986]: I1203 13:42:41.746833 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-stbr9"] Dec 03 13:42:41 crc kubenswrapper[4986]: I1203 13:42:41.759550 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-stbr9"] Dec 03 13:42:41 crc kubenswrapper[4986]: I1203 13:42:41.767259 4986 scope.go:117] "RemoveContainer" containerID="d58a27d5c20c3ae5291ec7dd439163fb33bb3fccec2c423f5a3a960baeea6c08" Dec 03 13:42:41 crc kubenswrapper[4986]: I1203 13:42:41.943747 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gsm5c" Dec 03 13:42:41 crc kubenswrapper[4986]: I1203 13:42:41.944034 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gsm5c" Dec 03 13:42:41 crc kubenswrapper[4986]: I1203 13:42:41.995742 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gsm5c" Dec 03 13:42:42 crc kubenswrapper[4986]: I1203 13:42:42.961197 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c657e6c7-4b41-4328-8ce8-6a0daa612919" 
path="/var/lib/kubelet/pods/c657e6c7-4b41-4328-8ce8-6a0daa612919/volumes" Dec 03 13:42:43 crc kubenswrapper[4986]: I1203 13:42:43.766507 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gsm5c" Dec 03 13:42:44 crc kubenswrapper[4986]: I1203 13:42:44.198633 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gsm5c"] Dec 03 13:42:45 crc kubenswrapper[4986]: I1203 13:42:45.738126 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gsm5c" podUID="6dddcec0-cd26-489c-88d3-9e132f621145" containerName="registry-server" containerID="cri-o://c81b41faf3d92b46c7f4a8134e34440da9b73fa0e7f084b346e81b00dc32c9cb" gracePeriod=2 Dec 03 13:42:46 crc kubenswrapper[4986]: I1203 13:42:46.752568 4986 generic.go:334] "Generic (PLEG): container finished" podID="6dddcec0-cd26-489c-88d3-9e132f621145" containerID="c81b41faf3d92b46c7f4a8134e34440da9b73fa0e7f084b346e81b00dc32c9cb" exitCode=0 Dec 03 13:42:46 crc kubenswrapper[4986]: I1203 13:42:46.752642 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gsm5c" event={"ID":"6dddcec0-cd26-489c-88d3-9e132f621145","Type":"ContainerDied","Data":"c81b41faf3d92b46c7f4a8134e34440da9b73fa0e7f084b346e81b00dc32c9cb"} Dec 03 13:42:46 crc kubenswrapper[4986]: I1203 13:42:46.752814 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gsm5c" event={"ID":"6dddcec0-cd26-489c-88d3-9e132f621145","Type":"ContainerDied","Data":"b294bf5ee82bf976d4bc7d463a5d8f57f2901e25baba53f9294ba298e1e0a138"} Dec 03 13:42:46 crc kubenswrapper[4986]: I1203 13:42:46.752830 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b294bf5ee82bf976d4bc7d463a5d8f57f2901e25baba53f9294ba298e1e0a138" Dec 03 13:42:46 crc kubenswrapper[4986]: I1203 13:42:46.813978 4986 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gsm5c" Dec 03 13:42:46 crc kubenswrapper[4986]: I1203 13:42:46.853274 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zd7kk\" (UniqueName: \"kubernetes.io/projected/6dddcec0-cd26-489c-88d3-9e132f621145-kube-api-access-zd7kk\") pod \"6dddcec0-cd26-489c-88d3-9e132f621145\" (UID: \"6dddcec0-cd26-489c-88d3-9e132f621145\") " Dec 03 13:42:46 crc kubenswrapper[4986]: I1203 13:42:46.853453 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dddcec0-cd26-489c-88d3-9e132f621145-catalog-content\") pod \"6dddcec0-cd26-489c-88d3-9e132f621145\" (UID: \"6dddcec0-cd26-489c-88d3-9e132f621145\") " Dec 03 13:42:46 crc kubenswrapper[4986]: I1203 13:42:46.853514 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dddcec0-cd26-489c-88d3-9e132f621145-utilities\") pod \"6dddcec0-cd26-489c-88d3-9e132f621145\" (UID: \"6dddcec0-cd26-489c-88d3-9e132f621145\") " Dec 03 13:42:46 crc kubenswrapper[4986]: I1203 13:42:46.854740 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6dddcec0-cd26-489c-88d3-9e132f621145-utilities" (OuterVolumeSpecName: "utilities") pod "6dddcec0-cd26-489c-88d3-9e132f621145" (UID: "6dddcec0-cd26-489c-88d3-9e132f621145"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:42:46 crc kubenswrapper[4986]: I1203 13:42:46.866833 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dddcec0-cd26-489c-88d3-9e132f621145-kube-api-access-zd7kk" (OuterVolumeSpecName: "kube-api-access-zd7kk") pod "6dddcec0-cd26-489c-88d3-9e132f621145" (UID: "6dddcec0-cd26-489c-88d3-9e132f621145"). 
InnerVolumeSpecName "kube-api-access-zd7kk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:42:46 crc kubenswrapper[4986]: I1203 13:42:46.909660 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6dddcec0-cd26-489c-88d3-9e132f621145-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6dddcec0-cd26-489c-88d3-9e132f621145" (UID: "6dddcec0-cd26-489c-88d3-9e132f621145"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:42:46 crc kubenswrapper[4986]: I1203 13:42:46.956030 4986 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dddcec0-cd26-489c-88d3-9e132f621145-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 13:42:46 crc kubenswrapper[4986]: I1203 13:42:46.956066 4986 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dddcec0-cd26-489c-88d3-9e132f621145-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 13:42:46 crc kubenswrapper[4986]: I1203 13:42:46.956185 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zd7kk\" (UniqueName: \"kubernetes.io/projected/6dddcec0-cd26-489c-88d3-9e132f621145-kube-api-access-zd7kk\") on node \"crc\" DevicePath \"\"" Dec 03 13:42:47 crc kubenswrapper[4986]: I1203 13:42:47.764050 4986 generic.go:334] "Generic (PLEG): container finished" podID="2d43935a-d3d4-4e5e-b92a-dacb88b12f26" containerID="89814b4d4325e8d6a2d425887733f8f8053d3ac5346a3ef3e71a714a663069cb" exitCode=0 Dec 03 13:42:47 crc kubenswrapper[4986]: I1203 13:42:47.764111 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6sdc" event={"ID":"2d43935a-d3d4-4e5e-b92a-dacb88b12f26","Type":"ContainerDied","Data":"89814b4d4325e8d6a2d425887733f8f8053d3ac5346a3ef3e71a714a663069cb"} Dec 03 13:42:47 crc kubenswrapper[4986]: I1203 
13:42:47.765296 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gsm5c" Dec 03 13:42:47 crc kubenswrapper[4986]: I1203 13:42:47.818832 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gsm5c"] Dec 03 13:42:47 crc kubenswrapper[4986]: I1203 13:42:47.829872 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gsm5c"] Dec 03 13:42:48 crc kubenswrapper[4986]: I1203 13:42:48.956728 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6dddcec0-cd26-489c-88d3-9e132f621145" path="/var/lib/kubelet/pods/6dddcec0-cd26-489c-88d3-9e132f621145/volumes" Dec 03 13:42:49 crc kubenswrapper[4986]: I1203 13:42:49.227364 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6sdc" Dec 03 13:42:49 crc kubenswrapper[4986]: I1203 13:42:49.398105 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d43935a-d3d4-4e5e-b92a-dacb88b12f26-inventory\") pod \"2d43935a-d3d4-4e5e-b92a-dacb88b12f26\" (UID: \"2d43935a-d3d4-4e5e-b92a-dacb88b12f26\") " Dec 03 13:42:49 crc kubenswrapper[4986]: I1203 13:42:49.398332 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2d43935a-d3d4-4e5e-b92a-dacb88b12f26-nova-migration-ssh-key-1\") pod \"2d43935a-d3d4-4e5e-b92a-dacb88b12f26\" (UID: \"2d43935a-d3d4-4e5e-b92a-dacb88b12f26\") " Dec 03 13:42:49 crc kubenswrapper[4986]: I1203 13:42:49.398410 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d43935a-d3d4-4e5e-b92a-dacb88b12f26-nova-combined-ca-bundle\") pod \"2d43935a-d3d4-4e5e-b92a-dacb88b12f26\" (UID: 
\"2d43935a-d3d4-4e5e-b92a-dacb88b12f26\") " Dec 03 13:42:49 crc kubenswrapper[4986]: I1203 13:42:49.398437 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2d43935a-d3d4-4e5e-b92a-dacb88b12f26-nova-cell1-compute-config-1\") pod \"2d43935a-d3d4-4e5e-b92a-dacb88b12f26\" (UID: \"2d43935a-d3d4-4e5e-b92a-dacb88b12f26\") " Dec 03 13:42:49 crc kubenswrapper[4986]: I1203 13:42:49.398473 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2d43935a-d3d4-4e5e-b92a-dacb88b12f26-nova-cell1-compute-config-0\") pod \"2d43935a-d3d4-4e5e-b92a-dacb88b12f26\" (UID: \"2d43935a-d3d4-4e5e-b92a-dacb88b12f26\") " Dec 03 13:42:49 crc kubenswrapper[4986]: I1203 13:42:49.398501 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgpv8\" (UniqueName: \"kubernetes.io/projected/2d43935a-d3d4-4e5e-b92a-dacb88b12f26-kube-api-access-kgpv8\") pod \"2d43935a-d3d4-4e5e-b92a-dacb88b12f26\" (UID: \"2d43935a-d3d4-4e5e-b92a-dacb88b12f26\") " Dec 03 13:42:49 crc kubenswrapper[4986]: I1203 13:42:49.398573 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2d43935a-d3d4-4e5e-b92a-dacb88b12f26-nova-migration-ssh-key-0\") pod \"2d43935a-d3d4-4e5e-b92a-dacb88b12f26\" (UID: \"2d43935a-d3d4-4e5e-b92a-dacb88b12f26\") " Dec 03 13:42:49 crc kubenswrapper[4986]: I1203 13:42:49.398601 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/2d43935a-d3d4-4e5e-b92a-dacb88b12f26-nova-extra-config-0\") pod \"2d43935a-d3d4-4e5e-b92a-dacb88b12f26\" (UID: \"2d43935a-d3d4-4e5e-b92a-dacb88b12f26\") " Dec 03 13:42:49 crc kubenswrapper[4986]: I1203 13:42:49.398618 4986 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2d43935a-d3d4-4e5e-b92a-dacb88b12f26-ssh-key\") pod \"2d43935a-d3d4-4e5e-b92a-dacb88b12f26\" (UID: \"2d43935a-d3d4-4e5e-b92a-dacb88b12f26\") " Dec 03 13:42:49 crc kubenswrapper[4986]: I1203 13:42:49.415551 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d43935a-d3d4-4e5e-b92a-dacb88b12f26-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "2d43935a-d3d4-4e5e-b92a-dacb88b12f26" (UID: "2d43935a-d3d4-4e5e-b92a-dacb88b12f26"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:42:49 crc kubenswrapper[4986]: I1203 13:42:49.421104 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d43935a-d3d4-4e5e-b92a-dacb88b12f26-kube-api-access-kgpv8" (OuterVolumeSpecName: "kube-api-access-kgpv8") pod "2d43935a-d3d4-4e5e-b92a-dacb88b12f26" (UID: "2d43935a-d3d4-4e5e-b92a-dacb88b12f26"). InnerVolumeSpecName "kube-api-access-kgpv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:42:49 crc kubenswrapper[4986]: I1203 13:42:49.426866 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d43935a-d3d4-4e5e-b92a-dacb88b12f26-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "2d43935a-d3d4-4e5e-b92a-dacb88b12f26" (UID: "2d43935a-d3d4-4e5e-b92a-dacb88b12f26"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:42:49 crc kubenswrapper[4986]: I1203 13:42:49.428021 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d43935a-d3d4-4e5e-b92a-dacb88b12f26-inventory" (OuterVolumeSpecName: "inventory") pod "2d43935a-d3d4-4e5e-b92a-dacb88b12f26" (UID: "2d43935a-d3d4-4e5e-b92a-dacb88b12f26"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:42:49 crc kubenswrapper[4986]: I1203 13:42:49.428634 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d43935a-d3d4-4e5e-b92a-dacb88b12f26-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "2d43935a-d3d4-4e5e-b92a-dacb88b12f26" (UID: "2d43935a-d3d4-4e5e-b92a-dacb88b12f26"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:42:49 crc kubenswrapper[4986]: I1203 13:42:49.434189 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d43935a-d3d4-4e5e-b92a-dacb88b12f26-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "2d43935a-d3d4-4e5e-b92a-dacb88b12f26" (UID: "2d43935a-d3d4-4e5e-b92a-dacb88b12f26"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:42:49 crc kubenswrapper[4986]: I1203 13:42:49.437344 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d43935a-d3d4-4e5e-b92a-dacb88b12f26-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2d43935a-d3d4-4e5e-b92a-dacb88b12f26" (UID: "2d43935a-d3d4-4e5e-b92a-dacb88b12f26"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:42:49 crc kubenswrapper[4986]: I1203 13:42:49.438018 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d43935a-d3d4-4e5e-b92a-dacb88b12f26-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "2d43935a-d3d4-4e5e-b92a-dacb88b12f26" (UID: "2d43935a-d3d4-4e5e-b92a-dacb88b12f26"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:42:49 crc kubenswrapper[4986]: I1203 13:42:49.450396 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d43935a-d3d4-4e5e-b92a-dacb88b12f26-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "2d43935a-d3d4-4e5e-b92a-dacb88b12f26" (UID: "2d43935a-d3d4-4e5e-b92a-dacb88b12f26"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:42:49 crc kubenswrapper[4986]: I1203 13:42:49.501549 4986 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2d43935a-d3d4-4e5e-b92a-dacb88b12f26-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Dec 03 13:42:49 crc kubenswrapper[4986]: I1203 13:42:49.501596 4986 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/2d43935a-d3d4-4e5e-b92a-dacb88b12f26-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Dec 03 13:42:49 crc kubenswrapper[4986]: I1203 13:42:49.501613 4986 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2d43935a-d3d4-4e5e-b92a-dacb88b12f26-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 13:42:49 crc kubenswrapper[4986]: I1203 13:42:49.501627 4986 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d43935a-d3d4-4e5e-b92a-dacb88b12f26-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 13:42:49 crc kubenswrapper[4986]: I1203 13:42:49.501643 4986 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2d43935a-d3d4-4e5e-b92a-dacb88b12f26-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Dec 03 13:42:49 crc kubenswrapper[4986]: I1203 13:42:49.501658 4986 reconciler_common.go:293] "Volume detached for volume 
\"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d43935a-d3d4-4e5e-b92a-dacb88b12f26-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 13:42:49 crc kubenswrapper[4986]: I1203 13:42:49.501672 4986 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2d43935a-d3d4-4e5e-b92a-dacb88b12f26-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Dec 03 13:42:49 crc kubenswrapper[4986]: I1203 13:42:49.501688 4986 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2d43935a-d3d4-4e5e-b92a-dacb88b12f26-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Dec 03 13:42:49 crc kubenswrapper[4986]: I1203 13:42:49.501702 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgpv8\" (UniqueName: \"kubernetes.io/projected/2d43935a-d3d4-4e5e-b92a-dacb88b12f26-kube-api-access-kgpv8\") on node \"crc\" DevicePath \"\"" Dec 03 13:42:49 crc kubenswrapper[4986]: I1203 13:42:49.782435 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6sdc" event={"ID":"2d43935a-d3d4-4e5e-b92a-dacb88b12f26","Type":"ContainerDied","Data":"878521f290906887f01698c9d03243b6939380aa15d5d6afbb6477fbf3bd8dee"} Dec 03 13:42:49 crc kubenswrapper[4986]: I1203 13:42:49.782778 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="878521f290906887f01698c9d03243b6939380aa15d5d6afbb6477fbf3bd8dee" Dec 03 13:42:49 crc kubenswrapper[4986]: I1203 13:42:49.782831 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6sdc" Dec 03 13:42:49 crc kubenswrapper[4986]: I1203 13:42:49.885857 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2vnss"] Dec 03 13:42:49 crc kubenswrapper[4986]: E1203 13:42:49.886267 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dddcec0-cd26-489c-88d3-9e132f621145" containerName="extract-utilities" Dec 03 13:42:49 crc kubenswrapper[4986]: I1203 13:42:49.886303 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dddcec0-cd26-489c-88d3-9e132f621145" containerName="extract-utilities" Dec 03 13:42:49 crc kubenswrapper[4986]: E1203 13:42:49.886320 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c657e6c7-4b41-4328-8ce8-6a0daa612919" containerName="extract-content" Dec 03 13:42:49 crc kubenswrapper[4986]: I1203 13:42:49.886330 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="c657e6c7-4b41-4328-8ce8-6a0daa612919" containerName="extract-content" Dec 03 13:42:49 crc kubenswrapper[4986]: E1203 13:42:49.886347 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dddcec0-cd26-489c-88d3-9e132f621145" containerName="registry-server" Dec 03 13:42:49 crc kubenswrapper[4986]: I1203 13:42:49.886355 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dddcec0-cd26-489c-88d3-9e132f621145" containerName="registry-server" Dec 03 13:42:49 crc kubenswrapper[4986]: E1203 13:42:49.886372 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c657e6c7-4b41-4328-8ce8-6a0daa612919" containerName="registry-server" Dec 03 13:42:49 crc kubenswrapper[4986]: I1203 13:42:49.886379 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="c657e6c7-4b41-4328-8ce8-6a0daa612919" containerName="registry-server" Dec 03 13:42:49 crc kubenswrapper[4986]: E1203 13:42:49.886395 4986 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="6dddcec0-cd26-489c-88d3-9e132f621145" containerName="extract-content" Dec 03 13:42:49 crc kubenswrapper[4986]: I1203 13:42:49.886403 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dddcec0-cd26-489c-88d3-9e132f621145" containerName="extract-content" Dec 03 13:42:49 crc kubenswrapper[4986]: E1203 13:42:49.886420 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d43935a-d3d4-4e5e-b92a-dacb88b12f26" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 03 13:42:49 crc kubenswrapper[4986]: I1203 13:42:49.886429 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d43935a-d3d4-4e5e-b92a-dacb88b12f26" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 03 13:42:49 crc kubenswrapper[4986]: E1203 13:42:49.886445 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c657e6c7-4b41-4328-8ce8-6a0daa612919" containerName="extract-utilities" Dec 03 13:42:49 crc kubenswrapper[4986]: I1203 13:42:49.886453 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="c657e6c7-4b41-4328-8ce8-6a0daa612919" containerName="extract-utilities" Dec 03 13:42:49 crc kubenswrapper[4986]: I1203 13:42:49.886672 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d43935a-d3d4-4e5e-b92a-dacb88b12f26" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 03 13:42:49 crc kubenswrapper[4986]: I1203 13:42:49.886692 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dddcec0-cd26-489c-88d3-9e132f621145" containerName="registry-server" Dec 03 13:42:49 crc kubenswrapper[4986]: I1203 13:42:49.886710 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="c657e6c7-4b41-4328-8ce8-6a0daa612919" containerName="registry-server" Dec 03 13:42:49 crc kubenswrapper[4986]: I1203 13:42:49.887533 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2vnss" Dec 03 13:42:49 crc kubenswrapper[4986]: I1203 13:42:49.889784 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 13:42:49 crc kubenswrapper[4986]: I1203 13:42:49.890016 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jr75v" Dec 03 13:42:49 crc kubenswrapper[4986]: I1203 13:42:49.890342 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Dec 03 13:42:49 crc kubenswrapper[4986]: I1203 13:42:49.890749 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 13:42:49 crc kubenswrapper[4986]: I1203 13:42:49.891066 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 13:42:49 crc kubenswrapper[4986]: I1203 13:42:49.902735 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2vnss"] Dec 03 13:42:50 crc kubenswrapper[4986]: I1203 13:42:50.021298 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/963319ab-2780-4d81-bf46-9b6dee690eeb-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2vnss\" (UID: \"963319ab-2780-4d81-bf46-9b6dee690eeb\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2vnss" Dec 03 13:42:50 crc kubenswrapper[4986]: I1203 13:42:50.021373 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/963319ab-2780-4d81-bf46-9b6dee690eeb-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2vnss\" (UID: 
\"963319ab-2780-4d81-bf46-9b6dee690eeb\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2vnss" Dec 03 13:42:50 crc kubenswrapper[4986]: I1203 13:42:50.021483 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/963319ab-2780-4d81-bf46-9b6dee690eeb-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2vnss\" (UID: \"963319ab-2780-4d81-bf46-9b6dee690eeb\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2vnss" Dec 03 13:42:50 crc kubenswrapper[4986]: I1203 13:42:50.021700 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/963319ab-2780-4d81-bf46-9b6dee690eeb-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2vnss\" (UID: \"963319ab-2780-4d81-bf46-9b6dee690eeb\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2vnss" Dec 03 13:42:50 crc kubenswrapper[4986]: I1203 13:42:50.021731 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/963319ab-2780-4d81-bf46-9b6dee690eeb-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2vnss\" (UID: \"963319ab-2780-4d81-bf46-9b6dee690eeb\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2vnss" Dec 03 13:42:50 crc kubenswrapper[4986]: I1203 13:42:50.021828 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqjb5\" (UniqueName: \"kubernetes.io/projected/963319ab-2780-4d81-bf46-9b6dee690eeb-kube-api-access-hqjb5\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2vnss\" (UID: \"963319ab-2780-4d81-bf46-9b6dee690eeb\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2vnss" Dec 03 13:42:50 crc kubenswrapper[4986]: I1203 13:42:50.021890 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/963319ab-2780-4d81-bf46-9b6dee690eeb-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2vnss\" (UID: \"963319ab-2780-4d81-bf46-9b6dee690eeb\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2vnss" Dec 03 13:42:50 crc kubenswrapper[4986]: I1203 13:42:50.124072 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/963319ab-2780-4d81-bf46-9b6dee690eeb-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2vnss\" (UID: \"963319ab-2780-4d81-bf46-9b6dee690eeb\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2vnss" Dec 03 13:42:50 crc kubenswrapper[4986]: I1203 13:42:50.124127 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/963319ab-2780-4d81-bf46-9b6dee690eeb-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2vnss\" (UID: \"963319ab-2780-4d81-bf46-9b6dee690eeb\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2vnss" Dec 03 13:42:50 crc kubenswrapper[4986]: I1203 13:42:50.124189 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/963319ab-2780-4d81-bf46-9b6dee690eeb-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2vnss\" (UID: \"963319ab-2780-4d81-bf46-9b6dee690eeb\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2vnss" Dec 03 13:42:50 crc kubenswrapper[4986]: I1203 13:42:50.124326 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/963319ab-2780-4d81-bf46-9b6dee690eeb-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2vnss\" (UID: \"963319ab-2780-4d81-bf46-9b6dee690eeb\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2vnss" Dec 03 13:42:50 crc kubenswrapper[4986]: I1203 13:42:50.124364 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/963319ab-2780-4d81-bf46-9b6dee690eeb-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2vnss\" (UID: \"963319ab-2780-4d81-bf46-9b6dee690eeb\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2vnss" Dec 03 13:42:50 crc kubenswrapper[4986]: I1203 13:42:50.124413 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqjb5\" (UniqueName: \"kubernetes.io/projected/963319ab-2780-4d81-bf46-9b6dee690eeb-kube-api-access-hqjb5\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2vnss\" (UID: \"963319ab-2780-4d81-bf46-9b6dee690eeb\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2vnss" Dec 03 13:42:50 crc kubenswrapper[4986]: I1203 13:42:50.124455 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/963319ab-2780-4d81-bf46-9b6dee690eeb-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2vnss\" (UID: \"963319ab-2780-4d81-bf46-9b6dee690eeb\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2vnss" Dec 03 13:42:50 crc kubenswrapper[4986]: I1203 13:42:50.130352 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/963319ab-2780-4d81-bf46-9b6dee690eeb-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2vnss\" (UID: \"963319ab-2780-4d81-bf46-9b6dee690eeb\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2vnss" Dec 03 13:42:50 crc kubenswrapper[4986]: I1203 13:42:50.130484 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/963319ab-2780-4d81-bf46-9b6dee690eeb-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2vnss\" (UID: \"963319ab-2780-4d81-bf46-9b6dee690eeb\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2vnss" Dec 03 13:42:50 crc kubenswrapper[4986]: I1203 13:42:50.130973 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/963319ab-2780-4d81-bf46-9b6dee690eeb-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2vnss\" (UID: \"963319ab-2780-4d81-bf46-9b6dee690eeb\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2vnss" Dec 03 13:42:50 crc kubenswrapper[4986]: I1203 13:42:50.132637 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/963319ab-2780-4d81-bf46-9b6dee690eeb-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2vnss\" (UID: \"963319ab-2780-4d81-bf46-9b6dee690eeb\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2vnss" Dec 03 13:42:50 crc kubenswrapper[4986]: I1203 13:42:50.141314 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/963319ab-2780-4d81-bf46-9b6dee690eeb-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2vnss\" (UID: \"963319ab-2780-4d81-bf46-9b6dee690eeb\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2vnss" Dec 03 13:42:50 crc kubenswrapper[4986]: I1203 13:42:50.142097 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/963319ab-2780-4d81-bf46-9b6dee690eeb-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2vnss\" (UID: \"963319ab-2780-4d81-bf46-9b6dee690eeb\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2vnss" Dec 03 13:42:50 crc kubenswrapper[4986]: I1203 13:42:50.144318 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqjb5\" (UniqueName: \"kubernetes.io/projected/963319ab-2780-4d81-bf46-9b6dee690eeb-kube-api-access-hqjb5\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2vnss\" (UID: \"963319ab-2780-4d81-bf46-9b6dee690eeb\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2vnss" Dec 03 13:42:50 crc kubenswrapper[4986]: I1203 13:42:50.204648 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2vnss" Dec 03 13:42:50 crc kubenswrapper[4986]: I1203 13:42:50.717795 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2vnss"] Dec 03 13:42:50 crc kubenswrapper[4986]: I1203 13:42:50.794646 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2vnss" event={"ID":"963319ab-2780-4d81-bf46-9b6dee690eeb","Type":"ContainerStarted","Data":"a753b53020a44d775c2bcc9e439e965dc7ecea660a7b51a41c097278dbcb4baf"} Dec 03 13:42:51 crc kubenswrapper[4986]: I1203 13:42:51.806443 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2vnss" event={"ID":"963319ab-2780-4d81-bf46-9b6dee690eeb","Type":"ContainerStarted","Data":"d8f855c7210ddb0ba026f58d6fd4f3d983b14a8d42838c52c4fa5f1b838a5c90"} Dec 03 13:42:51 crc kubenswrapper[4986]: I1203 13:42:51.843310 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2vnss" podStartSLOduration=2.436289233 podStartE2EDuration="2.8432692s" podCreationTimestamp="2025-12-03 13:42:49 +0000 UTC" firstStartedPulling="2025-12-03 13:42:50.723995352 +0000 UTC m=+2830.190426543" lastFinishedPulling="2025-12-03 13:42:51.130975319 +0000 UTC m=+2830.597406510" observedRunningTime="2025-12-03 13:42:51.832050807 +0000 UTC m=+2831.298482018" watchObservedRunningTime="2025-12-03 13:42:51.8432692 +0000 UTC m=+2831.309700391" Dec 03 13:44:03 crc kubenswrapper[4986]: I1203 13:44:03.491469 4986 patch_prober.go:28] interesting pod/machine-config-daemon-xggpj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 13:44:03 crc kubenswrapper[4986]: I1203 13:44:03.493238 4986 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 13:44:33 crc kubenswrapper[4986]: I1203 13:44:33.490952 4986 patch_prober.go:28] interesting pod/machine-config-daemon-xggpj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 13:44:33 crc kubenswrapper[4986]: I1203 13:44:33.491588 4986 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Dec 03 13:44:43 crc kubenswrapper[4986]: I1203 13:44:43.646356 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9zbmg"] Dec 03 13:44:43 crc kubenswrapper[4986]: I1203 13:44:43.649533 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9zbmg" Dec 03 13:44:43 crc kubenswrapper[4986]: I1203 13:44:43.667419 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9zbmg"] Dec 03 13:44:43 crc kubenswrapper[4986]: I1203 13:44:43.695016 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g87p6\" (UniqueName: \"kubernetes.io/projected/7e1c1919-fbf2-483b-9817-22345a644dd3-kube-api-access-g87p6\") pod \"redhat-operators-9zbmg\" (UID: \"7e1c1919-fbf2-483b-9817-22345a644dd3\") " pod="openshift-marketplace/redhat-operators-9zbmg" Dec 03 13:44:43 crc kubenswrapper[4986]: I1203 13:44:43.695249 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e1c1919-fbf2-483b-9817-22345a644dd3-utilities\") pod \"redhat-operators-9zbmg\" (UID: \"7e1c1919-fbf2-483b-9817-22345a644dd3\") " pod="openshift-marketplace/redhat-operators-9zbmg" Dec 03 13:44:43 crc kubenswrapper[4986]: I1203 13:44:43.695381 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e1c1919-fbf2-483b-9817-22345a644dd3-catalog-content\") pod \"redhat-operators-9zbmg\" (UID: \"7e1c1919-fbf2-483b-9817-22345a644dd3\") " pod="openshift-marketplace/redhat-operators-9zbmg" Dec 03 13:44:43 crc kubenswrapper[4986]: I1203 13:44:43.796582 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g87p6\" (UniqueName: 
\"kubernetes.io/projected/7e1c1919-fbf2-483b-9817-22345a644dd3-kube-api-access-g87p6\") pod \"redhat-operators-9zbmg\" (UID: \"7e1c1919-fbf2-483b-9817-22345a644dd3\") " pod="openshift-marketplace/redhat-operators-9zbmg" Dec 03 13:44:43 crc kubenswrapper[4986]: I1203 13:44:43.796684 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e1c1919-fbf2-483b-9817-22345a644dd3-utilities\") pod \"redhat-operators-9zbmg\" (UID: \"7e1c1919-fbf2-483b-9817-22345a644dd3\") " pod="openshift-marketplace/redhat-operators-9zbmg" Dec 03 13:44:43 crc kubenswrapper[4986]: I1203 13:44:43.796720 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e1c1919-fbf2-483b-9817-22345a644dd3-catalog-content\") pod \"redhat-operators-9zbmg\" (UID: \"7e1c1919-fbf2-483b-9817-22345a644dd3\") " pod="openshift-marketplace/redhat-operators-9zbmg" Dec 03 13:44:43 crc kubenswrapper[4986]: I1203 13:44:43.797174 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e1c1919-fbf2-483b-9817-22345a644dd3-catalog-content\") pod \"redhat-operators-9zbmg\" (UID: \"7e1c1919-fbf2-483b-9817-22345a644dd3\") " pod="openshift-marketplace/redhat-operators-9zbmg" Dec 03 13:44:43 crc kubenswrapper[4986]: I1203 13:44:43.797400 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e1c1919-fbf2-483b-9817-22345a644dd3-utilities\") pod \"redhat-operators-9zbmg\" (UID: \"7e1c1919-fbf2-483b-9817-22345a644dd3\") " pod="openshift-marketplace/redhat-operators-9zbmg" Dec 03 13:44:43 crc kubenswrapper[4986]: I1203 13:44:43.822457 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g87p6\" (UniqueName: 
\"kubernetes.io/projected/7e1c1919-fbf2-483b-9817-22345a644dd3-kube-api-access-g87p6\") pod \"redhat-operators-9zbmg\" (UID: \"7e1c1919-fbf2-483b-9817-22345a644dd3\") " pod="openshift-marketplace/redhat-operators-9zbmg" Dec 03 13:44:43 crc kubenswrapper[4986]: I1203 13:44:43.969711 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9zbmg" Dec 03 13:44:44 crc kubenswrapper[4986]: I1203 13:44:44.466677 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9zbmg"] Dec 03 13:44:44 crc kubenswrapper[4986]: I1203 13:44:44.773048 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9zbmg" event={"ID":"7e1c1919-fbf2-483b-9817-22345a644dd3","Type":"ContainerStarted","Data":"9e59599a8fb4b98010b91d6965fde184bd16c29b12f4d8583ec264b16bfaea37"} Dec 03 13:44:44 crc kubenswrapper[4986]: I1203 13:44:44.773643 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9zbmg" event={"ID":"7e1c1919-fbf2-483b-9817-22345a644dd3","Type":"ContainerStarted","Data":"0918689d7fd0cbf0a7a1649061012dfc3aea9fc0b704f7daed845a155732aa27"} Dec 03 13:44:45 crc kubenswrapper[4986]: I1203 13:44:45.781967 4986 generic.go:334] "Generic (PLEG): container finished" podID="7e1c1919-fbf2-483b-9817-22345a644dd3" containerID="9e59599a8fb4b98010b91d6965fde184bd16c29b12f4d8583ec264b16bfaea37" exitCode=0 Dec 03 13:44:45 crc kubenswrapper[4986]: I1203 13:44:45.782312 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9zbmg" event={"ID":"7e1c1919-fbf2-483b-9817-22345a644dd3","Type":"ContainerDied","Data":"9e59599a8fb4b98010b91d6965fde184bd16c29b12f4d8583ec264b16bfaea37"} Dec 03 13:44:47 crc kubenswrapper[4986]: I1203 13:44:47.804185 4986 generic.go:334] "Generic (PLEG): container finished" podID="7e1c1919-fbf2-483b-9817-22345a644dd3" 
containerID="2cad0ec6daadb6f715cc194fe701d607d79fb433771e6bc732f611f92824da8e" exitCode=0 Dec 03 13:44:47 crc kubenswrapper[4986]: I1203 13:44:47.804300 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9zbmg" event={"ID":"7e1c1919-fbf2-483b-9817-22345a644dd3","Type":"ContainerDied","Data":"2cad0ec6daadb6f715cc194fe701d607d79fb433771e6bc732f611f92824da8e"} Dec 03 13:44:49 crc kubenswrapper[4986]: I1203 13:44:49.826327 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9zbmg" event={"ID":"7e1c1919-fbf2-483b-9817-22345a644dd3","Type":"ContainerStarted","Data":"0f91c70888e6f7753519fdf4afea498148a49272cc08690661bcd601ac3a7d3c"} Dec 03 13:44:49 crc kubenswrapper[4986]: I1203 13:44:49.846598 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9zbmg" podStartSLOduration=3.8932021629999998 podStartE2EDuration="6.846578732s" podCreationTimestamp="2025-12-03 13:44:43 +0000 UTC" firstStartedPulling="2025-12-03 13:44:45.784039608 +0000 UTC m=+2945.250470799" lastFinishedPulling="2025-12-03 13:44:48.737416177 +0000 UTC m=+2948.203847368" observedRunningTime="2025-12-03 13:44:49.839908361 +0000 UTC m=+2949.306339562" watchObservedRunningTime="2025-12-03 13:44:49.846578732 +0000 UTC m=+2949.313009913" Dec 03 13:44:53 crc kubenswrapper[4986]: I1203 13:44:53.970081 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9zbmg" Dec 03 13:44:53 crc kubenswrapper[4986]: I1203 13:44:53.971156 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9zbmg" Dec 03 13:44:54 crc kubenswrapper[4986]: I1203 13:44:54.034596 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9zbmg" Dec 03 13:44:54 crc kubenswrapper[4986]: I1203 13:44:54.937683 4986 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9zbmg" Dec 03 13:44:55 crc kubenswrapper[4986]: I1203 13:44:55.008700 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9zbmg"] Dec 03 13:44:56 crc kubenswrapper[4986]: I1203 13:44:56.890082 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9zbmg" podUID="7e1c1919-fbf2-483b-9817-22345a644dd3" containerName="registry-server" containerID="cri-o://0f91c70888e6f7753519fdf4afea498148a49272cc08690661bcd601ac3a7d3c" gracePeriod=2 Dec 03 13:44:57 crc kubenswrapper[4986]: I1203 13:44:57.364986 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9zbmg" Dec 03 13:44:57 crc kubenswrapper[4986]: I1203 13:44:57.478137 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e1c1919-fbf2-483b-9817-22345a644dd3-utilities\") pod \"7e1c1919-fbf2-483b-9817-22345a644dd3\" (UID: \"7e1c1919-fbf2-483b-9817-22345a644dd3\") " Dec 03 13:44:57 crc kubenswrapper[4986]: I1203 13:44:57.478369 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g87p6\" (UniqueName: \"kubernetes.io/projected/7e1c1919-fbf2-483b-9817-22345a644dd3-kube-api-access-g87p6\") pod \"7e1c1919-fbf2-483b-9817-22345a644dd3\" (UID: \"7e1c1919-fbf2-483b-9817-22345a644dd3\") " Dec 03 13:44:57 crc kubenswrapper[4986]: I1203 13:44:57.478495 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e1c1919-fbf2-483b-9817-22345a644dd3-catalog-content\") pod \"7e1c1919-fbf2-483b-9817-22345a644dd3\" (UID: \"7e1c1919-fbf2-483b-9817-22345a644dd3\") " Dec 03 13:44:57 crc kubenswrapper[4986]: I1203 13:44:57.478913 4986 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e1c1919-fbf2-483b-9817-22345a644dd3-utilities" (OuterVolumeSpecName: "utilities") pod "7e1c1919-fbf2-483b-9817-22345a644dd3" (UID: "7e1c1919-fbf2-483b-9817-22345a644dd3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:44:57 crc kubenswrapper[4986]: I1203 13:44:57.484567 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e1c1919-fbf2-483b-9817-22345a644dd3-kube-api-access-g87p6" (OuterVolumeSpecName: "kube-api-access-g87p6") pod "7e1c1919-fbf2-483b-9817-22345a644dd3" (UID: "7e1c1919-fbf2-483b-9817-22345a644dd3"). InnerVolumeSpecName "kube-api-access-g87p6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:44:57 crc kubenswrapper[4986]: I1203 13:44:57.582262 4986 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e1c1919-fbf2-483b-9817-22345a644dd3-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 13:44:57 crc kubenswrapper[4986]: I1203 13:44:57.582345 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g87p6\" (UniqueName: \"kubernetes.io/projected/7e1c1919-fbf2-483b-9817-22345a644dd3-kube-api-access-g87p6\") on node \"crc\" DevicePath \"\"" Dec 03 13:44:57 crc kubenswrapper[4986]: I1203 13:44:57.900298 4986 generic.go:334] "Generic (PLEG): container finished" podID="7e1c1919-fbf2-483b-9817-22345a644dd3" containerID="0f91c70888e6f7753519fdf4afea498148a49272cc08690661bcd601ac3a7d3c" exitCode=0 Dec 03 13:44:57 crc kubenswrapper[4986]: I1203 13:44:57.900338 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9zbmg" event={"ID":"7e1c1919-fbf2-483b-9817-22345a644dd3","Type":"ContainerDied","Data":"0f91c70888e6f7753519fdf4afea498148a49272cc08690661bcd601ac3a7d3c"} Dec 03 13:44:57 crc kubenswrapper[4986]: I1203 13:44:57.900365 
4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9zbmg" event={"ID":"7e1c1919-fbf2-483b-9817-22345a644dd3","Type":"ContainerDied","Data":"0918689d7fd0cbf0a7a1649061012dfc3aea9fc0b704f7daed845a155732aa27"} Dec 03 13:44:57 crc kubenswrapper[4986]: I1203 13:44:57.900455 4986 scope.go:117] "RemoveContainer" containerID="0f91c70888e6f7753519fdf4afea498148a49272cc08690661bcd601ac3a7d3c" Dec 03 13:44:57 crc kubenswrapper[4986]: I1203 13:44:57.901356 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9zbmg" Dec 03 13:44:57 crc kubenswrapper[4986]: I1203 13:44:57.929920 4986 scope.go:117] "RemoveContainer" containerID="2cad0ec6daadb6f715cc194fe701d607d79fb433771e6bc732f611f92824da8e" Dec 03 13:44:57 crc kubenswrapper[4986]: I1203 13:44:57.958772 4986 scope.go:117] "RemoveContainer" containerID="9e59599a8fb4b98010b91d6965fde184bd16c29b12f4d8583ec264b16bfaea37" Dec 03 13:44:57 crc kubenswrapper[4986]: I1203 13:44:57.999482 4986 scope.go:117] "RemoveContainer" containerID="0f91c70888e6f7753519fdf4afea498148a49272cc08690661bcd601ac3a7d3c" Dec 03 13:44:58 crc kubenswrapper[4986]: E1203 13:44:58.001851 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f91c70888e6f7753519fdf4afea498148a49272cc08690661bcd601ac3a7d3c\": container with ID starting with 0f91c70888e6f7753519fdf4afea498148a49272cc08690661bcd601ac3a7d3c not found: ID does not exist" containerID="0f91c70888e6f7753519fdf4afea498148a49272cc08690661bcd601ac3a7d3c" Dec 03 13:44:58 crc kubenswrapper[4986]: I1203 13:44:58.001893 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f91c70888e6f7753519fdf4afea498148a49272cc08690661bcd601ac3a7d3c"} err="failed to get container status \"0f91c70888e6f7753519fdf4afea498148a49272cc08690661bcd601ac3a7d3c\": rpc error: code = NotFound 
desc = could not find container \"0f91c70888e6f7753519fdf4afea498148a49272cc08690661bcd601ac3a7d3c\": container with ID starting with 0f91c70888e6f7753519fdf4afea498148a49272cc08690661bcd601ac3a7d3c not found: ID does not exist" Dec 03 13:44:58 crc kubenswrapper[4986]: I1203 13:44:58.001914 4986 scope.go:117] "RemoveContainer" containerID="2cad0ec6daadb6f715cc194fe701d607d79fb433771e6bc732f611f92824da8e" Dec 03 13:44:58 crc kubenswrapper[4986]: E1203 13:44:58.002290 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cad0ec6daadb6f715cc194fe701d607d79fb433771e6bc732f611f92824da8e\": container with ID starting with 2cad0ec6daadb6f715cc194fe701d607d79fb433771e6bc732f611f92824da8e not found: ID does not exist" containerID="2cad0ec6daadb6f715cc194fe701d607d79fb433771e6bc732f611f92824da8e" Dec 03 13:44:58 crc kubenswrapper[4986]: I1203 13:44:58.002316 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cad0ec6daadb6f715cc194fe701d607d79fb433771e6bc732f611f92824da8e"} err="failed to get container status \"2cad0ec6daadb6f715cc194fe701d607d79fb433771e6bc732f611f92824da8e\": rpc error: code = NotFound desc = could not find container \"2cad0ec6daadb6f715cc194fe701d607d79fb433771e6bc732f611f92824da8e\": container with ID starting with 2cad0ec6daadb6f715cc194fe701d607d79fb433771e6bc732f611f92824da8e not found: ID does not exist" Dec 03 13:44:58 crc kubenswrapper[4986]: I1203 13:44:58.002332 4986 scope.go:117] "RemoveContainer" containerID="9e59599a8fb4b98010b91d6965fde184bd16c29b12f4d8583ec264b16bfaea37" Dec 03 13:44:58 crc kubenswrapper[4986]: E1203 13:44:58.002766 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e59599a8fb4b98010b91d6965fde184bd16c29b12f4d8583ec264b16bfaea37\": container with ID starting with 
9e59599a8fb4b98010b91d6965fde184bd16c29b12f4d8583ec264b16bfaea37 not found: ID does not exist" containerID="9e59599a8fb4b98010b91d6965fde184bd16c29b12f4d8583ec264b16bfaea37" Dec 03 13:44:58 crc kubenswrapper[4986]: I1203 13:44:58.002790 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e59599a8fb4b98010b91d6965fde184bd16c29b12f4d8583ec264b16bfaea37"} err="failed to get container status \"9e59599a8fb4b98010b91d6965fde184bd16c29b12f4d8583ec264b16bfaea37\": rpc error: code = NotFound desc = could not find container \"9e59599a8fb4b98010b91d6965fde184bd16c29b12f4d8583ec264b16bfaea37\": container with ID starting with 9e59599a8fb4b98010b91d6965fde184bd16c29b12f4d8583ec264b16bfaea37 not found: ID does not exist" Dec 03 13:44:59 crc kubenswrapper[4986]: I1203 13:44:59.131017 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e1c1919-fbf2-483b-9817-22345a644dd3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7e1c1919-fbf2-483b-9817-22345a644dd3" (UID: "7e1c1919-fbf2-483b-9817-22345a644dd3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:44:59 crc kubenswrapper[4986]: I1203 13:44:59.212350 4986 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e1c1919-fbf2-483b-9817-22345a644dd3-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 13:44:59 crc kubenswrapper[4986]: I1203 13:44:59.441737 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9zbmg"] Dec 03 13:44:59 crc kubenswrapper[4986]: I1203 13:44:59.450875 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9zbmg"] Dec 03 13:45:00 crc kubenswrapper[4986]: I1203 13:45:00.150505 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412825-tf8hr"] Dec 03 13:45:00 crc kubenswrapper[4986]: E1203 13:45:00.150879 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e1c1919-fbf2-483b-9817-22345a644dd3" containerName="extract-content" Dec 03 13:45:00 crc kubenswrapper[4986]: I1203 13:45:00.150892 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e1c1919-fbf2-483b-9817-22345a644dd3" containerName="extract-content" Dec 03 13:45:00 crc kubenswrapper[4986]: E1203 13:45:00.150909 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e1c1919-fbf2-483b-9817-22345a644dd3" containerName="registry-server" Dec 03 13:45:00 crc kubenswrapper[4986]: I1203 13:45:00.150915 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e1c1919-fbf2-483b-9817-22345a644dd3" containerName="registry-server" Dec 03 13:45:00 crc kubenswrapper[4986]: E1203 13:45:00.150961 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e1c1919-fbf2-483b-9817-22345a644dd3" containerName="extract-utilities" Dec 03 13:45:00 crc kubenswrapper[4986]: I1203 13:45:00.150968 4986 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7e1c1919-fbf2-483b-9817-22345a644dd3" containerName="extract-utilities" Dec 03 13:45:00 crc kubenswrapper[4986]: I1203 13:45:00.151165 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e1c1919-fbf2-483b-9817-22345a644dd3" containerName="registry-server" Dec 03 13:45:00 crc kubenswrapper[4986]: I1203 13:45:00.151892 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412825-tf8hr" Dec 03 13:45:00 crc kubenswrapper[4986]: I1203 13:45:00.155051 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 03 13:45:00 crc kubenswrapper[4986]: I1203 13:45:00.155272 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 03 13:45:00 crc kubenswrapper[4986]: I1203 13:45:00.162493 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412825-tf8hr"] Dec 03 13:45:00 crc kubenswrapper[4986]: I1203 13:45:00.329353 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0048aa60-3577-48c4-832a-62ab5f8565d7-config-volume\") pod \"collect-profiles-29412825-tf8hr\" (UID: \"0048aa60-3577-48c4-832a-62ab5f8565d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412825-tf8hr" Dec 03 13:45:00 crc kubenswrapper[4986]: I1203 13:45:00.329452 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m46sq\" (UniqueName: \"kubernetes.io/projected/0048aa60-3577-48c4-832a-62ab5f8565d7-kube-api-access-m46sq\") pod \"collect-profiles-29412825-tf8hr\" (UID: \"0048aa60-3577-48c4-832a-62ab5f8565d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412825-tf8hr" Dec 03 
13:45:00 crc kubenswrapper[4986]: I1203 13:45:00.329566 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0048aa60-3577-48c4-832a-62ab5f8565d7-secret-volume\") pod \"collect-profiles-29412825-tf8hr\" (UID: \"0048aa60-3577-48c4-832a-62ab5f8565d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412825-tf8hr" Dec 03 13:45:00 crc kubenswrapper[4986]: I1203 13:45:00.431896 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0048aa60-3577-48c4-832a-62ab5f8565d7-secret-volume\") pod \"collect-profiles-29412825-tf8hr\" (UID: \"0048aa60-3577-48c4-832a-62ab5f8565d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412825-tf8hr" Dec 03 13:45:00 crc kubenswrapper[4986]: I1203 13:45:00.432090 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0048aa60-3577-48c4-832a-62ab5f8565d7-config-volume\") pod \"collect-profiles-29412825-tf8hr\" (UID: \"0048aa60-3577-48c4-832a-62ab5f8565d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412825-tf8hr" Dec 03 13:45:00 crc kubenswrapper[4986]: I1203 13:45:00.432124 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m46sq\" (UniqueName: \"kubernetes.io/projected/0048aa60-3577-48c4-832a-62ab5f8565d7-kube-api-access-m46sq\") pod \"collect-profiles-29412825-tf8hr\" (UID: \"0048aa60-3577-48c4-832a-62ab5f8565d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412825-tf8hr" Dec 03 13:45:00 crc kubenswrapper[4986]: I1203 13:45:00.432944 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0048aa60-3577-48c4-832a-62ab5f8565d7-config-volume\") pod \"collect-profiles-29412825-tf8hr\" (UID: 
\"0048aa60-3577-48c4-832a-62ab5f8565d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412825-tf8hr" Dec 03 13:45:00 crc kubenswrapper[4986]: I1203 13:45:00.438432 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0048aa60-3577-48c4-832a-62ab5f8565d7-secret-volume\") pod \"collect-profiles-29412825-tf8hr\" (UID: \"0048aa60-3577-48c4-832a-62ab5f8565d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412825-tf8hr" Dec 03 13:45:00 crc kubenswrapper[4986]: I1203 13:45:00.449380 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m46sq\" (UniqueName: \"kubernetes.io/projected/0048aa60-3577-48c4-832a-62ab5f8565d7-kube-api-access-m46sq\") pod \"collect-profiles-29412825-tf8hr\" (UID: \"0048aa60-3577-48c4-832a-62ab5f8565d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412825-tf8hr" Dec 03 13:45:00 crc kubenswrapper[4986]: I1203 13:45:00.474864 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412825-tf8hr" Dec 03 13:45:00 crc kubenswrapper[4986]: I1203 13:45:00.953347 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e1c1919-fbf2-483b-9817-22345a644dd3" path="/var/lib/kubelet/pods/7e1c1919-fbf2-483b-9817-22345a644dd3/volumes" Dec 03 13:45:01 crc kubenswrapper[4986]: I1203 13:45:01.027133 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412825-tf8hr"] Dec 03 13:45:01 crc kubenswrapper[4986]: I1203 13:45:01.940783 4986 generic.go:334] "Generic (PLEG): container finished" podID="0048aa60-3577-48c4-832a-62ab5f8565d7" containerID="9981af08bc0a3d4f65f7f964341d72762b49b2761ec8e790195978074b8225f9" exitCode=0 Dec 03 13:45:01 crc kubenswrapper[4986]: I1203 13:45:01.940860 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412825-tf8hr" event={"ID":"0048aa60-3577-48c4-832a-62ab5f8565d7","Type":"ContainerDied","Data":"9981af08bc0a3d4f65f7f964341d72762b49b2761ec8e790195978074b8225f9"} Dec 03 13:45:01 crc kubenswrapper[4986]: I1203 13:45:01.941100 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412825-tf8hr" event={"ID":"0048aa60-3577-48c4-832a-62ab5f8565d7","Type":"ContainerStarted","Data":"9b4be165043651ecc0c76704e62cbb191b916c47c422bf2b4372a1d42ecc177b"} Dec 03 13:45:03 crc kubenswrapper[4986]: I1203 13:45:03.279021 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412825-tf8hr" Dec 03 13:45:03 crc kubenswrapper[4986]: I1203 13:45:03.387487 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m46sq\" (UniqueName: \"kubernetes.io/projected/0048aa60-3577-48c4-832a-62ab5f8565d7-kube-api-access-m46sq\") pod \"0048aa60-3577-48c4-832a-62ab5f8565d7\" (UID: \"0048aa60-3577-48c4-832a-62ab5f8565d7\") " Dec 03 13:45:03 crc kubenswrapper[4986]: I1203 13:45:03.387741 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0048aa60-3577-48c4-832a-62ab5f8565d7-secret-volume\") pod \"0048aa60-3577-48c4-832a-62ab5f8565d7\" (UID: \"0048aa60-3577-48c4-832a-62ab5f8565d7\") " Dec 03 13:45:03 crc kubenswrapper[4986]: I1203 13:45:03.387856 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0048aa60-3577-48c4-832a-62ab5f8565d7-config-volume\") pod \"0048aa60-3577-48c4-832a-62ab5f8565d7\" (UID: \"0048aa60-3577-48c4-832a-62ab5f8565d7\") " Dec 03 13:45:03 crc kubenswrapper[4986]: I1203 13:45:03.388419 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0048aa60-3577-48c4-832a-62ab5f8565d7-config-volume" (OuterVolumeSpecName: "config-volume") pod "0048aa60-3577-48c4-832a-62ab5f8565d7" (UID: "0048aa60-3577-48c4-832a-62ab5f8565d7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:45:03 crc kubenswrapper[4986]: I1203 13:45:03.393444 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0048aa60-3577-48c4-832a-62ab5f8565d7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0048aa60-3577-48c4-832a-62ab5f8565d7" (UID: "0048aa60-3577-48c4-832a-62ab5f8565d7"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:45:03 crc kubenswrapper[4986]: I1203 13:45:03.393522 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0048aa60-3577-48c4-832a-62ab5f8565d7-kube-api-access-m46sq" (OuterVolumeSpecName: "kube-api-access-m46sq") pod "0048aa60-3577-48c4-832a-62ab5f8565d7" (UID: "0048aa60-3577-48c4-832a-62ab5f8565d7"). InnerVolumeSpecName "kube-api-access-m46sq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:45:03 crc kubenswrapper[4986]: I1203 13:45:03.490826 4986 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0048aa60-3577-48c4-832a-62ab5f8565d7-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 03 13:45:03 crc kubenswrapper[4986]: I1203 13:45:03.490889 4986 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0048aa60-3577-48c4-832a-62ab5f8565d7-config-volume\") on node \"crc\" DevicePath \"\"" Dec 03 13:45:03 crc kubenswrapper[4986]: I1203 13:45:03.490906 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m46sq\" (UniqueName: \"kubernetes.io/projected/0048aa60-3577-48c4-832a-62ab5f8565d7-kube-api-access-m46sq\") on node \"crc\" DevicePath \"\"" Dec 03 13:45:03 crc kubenswrapper[4986]: I1203 13:45:03.491485 4986 patch_prober.go:28] interesting pod/machine-config-daemon-xggpj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 13:45:03 crc kubenswrapper[4986]: I1203 13:45:03.491552 4986 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 13:45:03 crc kubenswrapper[4986]: I1203 13:45:03.491605 4986 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" Dec 03 13:45:03 crc kubenswrapper[4986]: I1203 13:45:03.492763 4986 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bc3d6d2a861df402c588ed3c0b6e28555a58365f9870f007a996075068d0e641"} pod="openshift-machine-config-operator/machine-config-daemon-xggpj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 13:45:03 crc kubenswrapper[4986]: I1203 13:45:03.492825 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" containerID="cri-o://bc3d6d2a861df402c588ed3c0b6e28555a58365f9870f007a996075068d0e641" gracePeriod=600 Dec 03 13:45:03 crc kubenswrapper[4986]: E1203 13:45:03.615359 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:45:03 crc kubenswrapper[4986]: I1203 13:45:03.957725 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412825-tf8hr" event={"ID":"0048aa60-3577-48c4-832a-62ab5f8565d7","Type":"ContainerDied","Data":"9b4be165043651ecc0c76704e62cbb191b916c47c422bf2b4372a1d42ecc177b"} Dec 03 13:45:03 crc kubenswrapper[4986]: 
I1203 13:45:03.957776 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b4be165043651ecc0c76704e62cbb191b916c47c422bf2b4372a1d42ecc177b" Dec 03 13:45:03 crc kubenswrapper[4986]: I1203 13:45:03.957785 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412825-tf8hr" Dec 03 13:45:03 crc kubenswrapper[4986]: I1203 13:45:03.961225 4986 generic.go:334] "Generic (PLEG): container finished" podID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerID="bc3d6d2a861df402c588ed3c0b6e28555a58365f9870f007a996075068d0e641" exitCode=0 Dec 03 13:45:03 crc kubenswrapper[4986]: I1203 13:45:03.961256 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" event={"ID":"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c","Type":"ContainerDied","Data":"bc3d6d2a861df402c588ed3c0b6e28555a58365f9870f007a996075068d0e641"} Dec 03 13:45:03 crc kubenswrapper[4986]: I1203 13:45:03.961296 4986 scope.go:117] "RemoveContainer" containerID="0bddd0a15eacde8641e29df77fc8459eb902921d6b45739e6df93c4ee9eb0dda" Dec 03 13:45:03 crc kubenswrapper[4986]: I1203 13:45:03.961892 4986 scope.go:117] "RemoveContainer" containerID="bc3d6d2a861df402c588ed3c0b6e28555a58365f9870f007a996075068d0e641" Dec 03 13:45:03 crc kubenswrapper[4986]: E1203 13:45:03.962177 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:45:04 crc kubenswrapper[4986]: I1203 13:45:04.353543 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29412780-c4mlz"] Dec 03 13:45:04 crc kubenswrapper[4986]: I1203 13:45:04.361793 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412780-2k56t"] Dec 03 13:45:04 crc kubenswrapper[4986]: I1203 13:45:04.370116 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412780-c4mlz"] Dec 03 13:45:04 crc kubenswrapper[4986]: I1203 13:45:04.378573 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412780-2k56t"] Dec 03 13:45:04 crc kubenswrapper[4986]: I1203 13:45:04.955781 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3474dcb3-8f5b-4eed-ada7-ec711dae3b1a" path="/var/lib/kubelet/pods/3474dcb3-8f5b-4eed-ada7-ec711dae3b1a/volumes" Dec 03 13:45:04 crc kubenswrapper[4986]: I1203 13:45:04.956779 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f68bd6da-f9ec-44ee-9a80-1b5820c75d8e" path="/var/lib/kubelet/pods/f68bd6da-f9ec-44ee-9a80-1b5820c75d8e/volumes" Dec 03 13:45:15 crc kubenswrapper[4986]: I1203 13:45:15.066412 4986 generic.go:334] "Generic (PLEG): container finished" podID="963319ab-2780-4d81-bf46-9b6dee690eeb" containerID="d8f855c7210ddb0ba026f58d6fd4f3d983b14a8d42838c52c4fa5f1b838a5c90" exitCode=0 Dec 03 13:45:15 crc kubenswrapper[4986]: I1203 13:45:15.066495 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2vnss" event={"ID":"963319ab-2780-4d81-bf46-9b6dee690eeb","Type":"ContainerDied","Data":"d8f855c7210ddb0ba026f58d6fd4f3d983b14a8d42838c52c4fa5f1b838a5c90"} Dec 03 13:45:16 crc kubenswrapper[4986]: I1203 13:45:16.483781 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2vnss" Dec 03 13:45:16 crc kubenswrapper[4986]: I1203 13:45:16.533895 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/963319ab-2780-4d81-bf46-9b6dee690eeb-ceilometer-compute-config-data-2\") pod \"963319ab-2780-4d81-bf46-9b6dee690eeb\" (UID: \"963319ab-2780-4d81-bf46-9b6dee690eeb\") " Dec 03 13:45:16 crc kubenswrapper[4986]: I1203 13:45:16.534046 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/963319ab-2780-4d81-bf46-9b6dee690eeb-inventory\") pod \"963319ab-2780-4d81-bf46-9b6dee690eeb\" (UID: \"963319ab-2780-4d81-bf46-9b6dee690eeb\") " Dec 03 13:45:16 crc kubenswrapper[4986]: I1203 13:45:16.534091 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqjb5\" (UniqueName: \"kubernetes.io/projected/963319ab-2780-4d81-bf46-9b6dee690eeb-kube-api-access-hqjb5\") pod \"963319ab-2780-4d81-bf46-9b6dee690eeb\" (UID: \"963319ab-2780-4d81-bf46-9b6dee690eeb\") " Dec 03 13:45:16 crc kubenswrapper[4986]: I1203 13:45:16.534144 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/963319ab-2780-4d81-bf46-9b6dee690eeb-telemetry-combined-ca-bundle\") pod \"963319ab-2780-4d81-bf46-9b6dee690eeb\" (UID: \"963319ab-2780-4d81-bf46-9b6dee690eeb\") " Dec 03 13:45:16 crc kubenswrapper[4986]: I1203 13:45:16.534198 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/963319ab-2780-4d81-bf46-9b6dee690eeb-ceilometer-compute-config-data-0\") pod \"963319ab-2780-4d81-bf46-9b6dee690eeb\" (UID: \"963319ab-2780-4d81-bf46-9b6dee690eeb\") " Dec 03 13:45:16 crc kubenswrapper[4986]: 
I1203 13:45:16.534236 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/963319ab-2780-4d81-bf46-9b6dee690eeb-ceilometer-compute-config-data-1\") pod \"963319ab-2780-4d81-bf46-9b6dee690eeb\" (UID: \"963319ab-2780-4d81-bf46-9b6dee690eeb\") " Dec 03 13:45:16 crc kubenswrapper[4986]: I1203 13:45:16.534265 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/963319ab-2780-4d81-bf46-9b6dee690eeb-ssh-key\") pod \"963319ab-2780-4d81-bf46-9b6dee690eeb\" (UID: \"963319ab-2780-4d81-bf46-9b6dee690eeb\") " Dec 03 13:45:16 crc kubenswrapper[4986]: I1203 13:45:16.540169 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/963319ab-2780-4d81-bf46-9b6dee690eeb-kube-api-access-hqjb5" (OuterVolumeSpecName: "kube-api-access-hqjb5") pod "963319ab-2780-4d81-bf46-9b6dee690eeb" (UID: "963319ab-2780-4d81-bf46-9b6dee690eeb"). InnerVolumeSpecName "kube-api-access-hqjb5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:45:16 crc kubenswrapper[4986]: I1203 13:45:16.540445 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/963319ab-2780-4d81-bf46-9b6dee690eeb-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "963319ab-2780-4d81-bf46-9b6dee690eeb" (UID: "963319ab-2780-4d81-bf46-9b6dee690eeb"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:45:16 crc kubenswrapper[4986]: I1203 13:45:16.563823 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/963319ab-2780-4d81-bf46-9b6dee690eeb-inventory" (OuterVolumeSpecName: "inventory") pod "963319ab-2780-4d81-bf46-9b6dee690eeb" (UID: "963319ab-2780-4d81-bf46-9b6dee690eeb"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:45:16 crc kubenswrapper[4986]: I1203 13:45:16.564521 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/963319ab-2780-4d81-bf46-9b6dee690eeb-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "963319ab-2780-4d81-bf46-9b6dee690eeb" (UID: "963319ab-2780-4d81-bf46-9b6dee690eeb"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:45:16 crc kubenswrapper[4986]: I1203 13:45:16.566000 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/963319ab-2780-4d81-bf46-9b6dee690eeb-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "963319ab-2780-4d81-bf46-9b6dee690eeb" (UID: "963319ab-2780-4d81-bf46-9b6dee690eeb"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:45:16 crc kubenswrapper[4986]: I1203 13:45:16.571348 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/963319ab-2780-4d81-bf46-9b6dee690eeb-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "963319ab-2780-4d81-bf46-9b6dee690eeb" (UID: "963319ab-2780-4d81-bf46-9b6dee690eeb"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:45:16 crc kubenswrapper[4986]: I1203 13:45:16.574432 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/963319ab-2780-4d81-bf46-9b6dee690eeb-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "963319ab-2780-4d81-bf46-9b6dee690eeb" (UID: "963319ab-2780-4d81-bf46-9b6dee690eeb"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:45:16 crc kubenswrapper[4986]: I1203 13:45:16.636150 4986 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/963319ab-2780-4d81-bf46-9b6dee690eeb-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 13:45:16 crc kubenswrapper[4986]: I1203 13:45:16.636189 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqjb5\" (UniqueName: \"kubernetes.io/projected/963319ab-2780-4d81-bf46-9b6dee690eeb-kube-api-access-hqjb5\") on node \"crc\" DevicePath \"\"" Dec 03 13:45:16 crc kubenswrapper[4986]: I1203 13:45:16.636201 4986 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/963319ab-2780-4d81-bf46-9b6dee690eeb-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 13:45:16 crc kubenswrapper[4986]: I1203 13:45:16.636212 4986 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/963319ab-2780-4d81-bf46-9b6dee690eeb-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Dec 03 13:45:16 crc kubenswrapper[4986]: I1203 13:45:16.636222 4986 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/963319ab-2780-4d81-bf46-9b6dee690eeb-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Dec 03 13:45:16 crc kubenswrapper[4986]: I1203 13:45:16.636231 4986 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/963319ab-2780-4d81-bf46-9b6dee690eeb-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 13:45:16 crc kubenswrapper[4986]: I1203 13:45:16.636240 4986 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: 
\"kubernetes.io/secret/963319ab-2780-4d81-bf46-9b6dee690eeb-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Dec 03 13:45:16 crc kubenswrapper[4986]: I1203 13:45:16.943034 4986 scope.go:117] "RemoveContainer" containerID="bc3d6d2a861df402c588ed3c0b6e28555a58365f9870f007a996075068d0e641" Dec 03 13:45:16 crc kubenswrapper[4986]: E1203 13:45:16.943494 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:45:17 crc kubenswrapper[4986]: I1203 13:45:17.087459 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2vnss" event={"ID":"963319ab-2780-4d81-bf46-9b6dee690eeb","Type":"ContainerDied","Data":"a753b53020a44d775c2bcc9e439e965dc7ecea660a7b51a41c097278dbcb4baf"} Dec 03 13:45:17 crc kubenswrapper[4986]: I1203 13:45:17.087497 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a753b53020a44d775c2bcc9e439e965dc7ecea660a7b51a41c097278dbcb4baf" Dec 03 13:45:17 crc kubenswrapper[4986]: I1203 13:45:17.087575 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2vnss" Dec 03 13:45:31 crc kubenswrapper[4986]: I1203 13:45:31.943251 4986 scope.go:117] "RemoveContainer" containerID="bc3d6d2a861df402c588ed3c0b6e28555a58365f9870f007a996075068d0e641" Dec 03 13:45:31 crc kubenswrapper[4986]: E1203 13:45:31.943971 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:45:45 crc kubenswrapper[4986]: I1203 13:45:45.944592 4986 scope.go:117] "RemoveContainer" containerID="bc3d6d2a861df402c588ed3c0b6e28555a58365f9870f007a996075068d0e641" Dec 03 13:45:45 crc kubenswrapper[4986]: E1203 13:45:45.945361 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:45:47 crc kubenswrapper[4986]: I1203 13:45:47.705145 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7pp5j"] Dec 03 13:45:47 crc kubenswrapper[4986]: E1203 13:45:47.706258 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="963319ab-2780-4d81-bf46-9b6dee690eeb" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 03 13:45:47 crc kubenswrapper[4986]: I1203 13:45:47.706275 4986 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="963319ab-2780-4d81-bf46-9b6dee690eeb" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 03 13:45:47 crc kubenswrapper[4986]: E1203 13:45:47.706313 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0048aa60-3577-48c4-832a-62ab5f8565d7" containerName="collect-profiles" Dec 03 13:45:47 crc kubenswrapper[4986]: I1203 13:45:47.706321 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="0048aa60-3577-48c4-832a-62ab5f8565d7" containerName="collect-profiles" Dec 03 13:45:47 crc kubenswrapper[4986]: I1203 13:45:47.706496 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="0048aa60-3577-48c4-832a-62ab5f8565d7" containerName="collect-profiles" Dec 03 13:45:47 crc kubenswrapper[4986]: I1203 13:45:47.706524 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="963319ab-2780-4d81-bf46-9b6dee690eeb" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 03 13:45:47 crc kubenswrapper[4986]: I1203 13:45:47.708052 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7pp5j" Dec 03 13:45:47 crc kubenswrapper[4986]: I1203 13:45:47.714309 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7pp5j"] Dec 03 13:45:47 crc kubenswrapper[4986]: I1203 13:45:47.856944 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbb4faa7-ae3d-4eb0-a128-75cd52b148a3-catalog-content\") pod \"certified-operators-7pp5j\" (UID: \"dbb4faa7-ae3d-4eb0-a128-75cd52b148a3\") " pod="openshift-marketplace/certified-operators-7pp5j" Dec 03 13:45:47 crc kubenswrapper[4986]: I1203 13:45:47.857070 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbb4faa7-ae3d-4eb0-a128-75cd52b148a3-utilities\") pod \"certified-operators-7pp5j\" (UID: \"dbb4faa7-ae3d-4eb0-a128-75cd52b148a3\") " pod="openshift-marketplace/certified-operators-7pp5j" Dec 03 13:45:47 crc kubenswrapper[4986]: I1203 13:45:47.857110 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j944p\" (UniqueName: \"kubernetes.io/projected/dbb4faa7-ae3d-4eb0-a128-75cd52b148a3-kube-api-access-j944p\") pod \"certified-operators-7pp5j\" (UID: \"dbb4faa7-ae3d-4eb0-a128-75cd52b148a3\") " pod="openshift-marketplace/certified-operators-7pp5j" Dec 03 13:45:47 crc kubenswrapper[4986]: I1203 13:45:47.958497 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbb4faa7-ae3d-4eb0-a128-75cd52b148a3-utilities\") pod \"certified-operators-7pp5j\" (UID: \"dbb4faa7-ae3d-4eb0-a128-75cd52b148a3\") " pod="openshift-marketplace/certified-operators-7pp5j" Dec 03 13:45:47 crc kubenswrapper[4986]: I1203 13:45:47.958562 4986 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-j944p\" (UniqueName: \"kubernetes.io/projected/dbb4faa7-ae3d-4eb0-a128-75cd52b148a3-kube-api-access-j944p\") pod \"certified-operators-7pp5j\" (UID: \"dbb4faa7-ae3d-4eb0-a128-75cd52b148a3\") " pod="openshift-marketplace/certified-operators-7pp5j" Dec 03 13:45:47 crc kubenswrapper[4986]: I1203 13:45:47.958763 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbb4faa7-ae3d-4eb0-a128-75cd52b148a3-catalog-content\") pod \"certified-operators-7pp5j\" (UID: \"dbb4faa7-ae3d-4eb0-a128-75cd52b148a3\") " pod="openshift-marketplace/certified-operators-7pp5j" Dec 03 13:45:47 crc kubenswrapper[4986]: I1203 13:45:47.959153 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbb4faa7-ae3d-4eb0-a128-75cd52b148a3-utilities\") pod \"certified-operators-7pp5j\" (UID: \"dbb4faa7-ae3d-4eb0-a128-75cd52b148a3\") " pod="openshift-marketplace/certified-operators-7pp5j" Dec 03 13:45:47 crc kubenswrapper[4986]: I1203 13:45:47.959205 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbb4faa7-ae3d-4eb0-a128-75cd52b148a3-catalog-content\") pod \"certified-operators-7pp5j\" (UID: \"dbb4faa7-ae3d-4eb0-a128-75cd52b148a3\") " pod="openshift-marketplace/certified-operators-7pp5j" Dec 03 13:45:47 crc kubenswrapper[4986]: I1203 13:45:47.979507 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j944p\" (UniqueName: \"kubernetes.io/projected/dbb4faa7-ae3d-4eb0-a128-75cd52b148a3-kube-api-access-j944p\") pod \"certified-operators-7pp5j\" (UID: \"dbb4faa7-ae3d-4eb0-a128-75cd52b148a3\") " pod="openshift-marketplace/certified-operators-7pp5j" Dec 03 13:45:48 crc kubenswrapper[4986]: I1203 13:45:48.029586 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7pp5j" Dec 03 13:45:48 crc kubenswrapper[4986]: I1203 13:45:48.559934 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7pp5j"] Dec 03 13:45:49 crc kubenswrapper[4986]: I1203 13:45:49.392227 4986 generic.go:334] "Generic (PLEG): container finished" podID="dbb4faa7-ae3d-4eb0-a128-75cd52b148a3" containerID="a8041b2b31624b525aa37e123957f63f063a2655139f37be160d4b94cf29462e" exitCode=0 Dec 03 13:45:49 crc kubenswrapper[4986]: I1203 13:45:49.392328 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7pp5j" event={"ID":"dbb4faa7-ae3d-4eb0-a128-75cd52b148a3","Type":"ContainerDied","Data":"a8041b2b31624b525aa37e123957f63f063a2655139f37be160d4b94cf29462e"} Dec 03 13:45:49 crc kubenswrapper[4986]: I1203 13:45:49.392524 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7pp5j" event={"ID":"dbb4faa7-ae3d-4eb0-a128-75cd52b148a3","Type":"ContainerStarted","Data":"9daac9c94490a2f7e6f97a3150dc1f943657d29564dc6e8a692148b45a804450"} Dec 03 13:45:49 crc kubenswrapper[4986]: I1203 13:45:49.394092 4986 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 13:45:51 crc kubenswrapper[4986]: I1203 13:45:51.415389 4986 generic.go:334] "Generic (PLEG): container finished" podID="dbb4faa7-ae3d-4eb0-a128-75cd52b148a3" containerID="79b0f276c9bb94f1942486060f4c64bbd03d65ea061c86bc25c277aebafd9c45" exitCode=0 Dec 03 13:45:51 crc kubenswrapper[4986]: I1203 13:45:51.415444 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7pp5j" event={"ID":"dbb4faa7-ae3d-4eb0-a128-75cd52b148a3","Type":"ContainerDied","Data":"79b0f276c9bb94f1942486060f4c64bbd03d65ea061c86bc25c277aebafd9c45"} Dec 03 13:45:53 crc kubenswrapper[4986]: I1203 13:45:53.434039 4986 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-7pp5j" event={"ID":"dbb4faa7-ae3d-4eb0-a128-75cd52b148a3","Type":"ContainerStarted","Data":"d89d32283e8207c8aa2ca53cb1e97bea99309bd837ecd430d76efe038f6febef"} Dec 03 13:45:53 crc kubenswrapper[4986]: I1203 13:45:53.456673 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7pp5j" podStartSLOduration=2.9034700669999998 podStartE2EDuration="6.456651862s" podCreationTimestamp="2025-12-03 13:45:47 +0000 UTC" firstStartedPulling="2025-12-03 13:45:49.393744858 +0000 UTC m=+3008.860176049" lastFinishedPulling="2025-12-03 13:45:52.946926653 +0000 UTC m=+3012.413357844" observedRunningTime="2025-12-03 13:45:53.448187252 +0000 UTC m=+3012.914618453" watchObservedRunningTime="2025-12-03 13:45:53.456651862 +0000 UTC m=+3012.923083073" Dec 03 13:45:56 crc kubenswrapper[4986]: I1203 13:45:56.943873 4986 scope.go:117] "RemoveContainer" containerID="bc3d6d2a861df402c588ed3c0b6e28555a58365f9870f007a996075068d0e641" Dec 03 13:45:56 crc kubenswrapper[4986]: E1203 13:45:56.944812 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:45:57 crc kubenswrapper[4986]: I1203 13:45:57.618148 4986 scope.go:117] "RemoveContainer" containerID="f4daa17ef3f81eb15e49faf9923d47649cbb27674c0588e430e42b009f1926ac" Dec 03 13:45:57 crc kubenswrapper[4986]: I1203 13:45:57.650688 4986 scope.go:117] "RemoveContainer" containerID="e4f38f90fa9d5d2989041bb12d203e764d18eb4cf3e4ff97a25e583616d8b82b" Dec 03 13:45:58 crc kubenswrapper[4986]: I1203 13:45:58.030369 4986 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/certified-operators-7pp5j" Dec 03 13:45:58 crc kubenswrapper[4986]: I1203 13:45:58.030755 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7pp5j" Dec 03 13:45:58 crc kubenswrapper[4986]: I1203 13:45:58.079424 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7pp5j" Dec 03 13:45:58 crc kubenswrapper[4986]: I1203 13:45:58.518120 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7pp5j" Dec 03 13:45:58 crc kubenswrapper[4986]: I1203 13:45:58.563815 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7pp5j"] Dec 03 13:46:00 crc kubenswrapper[4986]: I1203 13:46:00.494169 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7pp5j" podUID="dbb4faa7-ae3d-4eb0-a128-75cd52b148a3" containerName="registry-server" containerID="cri-o://d89d32283e8207c8aa2ca53cb1e97bea99309bd837ecd430d76efe038f6febef" gracePeriod=2 Dec 03 13:46:00 crc kubenswrapper[4986]: I1203 13:46:00.949406 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7pp5j" Dec 03 13:46:01 crc kubenswrapper[4986]: I1203 13:46:01.114000 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j944p\" (UniqueName: \"kubernetes.io/projected/dbb4faa7-ae3d-4eb0-a128-75cd52b148a3-kube-api-access-j944p\") pod \"dbb4faa7-ae3d-4eb0-a128-75cd52b148a3\" (UID: \"dbb4faa7-ae3d-4eb0-a128-75cd52b148a3\") " Dec 03 13:46:01 crc kubenswrapper[4986]: I1203 13:46:01.115056 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbb4faa7-ae3d-4eb0-a128-75cd52b148a3-utilities\") pod \"dbb4faa7-ae3d-4eb0-a128-75cd52b148a3\" (UID: \"dbb4faa7-ae3d-4eb0-a128-75cd52b148a3\") " Dec 03 13:46:01 crc kubenswrapper[4986]: I1203 13:46:01.115173 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbb4faa7-ae3d-4eb0-a128-75cd52b148a3-catalog-content\") pod \"dbb4faa7-ae3d-4eb0-a128-75cd52b148a3\" (UID: \"dbb4faa7-ae3d-4eb0-a128-75cd52b148a3\") " Dec 03 13:46:01 crc kubenswrapper[4986]: I1203 13:46:01.116003 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbb4faa7-ae3d-4eb0-a128-75cd52b148a3-utilities" (OuterVolumeSpecName: "utilities") pod "dbb4faa7-ae3d-4eb0-a128-75cd52b148a3" (UID: "dbb4faa7-ae3d-4eb0-a128-75cd52b148a3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:46:01 crc kubenswrapper[4986]: I1203 13:46:01.127191 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbb4faa7-ae3d-4eb0-a128-75cd52b148a3-kube-api-access-j944p" (OuterVolumeSpecName: "kube-api-access-j944p") pod "dbb4faa7-ae3d-4eb0-a128-75cd52b148a3" (UID: "dbb4faa7-ae3d-4eb0-a128-75cd52b148a3"). InnerVolumeSpecName "kube-api-access-j944p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:46:01 crc kubenswrapper[4986]: I1203 13:46:01.183491 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbb4faa7-ae3d-4eb0-a128-75cd52b148a3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dbb4faa7-ae3d-4eb0-a128-75cd52b148a3" (UID: "dbb4faa7-ae3d-4eb0-a128-75cd52b148a3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:46:01 crc kubenswrapper[4986]: I1203 13:46:01.217915 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j944p\" (UniqueName: \"kubernetes.io/projected/dbb4faa7-ae3d-4eb0-a128-75cd52b148a3-kube-api-access-j944p\") on node \"crc\" DevicePath \"\"" Dec 03 13:46:01 crc kubenswrapper[4986]: I1203 13:46:01.217960 4986 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbb4faa7-ae3d-4eb0-a128-75cd52b148a3-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 13:46:01 crc kubenswrapper[4986]: I1203 13:46:01.217971 4986 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbb4faa7-ae3d-4eb0-a128-75cd52b148a3-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 13:46:01 crc kubenswrapper[4986]: I1203 13:46:01.557674 4986 generic.go:334] "Generic (PLEG): container finished" podID="dbb4faa7-ae3d-4eb0-a128-75cd52b148a3" containerID="d89d32283e8207c8aa2ca53cb1e97bea99309bd837ecd430d76efe038f6febef" exitCode=0 Dec 03 13:46:01 crc kubenswrapper[4986]: I1203 13:46:01.557713 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7pp5j" event={"ID":"dbb4faa7-ae3d-4eb0-a128-75cd52b148a3","Type":"ContainerDied","Data":"d89d32283e8207c8aa2ca53cb1e97bea99309bd837ecd430d76efe038f6febef"} Dec 03 13:46:01 crc kubenswrapper[4986]: I1203 13:46:01.557746 4986 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-7pp5j" event={"ID":"dbb4faa7-ae3d-4eb0-a128-75cd52b148a3","Type":"ContainerDied","Data":"9daac9c94490a2f7e6f97a3150dc1f943657d29564dc6e8a692148b45a804450"} Dec 03 13:46:01 crc kubenswrapper[4986]: I1203 13:46:01.557763 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7pp5j" Dec 03 13:46:01 crc kubenswrapper[4986]: I1203 13:46:01.557780 4986 scope.go:117] "RemoveContainer" containerID="d89d32283e8207c8aa2ca53cb1e97bea99309bd837ecd430d76efe038f6febef" Dec 03 13:46:01 crc kubenswrapper[4986]: I1203 13:46:01.587170 4986 scope.go:117] "RemoveContainer" containerID="79b0f276c9bb94f1942486060f4c64bbd03d65ea061c86bc25c277aebafd9c45" Dec 03 13:46:01 crc kubenswrapper[4986]: I1203 13:46:01.609372 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7pp5j"] Dec 03 13:46:01 crc kubenswrapper[4986]: I1203 13:46:01.615665 4986 scope.go:117] "RemoveContainer" containerID="a8041b2b31624b525aa37e123957f63f063a2655139f37be160d4b94cf29462e" Dec 03 13:46:01 crc kubenswrapper[4986]: I1203 13:46:01.617923 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7pp5j"] Dec 03 13:46:01 crc kubenswrapper[4986]: I1203 13:46:01.658608 4986 scope.go:117] "RemoveContainer" containerID="d89d32283e8207c8aa2ca53cb1e97bea99309bd837ecd430d76efe038f6febef" Dec 03 13:46:01 crc kubenswrapper[4986]: E1203 13:46:01.659091 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d89d32283e8207c8aa2ca53cb1e97bea99309bd837ecd430d76efe038f6febef\": container with ID starting with d89d32283e8207c8aa2ca53cb1e97bea99309bd837ecd430d76efe038f6febef not found: ID does not exist" containerID="d89d32283e8207c8aa2ca53cb1e97bea99309bd837ecd430d76efe038f6febef" Dec 03 13:46:01 crc kubenswrapper[4986]: I1203 
13:46:01.659127 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d89d32283e8207c8aa2ca53cb1e97bea99309bd837ecd430d76efe038f6febef"} err="failed to get container status \"d89d32283e8207c8aa2ca53cb1e97bea99309bd837ecd430d76efe038f6febef\": rpc error: code = NotFound desc = could not find container \"d89d32283e8207c8aa2ca53cb1e97bea99309bd837ecd430d76efe038f6febef\": container with ID starting with d89d32283e8207c8aa2ca53cb1e97bea99309bd837ecd430d76efe038f6febef not found: ID does not exist" Dec 03 13:46:01 crc kubenswrapper[4986]: I1203 13:46:01.659153 4986 scope.go:117] "RemoveContainer" containerID="79b0f276c9bb94f1942486060f4c64bbd03d65ea061c86bc25c277aebafd9c45" Dec 03 13:46:01 crc kubenswrapper[4986]: E1203 13:46:01.659467 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79b0f276c9bb94f1942486060f4c64bbd03d65ea061c86bc25c277aebafd9c45\": container with ID starting with 79b0f276c9bb94f1942486060f4c64bbd03d65ea061c86bc25c277aebafd9c45 not found: ID does not exist" containerID="79b0f276c9bb94f1942486060f4c64bbd03d65ea061c86bc25c277aebafd9c45" Dec 03 13:46:01 crc kubenswrapper[4986]: I1203 13:46:01.659498 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79b0f276c9bb94f1942486060f4c64bbd03d65ea061c86bc25c277aebafd9c45"} err="failed to get container status \"79b0f276c9bb94f1942486060f4c64bbd03d65ea061c86bc25c277aebafd9c45\": rpc error: code = NotFound desc = could not find container \"79b0f276c9bb94f1942486060f4c64bbd03d65ea061c86bc25c277aebafd9c45\": container with ID starting with 79b0f276c9bb94f1942486060f4c64bbd03d65ea061c86bc25c277aebafd9c45 not found: ID does not exist" Dec 03 13:46:01 crc kubenswrapper[4986]: I1203 13:46:01.659517 4986 scope.go:117] "RemoveContainer" containerID="a8041b2b31624b525aa37e123957f63f063a2655139f37be160d4b94cf29462e" Dec 03 13:46:01 crc 
kubenswrapper[4986]: E1203 13:46:01.659827 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8041b2b31624b525aa37e123957f63f063a2655139f37be160d4b94cf29462e\": container with ID starting with a8041b2b31624b525aa37e123957f63f063a2655139f37be160d4b94cf29462e not found: ID does not exist" containerID="a8041b2b31624b525aa37e123957f63f063a2655139f37be160d4b94cf29462e" Dec 03 13:46:01 crc kubenswrapper[4986]: I1203 13:46:01.659851 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8041b2b31624b525aa37e123957f63f063a2655139f37be160d4b94cf29462e"} err="failed to get container status \"a8041b2b31624b525aa37e123957f63f063a2655139f37be160d4b94cf29462e\": rpc error: code = NotFound desc = could not find container \"a8041b2b31624b525aa37e123957f63f063a2655139f37be160d4b94cf29462e\": container with ID starting with a8041b2b31624b525aa37e123957f63f063a2655139f37be160d4b94cf29462e not found: ID does not exist" Dec 03 13:46:02 crc kubenswrapper[4986]: I1203 13:46:02.954232 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbb4faa7-ae3d-4eb0-a128-75cd52b148a3" path="/var/lib/kubelet/pods/dbb4faa7-ae3d-4eb0-a128-75cd52b148a3/volumes" Dec 03 13:46:09 crc kubenswrapper[4986]: I1203 13:46:09.944129 4986 scope.go:117] "RemoveContainer" containerID="bc3d6d2a861df402c588ed3c0b6e28555a58365f9870f007a996075068d0e641" Dec 03 13:46:09 crc kubenswrapper[4986]: E1203 13:46:09.944961 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:46:17 crc 
kubenswrapper[4986]: I1203 13:46:17.018559 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Dec 03 13:46:17 crc kubenswrapper[4986]: E1203 13:46:17.019626 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbb4faa7-ae3d-4eb0-a128-75cd52b148a3" containerName="extract-content" Dec 03 13:46:17 crc kubenswrapper[4986]: I1203 13:46:17.019646 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbb4faa7-ae3d-4eb0-a128-75cd52b148a3" containerName="extract-content" Dec 03 13:46:17 crc kubenswrapper[4986]: E1203 13:46:17.019660 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbb4faa7-ae3d-4eb0-a128-75cd52b148a3" containerName="registry-server" Dec 03 13:46:17 crc kubenswrapper[4986]: I1203 13:46:17.019668 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbb4faa7-ae3d-4eb0-a128-75cd52b148a3" containerName="registry-server" Dec 03 13:46:17 crc kubenswrapper[4986]: E1203 13:46:17.019699 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbb4faa7-ae3d-4eb0-a128-75cd52b148a3" containerName="extract-utilities" Dec 03 13:46:17 crc kubenswrapper[4986]: I1203 13:46:17.019706 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbb4faa7-ae3d-4eb0-a128-75cd52b148a3" containerName="extract-utilities" Dec 03 13:46:17 crc kubenswrapper[4986]: I1203 13:46:17.019939 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbb4faa7-ae3d-4eb0-a128-75cd52b148a3" containerName="registry-server" Dec 03 13:46:17 crc kubenswrapper[4986]: I1203 13:46:17.020648 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 03 13:46:17 crc kubenswrapper[4986]: I1203 13:46:17.027069 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Dec 03 13:46:17 crc kubenswrapper[4986]: I1203 13:46:17.027175 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Dec 03 13:46:17 crc kubenswrapper[4986]: I1203 13:46:17.027234 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-lvd4x" Dec 03 13:46:17 crc kubenswrapper[4986]: I1203 13:46:17.027383 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Dec 03 13:46:17 crc kubenswrapper[4986]: I1203 13:46:17.029153 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Dec 03 13:46:17 crc kubenswrapper[4986]: I1203 13:46:17.129912 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4159adcb-0a7a-4765-ac54-186effebee8e-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"4159adcb-0a7a-4765-ac54-186effebee8e\") " pod="openstack/tempest-tests-tempest" Dec 03 13:46:17 crc kubenswrapper[4986]: I1203 13:46:17.130260 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/4159adcb-0a7a-4765-ac54-186effebee8e-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"4159adcb-0a7a-4765-ac54-186effebee8e\") " pod="openstack/tempest-tests-tempest" Dec 03 13:46:17 crc kubenswrapper[4986]: I1203 13:46:17.130430 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/secret/4159adcb-0a7a-4765-ac54-186effebee8e-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"4159adcb-0a7a-4765-ac54-186effebee8e\") " pod="openstack/tempest-tests-tempest" Dec 03 13:46:17 crc kubenswrapper[4986]: I1203 13:46:17.130616 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/4159adcb-0a7a-4765-ac54-186effebee8e-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"4159adcb-0a7a-4765-ac54-186effebee8e\") " pod="openstack/tempest-tests-tempest" Dec 03 13:46:17 crc kubenswrapper[4986]: I1203 13:46:17.130740 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4159adcb-0a7a-4765-ac54-186effebee8e-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"4159adcb-0a7a-4765-ac54-186effebee8e\") " pod="openstack/tempest-tests-tempest" Dec 03 13:46:17 crc kubenswrapper[4986]: I1203 13:46:17.130891 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"4159adcb-0a7a-4765-ac54-186effebee8e\") " pod="openstack/tempest-tests-tempest" Dec 03 13:46:17 crc kubenswrapper[4986]: I1203 13:46:17.131048 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4159adcb-0a7a-4765-ac54-186effebee8e-config-data\") pod \"tempest-tests-tempest\" (UID: \"4159adcb-0a7a-4765-ac54-186effebee8e\") " pod="openstack/tempest-tests-tempest" Dec 03 13:46:17 crc kubenswrapper[4986]: I1203 13:46:17.131201 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/4159adcb-0a7a-4765-ac54-186effebee8e-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"4159adcb-0a7a-4765-ac54-186effebee8e\") " pod="openstack/tempest-tests-tempest" Dec 03 13:46:17 crc kubenswrapper[4986]: I1203 13:46:17.131344 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4qv5\" (UniqueName: \"kubernetes.io/projected/4159adcb-0a7a-4765-ac54-186effebee8e-kube-api-access-g4qv5\") pod \"tempest-tests-tempest\" (UID: \"4159adcb-0a7a-4765-ac54-186effebee8e\") " pod="openstack/tempest-tests-tempest" Dec 03 13:46:17 crc kubenswrapper[4986]: I1203 13:46:17.232747 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"4159adcb-0a7a-4765-ac54-186effebee8e\") " pod="openstack/tempest-tests-tempest" Dec 03 13:46:17 crc kubenswrapper[4986]: I1203 13:46:17.232818 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4159adcb-0a7a-4765-ac54-186effebee8e-config-data\") pod \"tempest-tests-tempest\" (UID: \"4159adcb-0a7a-4765-ac54-186effebee8e\") " pod="openstack/tempest-tests-tempest" Dec 03 13:46:17 crc kubenswrapper[4986]: I1203 13:46:17.232863 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4159adcb-0a7a-4765-ac54-186effebee8e-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"4159adcb-0a7a-4765-ac54-186effebee8e\") " pod="openstack/tempest-tests-tempest" Dec 03 13:46:17 crc kubenswrapper[4986]: I1203 13:46:17.232894 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4qv5\" (UniqueName: \"kubernetes.io/projected/4159adcb-0a7a-4765-ac54-186effebee8e-kube-api-access-g4qv5\") pod \"tempest-tests-tempest\" (UID: 
\"4159adcb-0a7a-4765-ac54-186effebee8e\") " pod="openstack/tempest-tests-tempest" Dec 03 13:46:17 crc kubenswrapper[4986]: I1203 13:46:17.232935 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4159adcb-0a7a-4765-ac54-186effebee8e-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"4159adcb-0a7a-4765-ac54-186effebee8e\") " pod="openstack/tempest-tests-tempest" Dec 03 13:46:17 crc kubenswrapper[4986]: I1203 13:46:17.232968 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/4159adcb-0a7a-4765-ac54-186effebee8e-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"4159adcb-0a7a-4765-ac54-186effebee8e\") " pod="openstack/tempest-tests-tempest" Dec 03 13:46:17 crc kubenswrapper[4986]: I1203 13:46:17.232992 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/4159adcb-0a7a-4765-ac54-186effebee8e-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"4159adcb-0a7a-4765-ac54-186effebee8e\") " pod="openstack/tempest-tests-tempest" Dec 03 13:46:17 crc kubenswrapper[4986]: I1203 13:46:17.233046 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/4159adcb-0a7a-4765-ac54-186effebee8e-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"4159adcb-0a7a-4765-ac54-186effebee8e\") " pod="openstack/tempest-tests-tempest" Dec 03 13:46:17 crc kubenswrapper[4986]: I1203 13:46:17.233064 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4159adcb-0a7a-4765-ac54-186effebee8e-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"4159adcb-0a7a-4765-ac54-186effebee8e\") " 
pod="openstack/tempest-tests-tempest" Dec 03 13:46:17 crc kubenswrapper[4986]: I1203 13:46:17.233153 4986 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"4159adcb-0a7a-4765-ac54-186effebee8e\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/tempest-tests-tempest" Dec 03 13:46:17 crc kubenswrapper[4986]: I1203 13:46:17.234666 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4159adcb-0a7a-4765-ac54-186effebee8e-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"4159adcb-0a7a-4765-ac54-186effebee8e\") " pod="openstack/tempest-tests-tempest" Dec 03 13:46:17 crc kubenswrapper[4986]: I1203 13:46:17.235726 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/4159adcb-0a7a-4765-ac54-186effebee8e-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"4159adcb-0a7a-4765-ac54-186effebee8e\") " pod="openstack/tempest-tests-tempest" Dec 03 13:46:17 crc kubenswrapper[4986]: I1203 13:46:17.235838 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/4159adcb-0a7a-4765-ac54-186effebee8e-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"4159adcb-0a7a-4765-ac54-186effebee8e\") " pod="openstack/tempest-tests-tempest" Dec 03 13:46:17 crc kubenswrapper[4986]: I1203 13:46:17.236439 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4159adcb-0a7a-4765-ac54-186effebee8e-config-data\") pod \"tempest-tests-tempest\" (UID: \"4159adcb-0a7a-4765-ac54-186effebee8e\") " pod="openstack/tempest-tests-tempest" Dec 03 13:46:17 crc 
kubenswrapper[4986]: I1203 13:46:17.239661 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/4159adcb-0a7a-4765-ac54-186effebee8e-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"4159adcb-0a7a-4765-ac54-186effebee8e\") " pod="openstack/tempest-tests-tempest" Dec 03 13:46:17 crc kubenswrapper[4986]: I1203 13:46:17.247169 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4159adcb-0a7a-4765-ac54-186effebee8e-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"4159adcb-0a7a-4765-ac54-186effebee8e\") " pod="openstack/tempest-tests-tempest" Dec 03 13:46:17 crc kubenswrapper[4986]: I1203 13:46:17.248113 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4159adcb-0a7a-4765-ac54-186effebee8e-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"4159adcb-0a7a-4765-ac54-186effebee8e\") " pod="openstack/tempest-tests-tempest" Dec 03 13:46:17 crc kubenswrapper[4986]: I1203 13:46:17.252107 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4qv5\" (UniqueName: \"kubernetes.io/projected/4159adcb-0a7a-4765-ac54-186effebee8e-kube-api-access-g4qv5\") pod \"tempest-tests-tempest\" (UID: \"4159adcb-0a7a-4765-ac54-186effebee8e\") " pod="openstack/tempest-tests-tempest" Dec 03 13:46:17 crc kubenswrapper[4986]: I1203 13:46:17.257714 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"4159adcb-0a7a-4765-ac54-186effebee8e\") " pod="openstack/tempest-tests-tempest" Dec 03 13:46:17 crc kubenswrapper[4986]: I1203 13:46:17.342211 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 03 13:46:17 crc kubenswrapper[4986]: I1203 13:46:17.772876 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Dec 03 13:46:18 crc kubenswrapper[4986]: I1203 13:46:18.710768 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"4159adcb-0a7a-4765-ac54-186effebee8e","Type":"ContainerStarted","Data":"064547c1186df606640774d3213cf27d8bd21113e38f31ccb1c2ab464170f277"} Dec 03 13:46:22 crc kubenswrapper[4986]: I1203 13:46:22.944059 4986 scope.go:117] "RemoveContainer" containerID="bc3d6d2a861df402c588ed3c0b6e28555a58365f9870f007a996075068d0e641" Dec 03 13:46:22 crc kubenswrapper[4986]: E1203 13:46:22.946135 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:46:33 crc kubenswrapper[4986]: I1203 13:46:33.944039 4986 scope.go:117] "RemoveContainer" containerID="bc3d6d2a861df402c588ed3c0b6e28555a58365f9870f007a996075068d0e641" Dec 03 13:46:33 crc kubenswrapper[4986]: E1203 13:46:33.946591 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:46:46 crc kubenswrapper[4986]: I1203 13:46:46.943761 4986 scope.go:117] "RemoveContainer" 
containerID="bc3d6d2a861df402c588ed3c0b6e28555a58365f9870f007a996075068d0e641" Dec 03 13:46:46 crc kubenswrapper[4986]: E1203 13:46:46.944477 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:46:56 crc kubenswrapper[4986]: E1203 13:46:56.404624 4986 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Dec 03 13:46:56 crc kubenswrapper[4986]: E1203 13:46:56.405334 4986 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPa
th:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g4qv5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
tempest-tests-tempest_openstack(4159adcb-0a7a-4765-ac54-186effebee8e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 13:46:56 crc kubenswrapper[4986]: E1203 13:46:56.406687 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="4159adcb-0a7a-4765-ac54-186effebee8e" Dec 03 13:46:57 crc kubenswrapper[4986]: E1203 13:46:57.041868 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="4159adcb-0a7a-4765-ac54-186effebee8e" Dec 03 13:47:00 crc kubenswrapper[4986]: I1203 13:47:00.952750 4986 scope.go:117] "RemoveContainer" containerID="bc3d6d2a861df402c588ed3c0b6e28555a58365f9870f007a996075068d0e641" Dec 03 13:47:00 crc kubenswrapper[4986]: E1203 13:47:00.953419 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:47:10 crc kubenswrapper[4986]: I1203 13:47:10.382016 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Dec 03 13:47:11 crc kubenswrapper[4986]: I1203 13:47:11.943598 4986 scope.go:117] "RemoveContainer" containerID="bc3d6d2a861df402c588ed3c0b6e28555a58365f9870f007a996075068d0e641" Dec 03 
13:47:11 crc kubenswrapper[4986]: E1203 13:47:11.944477 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:47:12 crc kubenswrapper[4986]: I1203 13:47:12.175214 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"4159adcb-0a7a-4765-ac54-186effebee8e","Type":"ContainerStarted","Data":"4172b99320d799d9259c74f1c5320b3163d2de38b22bf2e9e3a7d62653b419e4"} Dec 03 13:47:12 crc kubenswrapper[4986]: I1203 13:47:12.198688 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.607078847 podStartE2EDuration="57.198669726s" podCreationTimestamp="2025-12-03 13:46:15 +0000 UTC" firstStartedPulling="2025-12-03 13:46:17.78747907 +0000 UTC m=+3037.253910301" lastFinishedPulling="2025-12-03 13:47:10.379069989 +0000 UTC m=+3089.845501180" observedRunningTime="2025-12-03 13:47:12.190583558 +0000 UTC m=+3091.657014749" watchObservedRunningTime="2025-12-03 13:47:12.198669726 +0000 UTC m=+3091.665100917" Dec 03 13:47:22 crc kubenswrapper[4986]: I1203 13:47:22.952707 4986 scope.go:117] "RemoveContainer" containerID="bc3d6d2a861df402c588ed3c0b6e28555a58365f9870f007a996075068d0e641" Dec 03 13:47:22 crc kubenswrapper[4986]: E1203 13:47:22.953799 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:47:36 crc kubenswrapper[4986]: I1203 13:47:36.949029 4986 scope.go:117] "RemoveContainer" containerID="bc3d6d2a861df402c588ed3c0b6e28555a58365f9870f007a996075068d0e641" Dec 03 13:47:36 crc kubenswrapper[4986]: E1203 13:47:36.949707 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:47:48 crc kubenswrapper[4986]: I1203 13:47:48.943820 4986 scope.go:117] "RemoveContainer" containerID="bc3d6d2a861df402c588ed3c0b6e28555a58365f9870f007a996075068d0e641" Dec 03 13:47:48 crc kubenswrapper[4986]: E1203 13:47:48.944680 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:48:00 crc kubenswrapper[4986]: I1203 13:48:00.950685 4986 scope.go:117] "RemoveContainer" containerID="bc3d6d2a861df402c588ed3c0b6e28555a58365f9870f007a996075068d0e641" Dec 03 13:48:00 crc kubenswrapper[4986]: E1203 13:48:00.951380 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:48:13 crc kubenswrapper[4986]: I1203 13:48:13.943430 4986 scope.go:117] "RemoveContainer" containerID="bc3d6d2a861df402c588ed3c0b6e28555a58365f9870f007a996075068d0e641" Dec 03 13:48:13 crc kubenswrapper[4986]: E1203 13:48:13.944127 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:48:27 crc kubenswrapper[4986]: I1203 13:48:27.944629 4986 scope.go:117] "RemoveContainer" containerID="bc3d6d2a861df402c588ed3c0b6e28555a58365f9870f007a996075068d0e641" Dec 03 13:48:27 crc kubenswrapper[4986]: E1203 13:48:27.945428 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:48:41 crc kubenswrapper[4986]: I1203 13:48:41.943675 4986 scope.go:117] "RemoveContainer" containerID="bc3d6d2a861df402c588ed3c0b6e28555a58365f9870f007a996075068d0e641" Dec 03 13:48:41 crc kubenswrapper[4986]: E1203 13:48:41.944382 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:48:56 crc kubenswrapper[4986]: I1203 13:48:56.945090 4986 scope.go:117] "RemoveContainer" containerID="bc3d6d2a861df402c588ed3c0b6e28555a58365f9870f007a996075068d0e641" Dec 03 13:48:56 crc kubenswrapper[4986]: E1203 13:48:56.946474 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:48:57 crc kubenswrapper[4986]: I1203 13:48:57.830778 4986 scope.go:117] "RemoveContainer" containerID="4a8bf8d89a4a538400616ffc16a2c1e10fa2de2cfbdd73b2ca0256f45eae4b65" Dec 03 13:48:57 crc kubenswrapper[4986]: I1203 13:48:57.860164 4986 scope.go:117] "RemoveContainer" containerID="c81b41faf3d92b46c7f4a8134e34440da9b73fa0e7f084b346e81b00dc32c9cb" Dec 03 13:48:57 crc kubenswrapper[4986]: I1203 13:48:57.898714 4986 scope.go:117] "RemoveContainer" containerID="b4a8ac70c8f14be563edbb5fa0995a79b29fd23e8ebc1c11e7ce710548086c6a" Dec 03 13:49:11 crc kubenswrapper[4986]: I1203 13:49:11.943635 4986 scope.go:117] "RemoveContainer" containerID="bc3d6d2a861df402c588ed3c0b6e28555a58365f9870f007a996075068d0e641" Dec 03 13:49:11 crc kubenswrapper[4986]: E1203 13:49:11.944447 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:49:23 crc kubenswrapper[4986]: I1203 13:49:23.943325 4986 scope.go:117] "RemoveContainer" containerID="bc3d6d2a861df402c588ed3c0b6e28555a58365f9870f007a996075068d0e641" Dec 03 13:49:23 crc kubenswrapper[4986]: E1203 13:49:23.944057 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:49:34 crc kubenswrapper[4986]: I1203 13:49:34.943549 4986 scope.go:117] "RemoveContainer" containerID="bc3d6d2a861df402c588ed3c0b6e28555a58365f9870f007a996075068d0e641" Dec 03 13:49:34 crc kubenswrapper[4986]: E1203 13:49:34.944317 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:49:49 crc kubenswrapper[4986]: I1203 13:49:49.943081 4986 scope.go:117] "RemoveContainer" containerID="bc3d6d2a861df402c588ed3c0b6e28555a58365f9870f007a996075068d0e641" Dec 03 13:49:49 crc kubenswrapper[4986]: E1203 13:49:49.943887 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:50:01 crc kubenswrapper[4986]: I1203 13:50:01.944351 4986 scope.go:117] "RemoveContainer" containerID="bc3d6d2a861df402c588ed3c0b6e28555a58365f9870f007a996075068d0e641" Dec 03 13:50:01 crc kubenswrapper[4986]: E1203 13:50:01.945493 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:50:12 crc kubenswrapper[4986]: I1203 13:50:12.945001 4986 scope.go:117] "RemoveContainer" containerID="bc3d6d2a861df402c588ed3c0b6e28555a58365f9870f007a996075068d0e641" Dec 03 13:50:13 crc kubenswrapper[4986]: I1203 13:50:13.865695 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" event={"ID":"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c","Type":"ContainerStarted","Data":"958267ff706c66b6b925066f50b19f9b728e7a63b4775cbe35b93517f57da144"} Dec 03 13:52:33 crc kubenswrapper[4986]: I1203 13:52:33.491075 4986 patch_prober.go:28] interesting pod/machine-config-daemon-xggpj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 13:52:33 crc kubenswrapper[4986]: I1203 13:52:33.491687 4986 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" 
podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 13:53:03 crc kubenswrapper[4986]: I1203 13:53:03.491543 4986 patch_prober.go:28] interesting pod/machine-config-daemon-xggpj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 13:53:03 crc kubenswrapper[4986]: I1203 13:53:03.492121 4986 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 13:53:28 crc kubenswrapper[4986]: I1203 13:53:28.415955 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vckx8"] Dec 03 13:53:28 crc kubenswrapper[4986]: I1203 13:53:28.418816 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vckx8" Dec 03 13:53:28 crc kubenswrapper[4986]: I1203 13:53:28.437911 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vckx8"] Dec 03 13:53:28 crc kubenswrapper[4986]: I1203 13:53:28.581100 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fthgz\" (UniqueName: \"kubernetes.io/projected/4302fded-456a-4d38-9d8c-310c9cc554a4-kube-api-access-fthgz\") pod \"community-operators-vckx8\" (UID: \"4302fded-456a-4d38-9d8c-310c9cc554a4\") " pod="openshift-marketplace/community-operators-vckx8" Dec 03 13:53:28 crc kubenswrapper[4986]: I1203 13:53:28.581265 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4302fded-456a-4d38-9d8c-310c9cc554a4-utilities\") pod \"community-operators-vckx8\" (UID: \"4302fded-456a-4d38-9d8c-310c9cc554a4\") " pod="openshift-marketplace/community-operators-vckx8" Dec 03 13:53:28 crc kubenswrapper[4986]: I1203 13:53:28.581394 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4302fded-456a-4d38-9d8c-310c9cc554a4-catalog-content\") pod \"community-operators-vckx8\" (UID: \"4302fded-456a-4d38-9d8c-310c9cc554a4\") " pod="openshift-marketplace/community-operators-vckx8" Dec 03 13:53:28 crc kubenswrapper[4986]: I1203 13:53:28.682835 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fthgz\" (UniqueName: \"kubernetes.io/projected/4302fded-456a-4d38-9d8c-310c9cc554a4-kube-api-access-fthgz\") pod \"community-operators-vckx8\" (UID: \"4302fded-456a-4d38-9d8c-310c9cc554a4\") " pod="openshift-marketplace/community-operators-vckx8" Dec 03 13:53:28 crc kubenswrapper[4986]: I1203 13:53:28.682937 4986 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4302fded-456a-4d38-9d8c-310c9cc554a4-utilities\") pod \"community-operators-vckx8\" (UID: \"4302fded-456a-4d38-9d8c-310c9cc554a4\") " pod="openshift-marketplace/community-operators-vckx8" Dec 03 13:53:28 crc kubenswrapper[4986]: I1203 13:53:28.682974 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4302fded-456a-4d38-9d8c-310c9cc554a4-catalog-content\") pod \"community-operators-vckx8\" (UID: \"4302fded-456a-4d38-9d8c-310c9cc554a4\") " pod="openshift-marketplace/community-operators-vckx8" Dec 03 13:53:28 crc kubenswrapper[4986]: I1203 13:53:28.683566 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4302fded-456a-4d38-9d8c-310c9cc554a4-utilities\") pod \"community-operators-vckx8\" (UID: \"4302fded-456a-4d38-9d8c-310c9cc554a4\") " pod="openshift-marketplace/community-operators-vckx8" Dec 03 13:53:28 crc kubenswrapper[4986]: I1203 13:53:28.683622 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4302fded-456a-4d38-9d8c-310c9cc554a4-catalog-content\") pod \"community-operators-vckx8\" (UID: \"4302fded-456a-4d38-9d8c-310c9cc554a4\") " pod="openshift-marketplace/community-operators-vckx8" Dec 03 13:53:28 crc kubenswrapper[4986]: I1203 13:53:28.711920 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fthgz\" (UniqueName: \"kubernetes.io/projected/4302fded-456a-4d38-9d8c-310c9cc554a4-kube-api-access-fthgz\") pod \"community-operators-vckx8\" (UID: \"4302fded-456a-4d38-9d8c-310c9cc554a4\") " pod="openshift-marketplace/community-operators-vckx8" Dec 03 13:53:28 crc kubenswrapper[4986]: I1203 13:53:28.742370 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vckx8" Dec 03 13:53:29 crc kubenswrapper[4986]: I1203 13:53:29.320959 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vckx8"] Dec 03 13:53:29 crc kubenswrapper[4986]: I1203 13:53:29.815908 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vckx8" event={"ID":"4302fded-456a-4d38-9d8c-310c9cc554a4","Type":"ContainerStarted","Data":"4fd080f5a870ed321ab2756c49374b9a43167f6a97e46dd8d95e58fe5773b045"} Dec 03 13:53:31 crc kubenswrapper[4986]: I1203 13:53:31.399905 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-f9c5x"] Dec 03 13:53:31 crc kubenswrapper[4986]: I1203 13:53:31.402618 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f9c5x" Dec 03 13:53:31 crc kubenswrapper[4986]: I1203 13:53:31.410540 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f9c5x"] Dec 03 13:53:31 crc kubenswrapper[4986]: I1203 13:53:31.542549 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9979\" (UniqueName: \"kubernetes.io/projected/d9a135c3-d98f-46b0-9980-96cfad5859ae-kube-api-access-z9979\") pod \"redhat-marketplace-f9c5x\" (UID: \"d9a135c3-d98f-46b0-9980-96cfad5859ae\") " pod="openshift-marketplace/redhat-marketplace-f9c5x" Dec 03 13:53:31 crc kubenswrapper[4986]: I1203 13:53:31.542633 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9a135c3-d98f-46b0-9980-96cfad5859ae-utilities\") pod \"redhat-marketplace-f9c5x\" (UID: \"d9a135c3-d98f-46b0-9980-96cfad5859ae\") " pod="openshift-marketplace/redhat-marketplace-f9c5x" Dec 03 13:53:31 crc kubenswrapper[4986]: I1203 13:53:31.542768 
4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9a135c3-d98f-46b0-9980-96cfad5859ae-catalog-content\") pod \"redhat-marketplace-f9c5x\" (UID: \"d9a135c3-d98f-46b0-9980-96cfad5859ae\") " pod="openshift-marketplace/redhat-marketplace-f9c5x" Dec 03 13:53:31 crc kubenswrapper[4986]: I1203 13:53:31.644900 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9979\" (UniqueName: \"kubernetes.io/projected/d9a135c3-d98f-46b0-9980-96cfad5859ae-kube-api-access-z9979\") pod \"redhat-marketplace-f9c5x\" (UID: \"d9a135c3-d98f-46b0-9980-96cfad5859ae\") " pod="openshift-marketplace/redhat-marketplace-f9c5x" Dec 03 13:53:31 crc kubenswrapper[4986]: I1203 13:53:31.645351 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9a135c3-d98f-46b0-9980-96cfad5859ae-utilities\") pod \"redhat-marketplace-f9c5x\" (UID: \"d9a135c3-d98f-46b0-9980-96cfad5859ae\") " pod="openshift-marketplace/redhat-marketplace-f9c5x" Dec 03 13:53:31 crc kubenswrapper[4986]: I1203 13:53:31.645389 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9a135c3-d98f-46b0-9980-96cfad5859ae-catalog-content\") pod \"redhat-marketplace-f9c5x\" (UID: \"d9a135c3-d98f-46b0-9980-96cfad5859ae\") " pod="openshift-marketplace/redhat-marketplace-f9c5x" Dec 03 13:53:31 crc kubenswrapper[4986]: I1203 13:53:31.646140 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9a135c3-d98f-46b0-9980-96cfad5859ae-utilities\") pod \"redhat-marketplace-f9c5x\" (UID: \"d9a135c3-d98f-46b0-9980-96cfad5859ae\") " pod="openshift-marketplace/redhat-marketplace-f9c5x" Dec 03 13:53:31 crc kubenswrapper[4986]: I1203 13:53:31.646198 4986 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9a135c3-d98f-46b0-9980-96cfad5859ae-catalog-content\") pod \"redhat-marketplace-f9c5x\" (UID: \"d9a135c3-d98f-46b0-9980-96cfad5859ae\") " pod="openshift-marketplace/redhat-marketplace-f9c5x" Dec 03 13:53:31 crc kubenswrapper[4986]: I1203 13:53:31.667329 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9979\" (UniqueName: \"kubernetes.io/projected/d9a135c3-d98f-46b0-9980-96cfad5859ae-kube-api-access-z9979\") pod \"redhat-marketplace-f9c5x\" (UID: \"d9a135c3-d98f-46b0-9980-96cfad5859ae\") " pod="openshift-marketplace/redhat-marketplace-f9c5x" Dec 03 13:53:31 crc kubenswrapper[4986]: I1203 13:53:31.732083 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f9c5x" Dec 03 13:53:31 crc kubenswrapper[4986]: I1203 13:53:31.839161 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vckx8" event={"ID":"4302fded-456a-4d38-9d8c-310c9cc554a4","Type":"ContainerStarted","Data":"6461b9a73f9f8e8163fe1f264cfc041218fbdea00cd2a8368087a98740aa927f"} Dec 03 13:53:32 crc kubenswrapper[4986]: I1203 13:53:32.213851 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f9c5x"] Dec 03 13:53:32 crc kubenswrapper[4986]: I1203 13:53:32.849892 4986 generic.go:334] "Generic (PLEG): container finished" podID="d9a135c3-d98f-46b0-9980-96cfad5859ae" containerID="0429c44fc9475a1e4f0d6d1967123e277b3c9db0b28f3a639ea900bb4531d43c" exitCode=0 Dec 03 13:53:32 crc kubenswrapper[4986]: I1203 13:53:32.849961 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f9c5x" event={"ID":"d9a135c3-d98f-46b0-9980-96cfad5859ae","Type":"ContainerDied","Data":"0429c44fc9475a1e4f0d6d1967123e277b3c9db0b28f3a639ea900bb4531d43c"} Dec 03 13:53:32 
crc kubenswrapper[4986]: I1203 13:53:32.850353 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f9c5x" event={"ID":"d9a135c3-d98f-46b0-9980-96cfad5859ae","Type":"ContainerStarted","Data":"af42fa5f154386013cd82747a559850406b7a2752653470df6f9410c760a9b18"} Dec 03 13:53:32 crc kubenswrapper[4986]: I1203 13:53:32.852356 4986 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 13:53:32 crc kubenswrapper[4986]: I1203 13:53:32.853637 4986 generic.go:334] "Generic (PLEG): container finished" podID="4302fded-456a-4d38-9d8c-310c9cc554a4" containerID="6461b9a73f9f8e8163fe1f264cfc041218fbdea00cd2a8368087a98740aa927f" exitCode=0 Dec 03 13:53:32 crc kubenswrapper[4986]: I1203 13:53:32.853687 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vckx8" event={"ID":"4302fded-456a-4d38-9d8c-310c9cc554a4","Type":"ContainerDied","Data":"6461b9a73f9f8e8163fe1f264cfc041218fbdea00cd2a8368087a98740aa927f"} Dec 03 13:53:33 crc kubenswrapper[4986]: I1203 13:53:33.491743 4986 patch_prober.go:28] interesting pod/machine-config-daemon-xggpj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 13:53:33 crc kubenswrapper[4986]: I1203 13:53:33.491868 4986 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 13:53:33 crc kubenswrapper[4986]: I1203 13:53:33.492036 4986 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-xggpj" Dec 03 13:53:33 crc kubenswrapper[4986]: I1203 13:53:33.493167 4986 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"958267ff706c66b6b925066f50b19f9b728e7a63b4775cbe35b93517f57da144"} pod="openshift-machine-config-operator/machine-config-daemon-xggpj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 13:53:33 crc kubenswrapper[4986]: I1203 13:53:33.493332 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" containerID="cri-o://958267ff706c66b6b925066f50b19f9b728e7a63b4775cbe35b93517f57da144" gracePeriod=600 Dec 03 13:53:34 crc kubenswrapper[4986]: I1203 13:53:34.884780 4986 generic.go:334] "Generic (PLEG): container finished" podID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerID="958267ff706c66b6b925066f50b19f9b728e7a63b4775cbe35b93517f57da144" exitCode=0 Dec 03 13:53:34 crc kubenswrapper[4986]: I1203 13:53:34.884859 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" event={"ID":"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c","Type":"ContainerDied","Data":"958267ff706c66b6b925066f50b19f9b728e7a63b4775cbe35b93517f57da144"} Dec 03 13:53:34 crc kubenswrapper[4986]: I1203 13:53:34.885133 4986 scope.go:117] "RemoveContainer" containerID="bc3d6d2a861df402c588ed3c0b6e28555a58365f9870f007a996075068d0e641" Dec 03 13:53:35 crc kubenswrapper[4986]: I1203 13:53:35.895889 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" 
event={"ID":"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c","Type":"ContainerStarted","Data":"f4aa16a78e6dc7399cb3f0bba100cc3ca3d479a66dc5b2e6c12ee775b73c7cc4"} Dec 03 13:53:36 crc kubenswrapper[4986]: I1203 13:53:36.907719 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f9c5x" event={"ID":"d9a135c3-d98f-46b0-9980-96cfad5859ae","Type":"ContainerStarted","Data":"e692d45fe624986aa8a2c814d699164199c3cc6b92157e898694a95130caf921"} Dec 03 13:53:36 crc kubenswrapper[4986]: I1203 13:53:36.911422 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vckx8" event={"ID":"4302fded-456a-4d38-9d8c-310c9cc554a4","Type":"ContainerStarted","Data":"391ec1bce019c86d112ed9fdbcde6901d8066351a726b259955540f361053e04"} Dec 03 13:53:38 crc kubenswrapper[4986]: I1203 13:53:38.928358 4986 generic.go:334] "Generic (PLEG): container finished" podID="d9a135c3-d98f-46b0-9980-96cfad5859ae" containerID="e692d45fe624986aa8a2c814d699164199c3cc6b92157e898694a95130caf921" exitCode=0 Dec 03 13:53:38 crc kubenswrapper[4986]: I1203 13:53:38.928424 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f9c5x" event={"ID":"d9a135c3-d98f-46b0-9980-96cfad5859ae","Type":"ContainerDied","Data":"e692d45fe624986aa8a2c814d699164199c3cc6b92157e898694a95130caf921"} Dec 03 13:53:38 crc kubenswrapper[4986]: I1203 13:53:38.930791 4986 generic.go:334] "Generic (PLEG): container finished" podID="4302fded-456a-4d38-9d8c-310c9cc554a4" containerID="391ec1bce019c86d112ed9fdbcde6901d8066351a726b259955540f361053e04" exitCode=0 Dec 03 13:53:38 crc kubenswrapper[4986]: I1203 13:53:38.930810 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vckx8" event={"ID":"4302fded-456a-4d38-9d8c-310c9cc554a4","Type":"ContainerDied","Data":"391ec1bce019c86d112ed9fdbcde6901d8066351a726b259955540f361053e04"} Dec 03 13:53:39 crc kubenswrapper[4986]: I1203 
13:53:39.944255 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vckx8" event={"ID":"4302fded-456a-4d38-9d8c-310c9cc554a4","Type":"ContainerStarted","Data":"73b11e450a3ff5f685a61ee0033295bcc93ffa6dcbd8f91154d99a204f53ea5a"} Dec 03 13:53:39 crc kubenswrapper[4986]: I1203 13:53:39.949272 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f9c5x" event={"ID":"d9a135c3-d98f-46b0-9980-96cfad5859ae","Type":"ContainerStarted","Data":"b389c9db139e9cdf93dddcef6ad5cd0ad8d82d97f5ddae08a24eb82de329b227"} Dec 03 13:53:39 crc kubenswrapper[4986]: I1203 13:53:39.963839 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vckx8" podStartSLOduration=5.111177883 podStartE2EDuration="11.963818617s" podCreationTimestamp="2025-12-03 13:53:28 +0000 UTC" firstStartedPulling="2025-12-03 13:53:32.856658265 +0000 UTC m=+3472.323089456" lastFinishedPulling="2025-12-03 13:53:39.709298999 +0000 UTC m=+3479.175730190" observedRunningTime="2025-12-03 13:53:39.960504216 +0000 UTC m=+3479.426935417" watchObservedRunningTime="2025-12-03 13:53:39.963818617 +0000 UTC m=+3479.430249808" Dec 03 13:53:39 crc kubenswrapper[4986]: I1203 13:53:39.978150 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-f9c5x" podStartSLOduration=2.435640985 podStartE2EDuration="8.978132495s" podCreationTimestamp="2025-12-03 13:53:31 +0000 UTC" firstStartedPulling="2025-12-03 13:53:32.851853994 +0000 UTC m=+3472.318285185" lastFinishedPulling="2025-12-03 13:53:39.394345504 +0000 UTC m=+3478.860776695" observedRunningTime="2025-12-03 13:53:39.976223883 +0000 UTC m=+3479.442655084" watchObservedRunningTime="2025-12-03 13:53:39.978132495 +0000 UTC m=+3479.444563686" Dec 03 13:53:41 crc kubenswrapper[4986]: I1203 13:53:41.732634 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/redhat-marketplace-f9c5x" Dec 03 13:53:41 crc kubenswrapper[4986]: I1203 13:53:41.732958 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-f9c5x" Dec 03 13:53:41 crc kubenswrapper[4986]: I1203 13:53:41.801393 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-f9c5x" Dec 03 13:53:48 crc kubenswrapper[4986]: I1203 13:53:48.743697 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vckx8" Dec 03 13:53:48 crc kubenswrapper[4986]: I1203 13:53:48.745386 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vckx8" Dec 03 13:53:48 crc kubenswrapper[4986]: I1203 13:53:48.800179 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vckx8" Dec 03 13:53:49 crc kubenswrapper[4986]: I1203 13:53:49.074009 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vckx8" Dec 03 13:53:49 crc kubenswrapper[4986]: I1203 13:53:49.134677 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vckx8"] Dec 03 13:53:51 crc kubenswrapper[4986]: I1203 13:53:51.047617 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vckx8" podUID="4302fded-456a-4d38-9d8c-310c9cc554a4" containerName="registry-server" containerID="cri-o://73b11e450a3ff5f685a61ee0033295bcc93ffa6dcbd8f91154d99a204f53ea5a" gracePeriod=2 Dec 03 13:53:51 crc kubenswrapper[4986]: I1203 13:53:51.528150 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vckx8" Dec 03 13:53:51 crc kubenswrapper[4986]: I1203 13:53:51.628666 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fthgz\" (UniqueName: \"kubernetes.io/projected/4302fded-456a-4d38-9d8c-310c9cc554a4-kube-api-access-fthgz\") pod \"4302fded-456a-4d38-9d8c-310c9cc554a4\" (UID: \"4302fded-456a-4d38-9d8c-310c9cc554a4\") " Dec 03 13:53:51 crc kubenswrapper[4986]: I1203 13:53:51.628738 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4302fded-456a-4d38-9d8c-310c9cc554a4-catalog-content\") pod \"4302fded-456a-4d38-9d8c-310c9cc554a4\" (UID: \"4302fded-456a-4d38-9d8c-310c9cc554a4\") " Dec 03 13:53:51 crc kubenswrapper[4986]: I1203 13:53:51.628832 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4302fded-456a-4d38-9d8c-310c9cc554a4-utilities\") pod \"4302fded-456a-4d38-9d8c-310c9cc554a4\" (UID: \"4302fded-456a-4d38-9d8c-310c9cc554a4\") " Dec 03 13:53:51 crc kubenswrapper[4986]: I1203 13:53:51.629801 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4302fded-456a-4d38-9d8c-310c9cc554a4-utilities" (OuterVolumeSpecName: "utilities") pod "4302fded-456a-4d38-9d8c-310c9cc554a4" (UID: "4302fded-456a-4d38-9d8c-310c9cc554a4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:53:51 crc kubenswrapper[4986]: I1203 13:53:51.636706 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4302fded-456a-4d38-9d8c-310c9cc554a4-kube-api-access-fthgz" (OuterVolumeSpecName: "kube-api-access-fthgz") pod "4302fded-456a-4d38-9d8c-310c9cc554a4" (UID: "4302fded-456a-4d38-9d8c-310c9cc554a4"). InnerVolumeSpecName "kube-api-access-fthgz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:53:51 crc kubenswrapper[4986]: I1203 13:53:51.687942 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4302fded-456a-4d38-9d8c-310c9cc554a4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4302fded-456a-4d38-9d8c-310c9cc554a4" (UID: "4302fded-456a-4d38-9d8c-310c9cc554a4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:53:51 crc kubenswrapper[4986]: I1203 13:53:51.731715 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fthgz\" (UniqueName: \"kubernetes.io/projected/4302fded-456a-4d38-9d8c-310c9cc554a4-kube-api-access-fthgz\") on node \"crc\" DevicePath \"\"" Dec 03 13:53:51 crc kubenswrapper[4986]: I1203 13:53:51.731760 4986 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4302fded-456a-4d38-9d8c-310c9cc554a4-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 13:53:51 crc kubenswrapper[4986]: I1203 13:53:51.731773 4986 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4302fded-456a-4d38-9d8c-310c9cc554a4-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 13:53:51 crc kubenswrapper[4986]: I1203 13:53:51.781078 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-f9c5x" Dec 03 13:53:52 crc kubenswrapper[4986]: I1203 13:53:52.059335 4986 generic.go:334] "Generic (PLEG): container finished" podID="4302fded-456a-4d38-9d8c-310c9cc554a4" containerID="73b11e450a3ff5f685a61ee0033295bcc93ffa6dcbd8f91154d99a204f53ea5a" exitCode=0 Dec 03 13:53:52 crc kubenswrapper[4986]: I1203 13:53:52.059576 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vckx8" Dec 03 13:53:52 crc kubenswrapper[4986]: I1203 13:53:52.059608 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vckx8" event={"ID":"4302fded-456a-4d38-9d8c-310c9cc554a4","Type":"ContainerDied","Data":"73b11e450a3ff5f685a61ee0033295bcc93ffa6dcbd8f91154d99a204f53ea5a"} Dec 03 13:53:52 crc kubenswrapper[4986]: I1203 13:53:52.061415 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vckx8" event={"ID":"4302fded-456a-4d38-9d8c-310c9cc554a4","Type":"ContainerDied","Data":"4fd080f5a870ed321ab2756c49374b9a43167f6a97e46dd8d95e58fe5773b045"} Dec 03 13:53:52 crc kubenswrapper[4986]: I1203 13:53:52.061455 4986 scope.go:117] "RemoveContainer" containerID="73b11e450a3ff5f685a61ee0033295bcc93ffa6dcbd8f91154d99a204f53ea5a" Dec 03 13:53:52 crc kubenswrapper[4986]: I1203 13:53:52.089591 4986 scope.go:117] "RemoveContainer" containerID="391ec1bce019c86d112ed9fdbcde6901d8066351a726b259955540f361053e04" Dec 03 13:53:52 crc kubenswrapper[4986]: I1203 13:53:52.102710 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vckx8"] Dec 03 13:53:52 crc kubenswrapper[4986]: I1203 13:53:52.111586 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vckx8"] Dec 03 13:53:52 crc kubenswrapper[4986]: I1203 13:53:52.132946 4986 scope.go:117] "RemoveContainer" containerID="6461b9a73f9f8e8163fe1f264cfc041218fbdea00cd2a8368087a98740aa927f" Dec 03 13:53:52 crc kubenswrapper[4986]: I1203 13:53:52.169822 4986 scope.go:117] "RemoveContainer" containerID="73b11e450a3ff5f685a61ee0033295bcc93ffa6dcbd8f91154d99a204f53ea5a" Dec 03 13:53:52 crc kubenswrapper[4986]: E1203 13:53:52.170563 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"73b11e450a3ff5f685a61ee0033295bcc93ffa6dcbd8f91154d99a204f53ea5a\": container with ID starting with 73b11e450a3ff5f685a61ee0033295bcc93ffa6dcbd8f91154d99a204f53ea5a not found: ID does not exist" containerID="73b11e450a3ff5f685a61ee0033295bcc93ffa6dcbd8f91154d99a204f53ea5a" Dec 03 13:53:52 crc kubenswrapper[4986]: I1203 13:53:52.170611 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73b11e450a3ff5f685a61ee0033295bcc93ffa6dcbd8f91154d99a204f53ea5a"} err="failed to get container status \"73b11e450a3ff5f685a61ee0033295bcc93ffa6dcbd8f91154d99a204f53ea5a\": rpc error: code = NotFound desc = could not find container \"73b11e450a3ff5f685a61ee0033295bcc93ffa6dcbd8f91154d99a204f53ea5a\": container with ID starting with 73b11e450a3ff5f685a61ee0033295bcc93ffa6dcbd8f91154d99a204f53ea5a not found: ID does not exist" Dec 03 13:53:52 crc kubenswrapper[4986]: I1203 13:53:52.170644 4986 scope.go:117] "RemoveContainer" containerID="391ec1bce019c86d112ed9fdbcde6901d8066351a726b259955540f361053e04" Dec 03 13:53:52 crc kubenswrapper[4986]: E1203 13:53:52.175002 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"391ec1bce019c86d112ed9fdbcde6901d8066351a726b259955540f361053e04\": container with ID starting with 391ec1bce019c86d112ed9fdbcde6901d8066351a726b259955540f361053e04 not found: ID does not exist" containerID="391ec1bce019c86d112ed9fdbcde6901d8066351a726b259955540f361053e04" Dec 03 13:53:52 crc kubenswrapper[4986]: I1203 13:53:52.175043 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"391ec1bce019c86d112ed9fdbcde6901d8066351a726b259955540f361053e04"} err="failed to get container status \"391ec1bce019c86d112ed9fdbcde6901d8066351a726b259955540f361053e04\": rpc error: code = NotFound desc = could not find container \"391ec1bce019c86d112ed9fdbcde6901d8066351a726b259955540f361053e04\": container with ID 
starting with 391ec1bce019c86d112ed9fdbcde6901d8066351a726b259955540f361053e04 not found: ID does not exist" Dec 03 13:53:52 crc kubenswrapper[4986]: I1203 13:53:52.175071 4986 scope.go:117] "RemoveContainer" containerID="6461b9a73f9f8e8163fe1f264cfc041218fbdea00cd2a8368087a98740aa927f" Dec 03 13:53:52 crc kubenswrapper[4986]: E1203 13:53:52.175442 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6461b9a73f9f8e8163fe1f264cfc041218fbdea00cd2a8368087a98740aa927f\": container with ID starting with 6461b9a73f9f8e8163fe1f264cfc041218fbdea00cd2a8368087a98740aa927f not found: ID does not exist" containerID="6461b9a73f9f8e8163fe1f264cfc041218fbdea00cd2a8368087a98740aa927f" Dec 03 13:53:52 crc kubenswrapper[4986]: I1203 13:53:52.175463 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6461b9a73f9f8e8163fe1f264cfc041218fbdea00cd2a8368087a98740aa927f"} err="failed to get container status \"6461b9a73f9f8e8163fe1f264cfc041218fbdea00cd2a8368087a98740aa927f\": rpc error: code = NotFound desc = could not find container \"6461b9a73f9f8e8163fe1f264cfc041218fbdea00cd2a8368087a98740aa927f\": container with ID starting with 6461b9a73f9f8e8163fe1f264cfc041218fbdea00cd2a8368087a98740aa927f not found: ID does not exist" Dec 03 13:53:52 crc kubenswrapper[4986]: I1203 13:53:52.638119 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f9c5x"] Dec 03 13:53:52 crc kubenswrapper[4986]: I1203 13:53:52.638619 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-f9c5x" podUID="d9a135c3-d98f-46b0-9980-96cfad5859ae" containerName="registry-server" containerID="cri-o://b389c9db139e9cdf93dddcef6ad5cd0ad8d82d97f5ddae08a24eb82de329b227" gracePeriod=2 Dec 03 13:53:52 crc kubenswrapper[4986]: I1203 13:53:52.956874 4986 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="4302fded-456a-4d38-9d8c-310c9cc554a4" path="/var/lib/kubelet/pods/4302fded-456a-4d38-9d8c-310c9cc554a4/volumes" Dec 03 13:53:53 crc kubenswrapper[4986]: I1203 13:53:53.071825 4986 generic.go:334] "Generic (PLEG): container finished" podID="d9a135c3-d98f-46b0-9980-96cfad5859ae" containerID="b389c9db139e9cdf93dddcef6ad5cd0ad8d82d97f5ddae08a24eb82de329b227" exitCode=0 Dec 03 13:53:53 crc kubenswrapper[4986]: I1203 13:53:53.071905 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f9c5x" event={"ID":"d9a135c3-d98f-46b0-9980-96cfad5859ae","Type":"ContainerDied","Data":"b389c9db139e9cdf93dddcef6ad5cd0ad8d82d97f5ddae08a24eb82de329b227"} Dec 03 13:53:53 crc kubenswrapper[4986]: I1203 13:53:53.071988 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f9c5x" event={"ID":"d9a135c3-d98f-46b0-9980-96cfad5859ae","Type":"ContainerDied","Data":"af42fa5f154386013cd82747a559850406b7a2752653470df6f9410c760a9b18"} Dec 03 13:53:53 crc kubenswrapper[4986]: I1203 13:53:53.072006 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af42fa5f154386013cd82747a559850406b7a2752653470df6f9410c760a9b18" Dec 03 13:53:53 crc kubenswrapper[4986]: I1203 13:53:53.099485 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f9c5x" Dec 03 13:53:53 crc kubenswrapper[4986]: I1203 13:53:53.260645 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9979\" (UniqueName: \"kubernetes.io/projected/d9a135c3-d98f-46b0-9980-96cfad5859ae-kube-api-access-z9979\") pod \"d9a135c3-d98f-46b0-9980-96cfad5859ae\" (UID: \"d9a135c3-d98f-46b0-9980-96cfad5859ae\") " Dec 03 13:53:53 crc kubenswrapper[4986]: I1203 13:53:53.260817 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9a135c3-d98f-46b0-9980-96cfad5859ae-catalog-content\") pod \"d9a135c3-d98f-46b0-9980-96cfad5859ae\" (UID: \"d9a135c3-d98f-46b0-9980-96cfad5859ae\") " Dec 03 13:53:53 crc kubenswrapper[4986]: I1203 13:53:53.260859 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9a135c3-d98f-46b0-9980-96cfad5859ae-utilities\") pod \"d9a135c3-d98f-46b0-9980-96cfad5859ae\" (UID: \"d9a135c3-d98f-46b0-9980-96cfad5859ae\") " Dec 03 13:53:53 crc kubenswrapper[4986]: I1203 13:53:53.261929 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9a135c3-d98f-46b0-9980-96cfad5859ae-utilities" (OuterVolumeSpecName: "utilities") pod "d9a135c3-d98f-46b0-9980-96cfad5859ae" (UID: "d9a135c3-d98f-46b0-9980-96cfad5859ae"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:53:53 crc kubenswrapper[4986]: I1203 13:53:53.266407 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9a135c3-d98f-46b0-9980-96cfad5859ae-kube-api-access-z9979" (OuterVolumeSpecName: "kube-api-access-z9979") pod "d9a135c3-d98f-46b0-9980-96cfad5859ae" (UID: "d9a135c3-d98f-46b0-9980-96cfad5859ae"). InnerVolumeSpecName "kube-api-access-z9979". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:53:53 crc kubenswrapper[4986]: I1203 13:53:53.281781 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9a135c3-d98f-46b0-9980-96cfad5859ae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d9a135c3-d98f-46b0-9980-96cfad5859ae" (UID: "d9a135c3-d98f-46b0-9980-96cfad5859ae"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:53:53 crc kubenswrapper[4986]: I1203 13:53:53.363370 4986 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9a135c3-d98f-46b0-9980-96cfad5859ae-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 13:53:53 crc kubenswrapper[4986]: I1203 13:53:53.363404 4986 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9a135c3-d98f-46b0-9980-96cfad5859ae-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 13:53:53 crc kubenswrapper[4986]: I1203 13:53:53.363430 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9979\" (UniqueName: \"kubernetes.io/projected/d9a135c3-d98f-46b0-9980-96cfad5859ae-kube-api-access-z9979\") on node \"crc\" DevicePath \"\"" Dec 03 13:53:54 crc kubenswrapper[4986]: I1203 13:53:54.084071 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f9c5x" Dec 03 13:53:54 crc kubenswrapper[4986]: I1203 13:53:54.127476 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f9c5x"] Dec 03 13:53:54 crc kubenswrapper[4986]: I1203 13:53:54.135029 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-f9c5x"] Dec 03 13:53:54 crc kubenswrapper[4986]: I1203 13:53:54.955069 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9a135c3-d98f-46b0-9980-96cfad5859ae" path="/var/lib/kubelet/pods/d9a135c3-d98f-46b0-9980-96cfad5859ae/volumes" Dec 03 13:55:02 crc kubenswrapper[4986]: I1203 13:55:02.961939 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mq7kr"] Dec 03 13:55:02 crc kubenswrapper[4986]: E1203 13:55:02.962761 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9a135c3-d98f-46b0-9980-96cfad5859ae" containerName="registry-server" Dec 03 13:55:02 crc kubenswrapper[4986]: I1203 13:55:02.962773 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9a135c3-d98f-46b0-9980-96cfad5859ae" containerName="registry-server" Dec 03 13:55:02 crc kubenswrapper[4986]: E1203 13:55:02.962798 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4302fded-456a-4d38-9d8c-310c9cc554a4" containerName="registry-server" Dec 03 13:55:02 crc kubenswrapper[4986]: I1203 13:55:02.962804 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="4302fded-456a-4d38-9d8c-310c9cc554a4" containerName="registry-server" Dec 03 13:55:02 crc kubenswrapper[4986]: E1203 13:55:02.962817 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9a135c3-d98f-46b0-9980-96cfad5859ae" containerName="extract-utilities" Dec 03 13:55:02 crc kubenswrapper[4986]: I1203 13:55:02.962823 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9a135c3-d98f-46b0-9980-96cfad5859ae" 
containerName="extract-utilities" Dec 03 13:55:02 crc kubenswrapper[4986]: E1203 13:55:02.962832 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4302fded-456a-4d38-9d8c-310c9cc554a4" containerName="extract-utilities" Dec 03 13:55:02 crc kubenswrapper[4986]: I1203 13:55:02.962839 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="4302fded-456a-4d38-9d8c-310c9cc554a4" containerName="extract-utilities" Dec 03 13:55:02 crc kubenswrapper[4986]: E1203 13:55:02.962857 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4302fded-456a-4d38-9d8c-310c9cc554a4" containerName="extract-content" Dec 03 13:55:02 crc kubenswrapper[4986]: I1203 13:55:02.962862 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="4302fded-456a-4d38-9d8c-310c9cc554a4" containerName="extract-content" Dec 03 13:55:02 crc kubenswrapper[4986]: E1203 13:55:02.962875 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9a135c3-d98f-46b0-9980-96cfad5859ae" containerName="extract-content" Dec 03 13:55:02 crc kubenswrapper[4986]: I1203 13:55:02.962881 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9a135c3-d98f-46b0-9980-96cfad5859ae" containerName="extract-content" Dec 03 13:55:02 crc kubenswrapper[4986]: I1203 13:55:02.963066 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="4302fded-456a-4d38-9d8c-310c9cc554a4" containerName="registry-server" Dec 03 13:55:02 crc kubenswrapper[4986]: I1203 13:55:02.963086 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9a135c3-d98f-46b0-9980-96cfad5859ae" containerName="registry-server" Dec 03 13:55:02 crc kubenswrapper[4986]: I1203 13:55:02.964503 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mq7kr"] Dec 03 13:55:02 crc kubenswrapper[4986]: I1203 13:55:02.964613 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mq7kr" Dec 03 13:55:03 crc kubenswrapper[4986]: I1203 13:55:03.019968 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0daffff-be83-4987-aad2-997013c8a8f8-catalog-content\") pod \"redhat-operators-mq7kr\" (UID: \"d0daffff-be83-4987-aad2-997013c8a8f8\") " pod="openshift-marketplace/redhat-operators-mq7kr" Dec 03 13:55:03 crc kubenswrapper[4986]: I1203 13:55:03.020035 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdsfx\" (UniqueName: \"kubernetes.io/projected/d0daffff-be83-4987-aad2-997013c8a8f8-kube-api-access-wdsfx\") pod \"redhat-operators-mq7kr\" (UID: \"d0daffff-be83-4987-aad2-997013c8a8f8\") " pod="openshift-marketplace/redhat-operators-mq7kr" Dec 03 13:55:03 crc kubenswrapper[4986]: I1203 13:55:03.020198 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0daffff-be83-4987-aad2-997013c8a8f8-utilities\") pod \"redhat-operators-mq7kr\" (UID: \"d0daffff-be83-4987-aad2-997013c8a8f8\") " pod="openshift-marketplace/redhat-operators-mq7kr" Dec 03 13:55:03 crc kubenswrapper[4986]: I1203 13:55:03.122970 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0daffff-be83-4987-aad2-997013c8a8f8-catalog-content\") pod \"redhat-operators-mq7kr\" (UID: \"d0daffff-be83-4987-aad2-997013c8a8f8\") " pod="openshift-marketplace/redhat-operators-mq7kr" Dec 03 13:55:03 crc kubenswrapper[4986]: I1203 13:55:03.123034 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdsfx\" (UniqueName: \"kubernetes.io/projected/d0daffff-be83-4987-aad2-997013c8a8f8-kube-api-access-wdsfx\") pod \"redhat-operators-mq7kr\" 
(UID: \"d0daffff-be83-4987-aad2-997013c8a8f8\") " pod="openshift-marketplace/redhat-operators-mq7kr" Dec 03 13:55:03 crc kubenswrapper[4986]: I1203 13:55:03.123070 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0daffff-be83-4987-aad2-997013c8a8f8-utilities\") pod \"redhat-operators-mq7kr\" (UID: \"d0daffff-be83-4987-aad2-997013c8a8f8\") " pod="openshift-marketplace/redhat-operators-mq7kr" Dec 03 13:55:03 crc kubenswrapper[4986]: I1203 13:55:03.123577 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0daffff-be83-4987-aad2-997013c8a8f8-utilities\") pod \"redhat-operators-mq7kr\" (UID: \"d0daffff-be83-4987-aad2-997013c8a8f8\") " pod="openshift-marketplace/redhat-operators-mq7kr" Dec 03 13:55:03 crc kubenswrapper[4986]: I1203 13:55:03.123577 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0daffff-be83-4987-aad2-997013c8a8f8-catalog-content\") pod \"redhat-operators-mq7kr\" (UID: \"d0daffff-be83-4987-aad2-997013c8a8f8\") " pod="openshift-marketplace/redhat-operators-mq7kr" Dec 03 13:55:03 crc kubenswrapper[4986]: I1203 13:55:03.147185 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdsfx\" (UniqueName: \"kubernetes.io/projected/d0daffff-be83-4987-aad2-997013c8a8f8-kube-api-access-wdsfx\") pod \"redhat-operators-mq7kr\" (UID: \"d0daffff-be83-4987-aad2-997013c8a8f8\") " pod="openshift-marketplace/redhat-operators-mq7kr" Dec 03 13:55:03 crc kubenswrapper[4986]: I1203 13:55:03.299371 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mq7kr" Dec 03 13:55:03 crc kubenswrapper[4986]: I1203 13:55:03.799754 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mq7kr"] Dec 03 13:55:04 crc kubenswrapper[4986]: I1203 13:55:04.703240 4986 generic.go:334] "Generic (PLEG): container finished" podID="d0daffff-be83-4987-aad2-997013c8a8f8" containerID="3f966f75ec2d98649c93bdbe07ad74baf8374519ff108ab64fa44ad9b8e09abd" exitCode=0 Dec 03 13:55:04 crc kubenswrapper[4986]: I1203 13:55:04.703332 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mq7kr" event={"ID":"d0daffff-be83-4987-aad2-997013c8a8f8","Type":"ContainerDied","Data":"3f966f75ec2d98649c93bdbe07ad74baf8374519ff108ab64fa44ad9b8e09abd"} Dec 03 13:55:04 crc kubenswrapper[4986]: I1203 13:55:04.703567 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mq7kr" event={"ID":"d0daffff-be83-4987-aad2-997013c8a8f8","Type":"ContainerStarted","Data":"f10eca3d416c89defaa751ec2b6914ffec80084bfbe812affbbc6208170c4fec"} Dec 03 13:55:05 crc kubenswrapper[4986]: I1203 13:55:05.716940 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mq7kr" event={"ID":"d0daffff-be83-4987-aad2-997013c8a8f8","Type":"ContainerStarted","Data":"0b8b3c69247cb9ece2ae73f07637b115e311cb1598de5a8bffc10d7f07bba1dc"} Dec 03 13:55:08 crc kubenswrapper[4986]: I1203 13:55:08.743152 4986 generic.go:334] "Generic (PLEG): container finished" podID="d0daffff-be83-4987-aad2-997013c8a8f8" containerID="0b8b3c69247cb9ece2ae73f07637b115e311cb1598de5a8bffc10d7f07bba1dc" exitCode=0 Dec 03 13:55:08 crc kubenswrapper[4986]: I1203 13:55:08.743230 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mq7kr" 
event={"ID":"d0daffff-be83-4987-aad2-997013c8a8f8","Type":"ContainerDied","Data":"0b8b3c69247cb9ece2ae73f07637b115e311cb1598de5a8bffc10d7f07bba1dc"} Dec 03 13:55:10 crc kubenswrapper[4986]: I1203 13:55:10.767376 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mq7kr" event={"ID":"d0daffff-be83-4987-aad2-997013c8a8f8","Type":"ContainerStarted","Data":"f4d9db6e2638c340984ede66079d640153e2a98c9e6413a8b33ce75aa9705b5f"} Dec 03 13:55:10 crc kubenswrapper[4986]: I1203 13:55:10.795003 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mq7kr" podStartSLOduration=3.7863461000000003 podStartE2EDuration="8.794975772s" podCreationTimestamp="2025-12-03 13:55:02 +0000 UTC" firstStartedPulling="2025-12-03 13:55:04.705396757 +0000 UTC m=+3564.171827948" lastFinishedPulling="2025-12-03 13:55:09.714026429 +0000 UTC m=+3569.180457620" observedRunningTime="2025-12-03 13:55:10.787788768 +0000 UTC m=+3570.254219959" watchObservedRunningTime="2025-12-03 13:55:10.794975772 +0000 UTC m=+3570.261406963" Dec 03 13:55:13 crc kubenswrapper[4986]: I1203 13:55:13.299746 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mq7kr" Dec 03 13:55:13 crc kubenswrapper[4986]: I1203 13:55:13.300296 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mq7kr" Dec 03 13:55:14 crc kubenswrapper[4986]: I1203 13:55:14.345701 4986 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mq7kr" podUID="d0daffff-be83-4987-aad2-997013c8a8f8" containerName="registry-server" probeResult="failure" output=< Dec 03 13:55:14 crc kubenswrapper[4986]: timeout: failed to connect service ":50051" within 1s Dec 03 13:55:14 crc kubenswrapper[4986]: > Dec 03 13:55:23 crc kubenswrapper[4986]: I1203 13:55:23.348987 4986 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mq7kr" Dec 03 13:55:23 crc kubenswrapper[4986]: I1203 13:55:23.397219 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mq7kr" Dec 03 13:55:24 crc kubenswrapper[4986]: I1203 13:55:24.904974 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mq7kr"] Dec 03 13:55:24 crc kubenswrapper[4986]: I1203 13:55:24.905570 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mq7kr" podUID="d0daffff-be83-4987-aad2-997013c8a8f8" containerName="registry-server" containerID="cri-o://f4d9db6e2638c340984ede66079d640153e2a98c9e6413a8b33ce75aa9705b5f" gracePeriod=2 Dec 03 13:55:25 crc kubenswrapper[4986]: I1203 13:55:25.389335 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mq7kr" Dec 03 13:55:25 crc kubenswrapper[4986]: I1203 13:55:25.479596 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0daffff-be83-4987-aad2-997013c8a8f8-catalog-content\") pod \"d0daffff-be83-4987-aad2-997013c8a8f8\" (UID: \"d0daffff-be83-4987-aad2-997013c8a8f8\") " Dec 03 13:55:25 crc kubenswrapper[4986]: I1203 13:55:25.479722 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdsfx\" (UniqueName: \"kubernetes.io/projected/d0daffff-be83-4987-aad2-997013c8a8f8-kube-api-access-wdsfx\") pod \"d0daffff-be83-4987-aad2-997013c8a8f8\" (UID: \"d0daffff-be83-4987-aad2-997013c8a8f8\") " Dec 03 13:55:25 crc kubenswrapper[4986]: I1203 13:55:25.479776 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0daffff-be83-4987-aad2-997013c8a8f8-utilities\") pod 
\"d0daffff-be83-4987-aad2-997013c8a8f8\" (UID: \"d0daffff-be83-4987-aad2-997013c8a8f8\") " Dec 03 13:55:25 crc kubenswrapper[4986]: I1203 13:55:25.480730 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0daffff-be83-4987-aad2-997013c8a8f8-utilities" (OuterVolumeSpecName: "utilities") pod "d0daffff-be83-4987-aad2-997013c8a8f8" (UID: "d0daffff-be83-4987-aad2-997013c8a8f8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:55:25 crc kubenswrapper[4986]: I1203 13:55:25.494127 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0daffff-be83-4987-aad2-997013c8a8f8-kube-api-access-wdsfx" (OuterVolumeSpecName: "kube-api-access-wdsfx") pod "d0daffff-be83-4987-aad2-997013c8a8f8" (UID: "d0daffff-be83-4987-aad2-997013c8a8f8"). InnerVolumeSpecName "kube-api-access-wdsfx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:55:25 crc kubenswrapper[4986]: I1203 13:55:25.582484 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdsfx\" (UniqueName: \"kubernetes.io/projected/d0daffff-be83-4987-aad2-997013c8a8f8-kube-api-access-wdsfx\") on node \"crc\" DevicePath \"\"" Dec 03 13:55:25 crc kubenswrapper[4986]: I1203 13:55:25.582858 4986 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0daffff-be83-4987-aad2-997013c8a8f8-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 13:55:25 crc kubenswrapper[4986]: I1203 13:55:25.585620 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0daffff-be83-4987-aad2-997013c8a8f8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d0daffff-be83-4987-aad2-997013c8a8f8" (UID: "d0daffff-be83-4987-aad2-997013c8a8f8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:55:25 crc kubenswrapper[4986]: I1203 13:55:25.684342 4986 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0daffff-be83-4987-aad2-997013c8a8f8-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 13:55:25 crc kubenswrapper[4986]: I1203 13:55:25.924244 4986 generic.go:334] "Generic (PLEG): container finished" podID="d0daffff-be83-4987-aad2-997013c8a8f8" containerID="f4d9db6e2638c340984ede66079d640153e2a98c9e6413a8b33ce75aa9705b5f" exitCode=0 Dec 03 13:55:25 crc kubenswrapper[4986]: I1203 13:55:25.924319 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mq7kr" event={"ID":"d0daffff-be83-4987-aad2-997013c8a8f8","Type":"ContainerDied","Data":"f4d9db6e2638c340984ede66079d640153e2a98c9e6413a8b33ce75aa9705b5f"} Dec 03 13:55:25 crc kubenswrapper[4986]: I1203 13:55:25.924355 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mq7kr" event={"ID":"d0daffff-be83-4987-aad2-997013c8a8f8","Type":"ContainerDied","Data":"f10eca3d416c89defaa751ec2b6914ffec80084bfbe812affbbc6208170c4fec"} Dec 03 13:55:25 crc kubenswrapper[4986]: I1203 13:55:25.924391 4986 scope.go:117] "RemoveContainer" containerID="f4d9db6e2638c340984ede66079d640153e2a98c9e6413a8b33ce75aa9705b5f" Dec 03 13:55:25 crc kubenswrapper[4986]: I1203 13:55:25.924617 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mq7kr" Dec 03 13:55:25 crc kubenswrapper[4986]: I1203 13:55:25.972066 4986 scope.go:117] "RemoveContainer" containerID="0b8b3c69247cb9ece2ae73f07637b115e311cb1598de5a8bffc10d7f07bba1dc" Dec 03 13:55:25 crc kubenswrapper[4986]: I1203 13:55:25.978126 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mq7kr"] Dec 03 13:55:25 crc kubenswrapper[4986]: I1203 13:55:25.988875 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mq7kr"] Dec 03 13:55:26 crc kubenswrapper[4986]: I1203 13:55:26.003753 4986 scope.go:117] "RemoveContainer" containerID="3f966f75ec2d98649c93bdbe07ad74baf8374519ff108ab64fa44ad9b8e09abd" Dec 03 13:55:26 crc kubenswrapper[4986]: I1203 13:55:26.056196 4986 scope.go:117] "RemoveContainer" containerID="f4d9db6e2638c340984ede66079d640153e2a98c9e6413a8b33ce75aa9705b5f" Dec 03 13:55:26 crc kubenswrapper[4986]: E1203 13:55:26.056901 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4d9db6e2638c340984ede66079d640153e2a98c9e6413a8b33ce75aa9705b5f\": container with ID starting with f4d9db6e2638c340984ede66079d640153e2a98c9e6413a8b33ce75aa9705b5f not found: ID does not exist" containerID="f4d9db6e2638c340984ede66079d640153e2a98c9e6413a8b33ce75aa9705b5f" Dec 03 13:55:26 crc kubenswrapper[4986]: I1203 13:55:26.056968 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4d9db6e2638c340984ede66079d640153e2a98c9e6413a8b33ce75aa9705b5f"} err="failed to get container status \"f4d9db6e2638c340984ede66079d640153e2a98c9e6413a8b33ce75aa9705b5f\": rpc error: code = NotFound desc = could not find container \"f4d9db6e2638c340984ede66079d640153e2a98c9e6413a8b33ce75aa9705b5f\": container with ID starting with f4d9db6e2638c340984ede66079d640153e2a98c9e6413a8b33ce75aa9705b5f not found: ID does 
not exist" Dec 03 13:55:26 crc kubenswrapper[4986]: I1203 13:55:26.057003 4986 scope.go:117] "RemoveContainer" containerID="0b8b3c69247cb9ece2ae73f07637b115e311cb1598de5a8bffc10d7f07bba1dc" Dec 03 13:55:26 crc kubenswrapper[4986]: E1203 13:55:26.058271 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b8b3c69247cb9ece2ae73f07637b115e311cb1598de5a8bffc10d7f07bba1dc\": container with ID starting with 0b8b3c69247cb9ece2ae73f07637b115e311cb1598de5a8bffc10d7f07bba1dc not found: ID does not exist" containerID="0b8b3c69247cb9ece2ae73f07637b115e311cb1598de5a8bffc10d7f07bba1dc" Dec 03 13:55:26 crc kubenswrapper[4986]: I1203 13:55:26.058343 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b8b3c69247cb9ece2ae73f07637b115e311cb1598de5a8bffc10d7f07bba1dc"} err="failed to get container status \"0b8b3c69247cb9ece2ae73f07637b115e311cb1598de5a8bffc10d7f07bba1dc\": rpc error: code = NotFound desc = could not find container \"0b8b3c69247cb9ece2ae73f07637b115e311cb1598de5a8bffc10d7f07bba1dc\": container with ID starting with 0b8b3c69247cb9ece2ae73f07637b115e311cb1598de5a8bffc10d7f07bba1dc not found: ID does not exist" Dec 03 13:55:26 crc kubenswrapper[4986]: I1203 13:55:26.058405 4986 scope.go:117] "RemoveContainer" containerID="3f966f75ec2d98649c93bdbe07ad74baf8374519ff108ab64fa44ad9b8e09abd" Dec 03 13:55:26 crc kubenswrapper[4986]: E1203 13:55:26.058727 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f966f75ec2d98649c93bdbe07ad74baf8374519ff108ab64fa44ad9b8e09abd\": container with ID starting with 3f966f75ec2d98649c93bdbe07ad74baf8374519ff108ab64fa44ad9b8e09abd not found: ID does not exist" containerID="3f966f75ec2d98649c93bdbe07ad74baf8374519ff108ab64fa44ad9b8e09abd" Dec 03 13:55:26 crc kubenswrapper[4986]: I1203 13:55:26.058852 4986 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f966f75ec2d98649c93bdbe07ad74baf8374519ff108ab64fa44ad9b8e09abd"} err="failed to get container status \"3f966f75ec2d98649c93bdbe07ad74baf8374519ff108ab64fa44ad9b8e09abd\": rpc error: code = NotFound desc = could not find container \"3f966f75ec2d98649c93bdbe07ad74baf8374519ff108ab64fa44ad9b8e09abd\": container with ID starting with 3f966f75ec2d98649c93bdbe07ad74baf8374519ff108ab64fa44ad9b8e09abd not found: ID does not exist" Dec 03 13:55:26 crc kubenswrapper[4986]: I1203 13:55:26.953391 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0daffff-be83-4987-aad2-997013c8a8f8" path="/var/lib/kubelet/pods/d0daffff-be83-4987-aad2-997013c8a8f8/volumes" Dec 03 13:56:03 crc kubenswrapper[4986]: I1203 13:56:03.491032 4986 patch_prober.go:28] interesting pod/machine-config-daemon-xggpj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 13:56:03 crc kubenswrapper[4986]: I1203 13:56:03.492146 4986 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 13:56:33 crc kubenswrapper[4986]: I1203 13:56:33.490933 4986 patch_prober.go:28] interesting pod/machine-config-daemon-xggpj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 13:56:33 crc kubenswrapper[4986]: I1203 13:56:33.491698 4986 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 13:56:37 crc kubenswrapper[4986]: I1203 13:56:37.109776 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tmxm6"] Dec 03 13:56:37 crc kubenswrapper[4986]: E1203 13:56:37.114521 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0daffff-be83-4987-aad2-997013c8a8f8" containerName="extract-content" Dec 03 13:56:37 crc kubenswrapper[4986]: I1203 13:56:37.114560 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0daffff-be83-4987-aad2-997013c8a8f8" containerName="extract-content" Dec 03 13:56:37 crc kubenswrapper[4986]: E1203 13:56:37.114597 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0daffff-be83-4987-aad2-997013c8a8f8" containerName="registry-server" Dec 03 13:56:37 crc kubenswrapper[4986]: I1203 13:56:37.114605 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0daffff-be83-4987-aad2-997013c8a8f8" containerName="registry-server" Dec 03 13:56:37 crc kubenswrapper[4986]: E1203 13:56:37.114659 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0daffff-be83-4987-aad2-997013c8a8f8" containerName="extract-utilities" Dec 03 13:56:37 crc kubenswrapper[4986]: I1203 13:56:37.114667 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0daffff-be83-4987-aad2-997013c8a8f8" containerName="extract-utilities" Dec 03 13:56:37 crc kubenswrapper[4986]: I1203 13:56:37.114965 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0daffff-be83-4987-aad2-997013c8a8f8" containerName="registry-server" Dec 03 13:56:37 crc kubenswrapper[4986]: I1203 13:56:37.159039 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tmxm6" Dec 03 13:56:37 crc kubenswrapper[4986]: I1203 13:56:37.173122 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tmxm6"] Dec 03 13:56:37 crc kubenswrapper[4986]: I1203 13:56:37.301522 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2qd9\" (UniqueName: \"kubernetes.io/projected/87eefbf7-dc1a-41ea-9a63-818aea56e77f-kube-api-access-d2qd9\") pod \"certified-operators-tmxm6\" (UID: \"87eefbf7-dc1a-41ea-9a63-818aea56e77f\") " pod="openshift-marketplace/certified-operators-tmxm6" Dec 03 13:56:37 crc kubenswrapper[4986]: I1203 13:56:37.301626 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87eefbf7-dc1a-41ea-9a63-818aea56e77f-utilities\") pod \"certified-operators-tmxm6\" (UID: \"87eefbf7-dc1a-41ea-9a63-818aea56e77f\") " pod="openshift-marketplace/certified-operators-tmxm6" Dec 03 13:56:37 crc kubenswrapper[4986]: I1203 13:56:37.301694 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87eefbf7-dc1a-41ea-9a63-818aea56e77f-catalog-content\") pod \"certified-operators-tmxm6\" (UID: \"87eefbf7-dc1a-41ea-9a63-818aea56e77f\") " pod="openshift-marketplace/certified-operators-tmxm6" Dec 03 13:56:37 crc kubenswrapper[4986]: I1203 13:56:37.403682 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87eefbf7-dc1a-41ea-9a63-818aea56e77f-utilities\") pod \"certified-operators-tmxm6\" (UID: \"87eefbf7-dc1a-41ea-9a63-818aea56e77f\") " pod="openshift-marketplace/certified-operators-tmxm6" Dec 03 13:56:37 crc kubenswrapper[4986]: I1203 13:56:37.403726 4986 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87eefbf7-dc1a-41ea-9a63-818aea56e77f-catalog-content\") pod \"certified-operators-tmxm6\" (UID: \"87eefbf7-dc1a-41ea-9a63-818aea56e77f\") " pod="openshift-marketplace/certified-operators-tmxm6" Dec 03 13:56:37 crc kubenswrapper[4986]: I1203 13:56:37.403841 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2qd9\" (UniqueName: \"kubernetes.io/projected/87eefbf7-dc1a-41ea-9a63-818aea56e77f-kube-api-access-d2qd9\") pod \"certified-operators-tmxm6\" (UID: \"87eefbf7-dc1a-41ea-9a63-818aea56e77f\") " pod="openshift-marketplace/certified-operators-tmxm6" Dec 03 13:56:37 crc kubenswrapper[4986]: I1203 13:56:37.404327 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87eefbf7-dc1a-41ea-9a63-818aea56e77f-catalog-content\") pod \"certified-operators-tmxm6\" (UID: \"87eefbf7-dc1a-41ea-9a63-818aea56e77f\") " pod="openshift-marketplace/certified-operators-tmxm6" Dec 03 13:56:37 crc kubenswrapper[4986]: I1203 13:56:37.404910 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87eefbf7-dc1a-41ea-9a63-818aea56e77f-utilities\") pod \"certified-operators-tmxm6\" (UID: \"87eefbf7-dc1a-41ea-9a63-818aea56e77f\") " pod="openshift-marketplace/certified-operators-tmxm6" Dec 03 13:56:37 crc kubenswrapper[4986]: I1203 13:56:37.426631 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2qd9\" (UniqueName: \"kubernetes.io/projected/87eefbf7-dc1a-41ea-9a63-818aea56e77f-kube-api-access-d2qd9\") pod \"certified-operators-tmxm6\" (UID: \"87eefbf7-dc1a-41ea-9a63-818aea56e77f\") " pod="openshift-marketplace/certified-operators-tmxm6" Dec 03 13:56:37 crc kubenswrapper[4986]: I1203 13:56:37.492077 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tmxm6" Dec 03 13:56:38 crc kubenswrapper[4986]: I1203 13:56:38.119615 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tmxm6"] Dec 03 13:56:38 crc kubenswrapper[4986]: I1203 13:56:38.590196 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tmxm6" event={"ID":"87eefbf7-dc1a-41ea-9a63-818aea56e77f","Type":"ContainerStarted","Data":"9a66237649ea1e6ca4acf5d30af2b04ba8e37aa22a7862f09b1a17f940ad3b13"} Dec 03 13:56:39 crc kubenswrapper[4986]: I1203 13:56:39.602338 4986 generic.go:334] "Generic (PLEG): container finished" podID="87eefbf7-dc1a-41ea-9a63-818aea56e77f" containerID="87cec8285f2334dc1294f9ab130a56c583a304a7aa45d99602911b617503ac51" exitCode=0 Dec 03 13:56:39 crc kubenswrapper[4986]: I1203 13:56:39.602719 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tmxm6" event={"ID":"87eefbf7-dc1a-41ea-9a63-818aea56e77f","Type":"ContainerDied","Data":"87cec8285f2334dc1294f9ab130a56c583a304a7aa45d99602911b617503ac51"} Dec 03 13:56:40 crc kubenswrapper[4986]: I1203 13:56:40.615475 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tmxm6" event={"ID":"87eefbf7-dc1a-41ea-9a63-818aea56e77f","Type":"ContainerStarted","Data":"57a268494a1003be5c101f50f3916acf6045ad8092d797dc8025cacb83e9778b"} Dec 03 13:56:41 crc kubenswrapper[4986]: I1203 13:56:41.626452 4986 generic.go:334] "Generic (PLEG): container finished" podID="87eefbf7-dc1a-41ea-9a63-818aea56e77f" containerID="57a268494a1003be5c101f50f3916acf6045ad8092d797dc8025cacb83e9778b" exitCode=0 Dec 03 13:56:41 crc kubenswrapper[4986]: I1203 13:56:41.626716 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tmxm6" 
event={"ID":"87eefbf7-dc1a-41ea-9a63-818aea56e77f","Type":"ContainerDied","Data":"57a268494a1003be5c101f50f3916acf6045ad8092d797dc8025cacb83e9778b"} Dec 03 13:56:42 crc kubenswrapper[4986]: I1203 13:56:42.648070 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tmxm6" event={"ID":"87eefbf7-dc1a-41ea-9a63-818aea56e77f","Type":"ContainerStarted","Data":"b826314d0b264e78f8c0503c92207f79e3302cbace9097bd550f1302788c5f31"} Dec 03 13:56:42 crc kubenswrapper[4986]: I1203 13:56:42.664729 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tmxm6" podStartSLOduration=3.164290201 podStartE2EDuration="5.664712704s" podCreationTimestamp="2025-12-03 13:56:37 +0000 UTC" firstStartedPulling="2025-12-03 13:56:39.604693736 +0000 UTC m=+3659.071124927" lastFinishedPulling="2025-12-03 13:56:42.105116239 +0000 UTC m=+3661.571547430" observedRunningTime="2025-12-03 13:56:42.662090543 +0000 UTC m=+3662.128521754" watchObservedRunningTime="2025-12-03 13:56:42.664712704 +0000 UTC m=+3662.131143895" Dec 03 13:56:47 crc kubenswrapper[4986]: I1203 13:56:47.493262 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tmxm6" Dec 03 13:56:47 crc kubenswrapper[4986]: I1203 13:56:47.493896 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tmxm6" Dec 03 13:56:47 crc kubenswrapper[4986]: I1203 13:56:47.554992 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tmxm6" Dec 03 13:56:47 crc kubenswrapper[4986]: I1203 13:56:47.755666 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tmxm6" Dec 03 13:56:47 crc kubenswrapper[4986]: I1203 13:56:47.810907 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-tmxm6"] Dec 03 13:56:49 crc kubenswrapper[4986]: I1203 13:56:49.715446 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tmxm6" podUID="87eefbf7-dc1a-41ea-9a63-818aea56e77f" containerName="registry-server" containerID="cri-o://b826314d0b264e78f8c0503c92207f79e3302cbace9097bd550f1302788c5f31" gracePeriod=2 Dec 03 13:56:49 crc kubenswrapper[4986]: E1203 13:56:49.791310 4986 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87eefbf7_dc1a_41ea_9a63_818aea56e77f.slice/crio-b826314d0b264e78f8c0503c92207f79e3302cbace9097bd550f1302788c5f31.scope\": RecentStats: unable to find data in memory cache]" Dec 03 13:56:50 crc kubenswrapper[4986]: I1203 13:56:50.227694 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tmxm6" Dec 03 13:56:50 crc kubenswrapper[4986]: I1203 13:56:50.377317 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87eefbf7-dc1a-41ea-9a63-818aea56e77f-catalog-content\") pod \"87eefbf7-dc1a-41ea-9a63-818aea56e77f\" (UID: \"87eefbf7-dc1a-41ea-9a63-818aea56e77f\") " Dec 03 13:56:50 crc kubenswrapper[4986]: I1203 13:56:50.377922 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2qd9\" (UniqueName: \"kubernetes.io/projected/87eefbf7-dc1a-41ea-9a63-818aea56e77f-kube-api-access-d2qd9\") pod \"87eefbf7-dc1a-41ea-9a63-818aea56e77f\" (UID: \"87eefbf7-dc1a-41ea-9a63-818aea56e77f\") " Dec 03 13:56:50 crc kubenswrapper[4986]: I1203 13:56:50.377983 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/87eefbf7-dc1a-41ea-9a63-818aea56e77f-utilities\") pod \"87eefbf7-dc1a-41ea-9a63-818aea56e77f\" (UID: \"87eefbf7-dc1a-41ea-9a63-818aea56e77f\") " Dec 03 13:56:50 crc kubenswrapper[4986]: I1203 13:56:50.379136 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87eefbf7-dc1a-41ea-9a63-818aea56e77f-utilities" (OuterVolumeSpecName: "utilities") pod "87eefbf7-dc1a-41ea-9a63-818aea56e77f" (UID: "87eefbf7-dc1a-41ea-9a63-818aea56e77f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:56:50 crc kubenswrapper[4986]: I1203 13:56:50.389004 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87eefbf7-dc1a-41ea-9a63-818aea56e77f-kube-api-access-d2qd9" (OuterVolumeSpecName: "kube-api-access-d2qd9") pod "87eefbf7-dc1a-41ea-9a63-818aea56e77f" (UID: "87eefbf7-dc1a-41ea-9a63-818aea56e77f"). InnerVolumeSpecName "kube-api-access-d2qd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:56:50 crc kubenswrapper[4986]: I1203 13:56:50.425791 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87eefbf7-dc1a-41ea-9a63-818aea56e77f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "87eefbf7-dc1a-41ea-9a63-818aea56e77f" (UID: "87eefbf7-dc1a-41ea-9a63-818aea56e77f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:56:50 crc kubenswrapper[4986]: I1203 13:56:50.480441 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2qd9\" (UniqueName: \"kubernetes.io/projected/87eefbf7-dc1a-41ea-9a63-818aea56e77f-kube-api-access-d2qd9\") on node \"crc\" DevicePath \"\"" Dec 03 13:56:50 crc kubenswrapper[4986]: I1203 13:56:50.480484 4986 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87eefbf7-dc1a-41ea-9a63-818aea56e77f-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 13:56:50 crc kubenswrapper[4986]: I1203 13:56:50.480498 4986 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87eefbf7-dc1a-41ea-9a63-818aea56e77f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 13:56:50 crc kubenswrapper[4986]: I1203 13:56:50.729087 4986 generic.go:334] "Generic (PLEG): container finished" podID="87eefbf7-dc1a-41ea-9a63-818aea56e77f" containerID="b826314d0b264e78f8c0503c92207f79e3302cbace9097bd550f1302788c5f31" exitCode=0 Dec 03 13:56:50 crc kubenswrapper[4986]: I1203 13:56:50.729158 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tmxm6" event={"ID":"87eefbf7-dc1a-41ea-9a63-818aea56e77f","Type":"ContainerDied","Data":"b826314d0b264e78f8c0503c92207f79e3302cbace9097bd550f1302788c5f31"} Dec 03 13:56:50 crc kubenswrapper[4986]: I1203 13:56:50.729188 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tmxm6" event={"ID":"87eefbf7-dc1a-41ea-9a63-818aea56e77f","Type":"ContainerDied","Data":"9a66237649ea1e6ca4acf5d30af2b04ba8e37aa22a7862f09b1a17f940ad3b13"} Dec 03 13:56:50 crc kubenswrapper[4986]: I1203 13:56:50.729208 4986 scope.go:117] "RemoveContainer" containerID="b826314d0b264e78f8c0503c92207f79e3302cbace9097bd550f1302788c5f31" Dec 03 13:56:50 crc kubenswrapper[4986]: I1203 
13:56:50.729371 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tmxm6" Dec 03 13:56:50 crc kubenswrapper[4986]: I1203 13:56:50.772908 4986 scope.go:117] "RemoveContainer" containerID="57a268494a1003be5c101f50f3916acf6045ad8092d797dc8025cacb83e9778b" Dec 03 13:56:50 crc kubenswrapper[4986]: I1203 13:56:50.773039 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tmxm6"] Dec 03 13:56:50 crc kubenswrapper[4986]: I1203 13:56:50.782193 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tmxm6"] Dec 03 13:56:50 crc kubenswrapper[4986]: I1203 13:56:50.803047 4986 scope.go:117] "RemoveContainer" containerID="87cec8285f2334dc1294f9ab130a56c583a304a7aa45d99602911b617503ac51" Dec 03 13:56:50 crc kubenswrapper[4986]: I1203 13:56:50.864805 4986 scope.go:117] "RemoveContainer" containerID="b826314d0b264e78f8c0503c92207f79e3302cbace9097bd550f1302788c5f31" Dec 03 13:56:50 crc kubenswrapper[4986]: E1203 13:56:50.865475 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b826314d0b264e78f8c0503c92207f79e3302cbace9097bd550f1302788c5f31\": container with ID starting with b826314d0b264e78f8c0503c92207f79e3302cbace9097bd550f1302788c5f31 not found: ID does not exist" containerID="b826314d0b264e78f8c0503c92207f79e3302cbace9097bd550f1302788c5f31" Dec 03 13:56:50 crc kubenswrapper[4986]: I1203 13:56:50.865540 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b826314d0b264e78f8c0503c92207f79e3302cbace9097bd550f1302788c5f31"} err="failed to get container status \"b826314d0b264e78f8c0503c92207f79e3302cbace9097bd550f1302788c5f31\": rpc error: code = NotFound desc = could not find container \"b826314d0b264e78f8c0503c92207f79e3302cbace9097bd550f1302788c5f31\": container with ID starting with 
b826314d0b264e78f8c0503c92207f79e3302cbace9097bd550f1302788c5f31 not found: ID does not exist" Dec 03 13:56:50 crc kubenswrapper[4986]: I1203 13:56:50.865569 4986 scope.go:117] "RemoveContainer" containerID="57a268494a1003be5c101f50f3916acf6045ad8092d797dc8025cacb83e9778b" Dec 03 13:56:50 crc kubenswrapper[4986]: E1203 13:56:50.866171 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57a268494a1003be5c101f50f3916acf6045ad8092d797dc8025cacb83e9778b\": container with ID starting with 57a268494a1003be5c101f50f3916acf6045ad8092d797dc8025cacb83e9778b not found: ID does not exist" containerID="57a268494a1003be5c101f50f3916acf6045ad8092d797dc8025cacb83e9778b" Dec 03 13:56:50 crc kubenswrapper[4986]: I1203 13:56:50.866218 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57a268494a1003be5c101f50f3916acf6045ad8092d797dc8025cacb83e9778b"} err="failed to get container status \"57a268494a1003be5c101f50f3916acf6045ad8092d797dc8025cacb83e9778b\": rpc error: code = NotFound desc = could not find container \"57a268494a1003be5c101f50f3916acf6045ad8092d797dc8025cacb83e9778b\": container with ID starting with 57a268494a1003be5c101f50f3916acf6045ad8092d797dc8025cacb83e9778b not found: ID does not exist" Dec 03 13:56:50 crc kubenswrapper[4986]: I1203 13:56:50.866246 4986 scope.go:117] "RemoveContainer" containerID="87cec8285f2334dc1294f9ab130a56c583a304a7aa45d99602911b617503ac51" Dec 03 13:56:50 crc kubenswrapper[4986]: E1203 13:56:50.866644 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87cec8285f2334dc1294f9ab130a56c583a304a7aa45d99602911b617503ac51\": container with ID starting with 87cec8285f2334dc1294f9ab130a56c583a304a7aa45d99602911b617503ac51 not found: ID does not exist" containerID="87cec8285f2334dc1294f9ab130a56c583a304a7aa45d99602911b617503ac51" Dec 03 13:56:50 crc 
kubenswrapper[4986]: I1203 13:56:50.866677 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87cec8285f2334dc1294f9ab130a56c583a304a7aa45d99602911b617503ac51"} err="failed to get container status \"87cec8285f2334dc1294f9ab130a56c583a304a7aa45d99602911b617503ac51\": rpc error: code = NotFound desc = could not find container \"87cec8285f2334dc1294f9ab130a56c583a304a7aa45d99602911b617503ac51\": container with ID starting with 87cec8285f2334dc1294f9ab130a56c583a304a7aa45d99602911b617503ac51 not found: ID does not exist" Dec 03 13:56:50 crc kubenswrapper[4986]: I1203 13:56:50.959525 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87eefbf7-dc1a-41ea-9a63-818aea56e77f" path="/var/lib/kubelet/pods/87eefbf7-dc1a-41ea-9a63-818aea56e77f/volumes" Dec 03 13:57:03 crc kubenswrapper[4986]: I1203 13:57:03.491140 4986 patch_prober.go:28] interesting pod/machine-config-daemon-xggpj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 13:57:03 crc kubenswrapper[4986]: I1203 13:57:03.491807 4986 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 13:57:03 crc kubenswrapper[4986]: I1203 13:57:03.491867 4986 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" Dec 03 13:57:03 crc kubenswrapper[4986]: I1203 13:57:03.492752 4986 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"f4aa16a78e6dc7399cb3f0bba100cc3ca3d479a66dc5b2e6c12ee775b73c7cc4"} pod="openshift-machine-config-operator/machine-config-daemon-xggpj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 13:57:03 crc kubenswrapper[4986]: I1203 13:57:03.493471 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" containerID="cri-o://f4aa16a78e6dc7399cb3f0bba100cc3ca3d479a66dc5b2e6c12ee775b73c7cc4" gracePeriod=600 Dec 03 13:57:03 crc kubenswrapper[4986]: E1203 13:57:03.625703 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:57:03 crc kubenswrapper[4986]: I1203 13:57:03.867166 4986 generic.go:334] "Generic (PLEG): container finished" podID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerID="f4aa16a78e6dc7399cb3f0bba100cc3ca3d479a66dc5b2e6c12ee775b73c7cc4" exitCode=0 Dec 03 13:57:03 crc kubenswrapper[4986]: I1203 13:57:03.867207 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" event={"ID":"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c","Type":"ContainerDied","Data":"f4aa16a78e6dc7399cb3f0bba100cc3ca3d479a66dc5b2e6c12ee775b73c7cc4"} Dec 03 13:57:03 crc kubenswrapper[4986]: I1203 13:57:03.867560 4986 scope.go:117] "RemoveContainer" containerID="958267ff706c66b6b925066f50b19f9b728e7a63b4775cbe35b93517f57da144" Dec 03 13:57:03 crc kubenswrapper[4986]: I1203 13:57:03.868101 4986 
scope.go:117] "RemoveContainer" containerID="f4aa16a78e6dc7399cb3f0bba100cc3ca3d479a66dc5b2e6c12ee775b73c7cc4" Dec 03 13:57:03 crc kubenswrapper[4986]: E1203 13:57:03.868589 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:57:18 crc kubenswrapper[4986]: I1203 13:57:18.944330 4986 scope.go:117] "RemoveContainer" containerID="f4aa16a78e6dc7399cb3f0bba100cc3ca3d479a66dc5b2e6c12ee775b73c7cc4" Dec 03 13:57:18 crc kubenswrapper[4986]: E1203 13:57:18.945628 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:57:31 crc kubenswrapper[4986]: I1203 13:57:31.944083 4986 scope.go:117] "RemoveContainer" containerID="f4aa16a78e6dc7399cb3f0bba100cc3ca3d479a66dc5b2e6c12ee775b73c7cc4" Dec 03 13:57:31 crc kubenswrapper[4986]: E1203 13:57:31.945405 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:57:44 crc kubenswrapper[4986]: I1203 
13:57:44.944011 4986 scope.go:117] "RemoveContainer" containerID="f4aa16a78e6dc7399cb3f0bba100cc3ca3d479a66dc5b2e6c12ee775b73c7cc4" Dec 03 13:57:44 crc kubenswrapper[4986]: E1203 13:57:44.944802 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:57:59 crc kubenswrapper[4986]: I1203 13:57:59.943865 4986 scope.go:117] "RemoveContainer" containerID="f4aa16a78e6dc7399cb3f0bba100cc3ca3d479a66dc5b2e6c12ee775b73c7cc4" Dec 03 13:57:59 crc kubenswrapper[4986]: E1203 13:57:59.944716 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:58:13 crc kubenswrapper[4986]: I1203 13:58:13.943242 4986 scope.go:117] "RemoveContainer" containerID="f4aa16a78e6dc7399cb3f0bba100cc3ca3d479a66dc5b2e6c12ee775b73c7cc4" Dec 03 13:58:13 crc kubenswrapper[4986]: E1203 13:58:13.944093 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:58:25 crc 
kubenswrapper[4986]: I1203 13:58:25.943948 4986 scope.go:117] "RemoveContainer" containerID="f4aa16a78e6dc7399cb3f0bba100cc3ca3d479a66dc5b2e6c12ee775b73c7cc4" Dec 03 13:58:25 crc kubenswrapper[4986]: E1203 13:58:25.944748 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:58:37 crc kubenswrapper[4986]: I1203 13:58:37.943252 4986 scope.go:117] "RemoveContainer" containerID="f4aa16a78e6dc7399cb3f0bba100cc3ca3d479a66dc5b2e6c12ee775b73c7cc4" Dec 03 13:58:37 crc kubenswrapper[4986]: E1203 13:58:37.944135 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:58:51 crc kubenswrapper[4986]: I1203 13:58:51.943560 4986 scope.go:117] "RemoveContainer" containerID="f4aa16a78e6dc7399cb3f0bba100cc3ca3d479a66dc5b2e6c12ee775b73c7cc4" Dec 03 13:58:51 crc kubenswrapper[4986]: E1203 13:58:51.944579 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 
03 13:59:06 crc kubenswrapper[4986]: I1203 13:59:06.943513 4986 scope.go:117] "RemoveContainer" containerID="f4aa16a78e6dc7399cb3f0bba100cc3ca3d479a66dc5b2e6c12ee775b73c7cc4" Dec 03 13:59:06 crc kubenswrapper[4986]: E1203 13:59:06.944318 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:59:18 crc kubenswrapper[4986]: I1203 13:59:18.944193 4986 scope.go:117] "RemoveContainer" containerID="f4aa16a78e6dc7399cb3f0bba100cc3ca3d479a66dc5b2e6c12ee775b73c7cc4" Dec 03 13:59:18 crc kubenswrapper[4986]: E1203 13:59:18.945338 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:59:29 crc kubenswrapper[4986]: I1203 13:59:29.209980 4986 generic.go:334] "Generic (PLEG): container finished" podID="4159adcb-0a7a-4765-ac54-186effebee8e" containerID="4172b99320d799d9259c74f1c5320b3163d2de38b22bf2e9e3a7d62653b419e4" exitCode=0 Dec 03 13:59:29 crc kubenswrapper[4986]: I1203 13:59:29.210110 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"4159adcb-0a7a-4765-ac54-186effebee8e","Type":"ContainerDied","Data":"4172b99320d799d9259c74f1c5320b3163d2de38b22bf2e9e3a7d62653b419e4"} Dec 03 13:59:29 crc kubenswrapper[4986]: I1203 13:59:29.944675 4986 scope.go:117] 
"RemoveContainer" containerID="f4aa16a78e6dc7399cb3f0bba100cc3ca3d479a66dc5b2e6c12ee775b73c7cc4" Dec 03 13:59:29 crc kubenswrapper[4986]: E1203 13:59:29.945413 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:59:30 crc kubenswrapper[4986]: I1203 13:59:30.619445 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 03 13:59:30 crc kubenswrapper[4986]: I1203 13:59:30.777022 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/4159adcb-0a7a-4765-ac54-186effebee8e-test-operator-ephemeral-workdir\") pod \"4159adcb-0a7a-4765-ac54-186effebee8e\" (UID: \"4159adcb-0a7a-4765-ac54-186effebee8e\") " Dec 03 13:59:30 crc kubenswrapper[4986]: I1203 13:59:30.777338 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4159adcb-0a7a-4765-ac54-186effebee8e-openstack-config-secret\") pod \"4159adcb-0a7a-4765-ac54-186effebee8e\" (UID: \"4159adcb-0a7a-4765-ac54-186effebee8e\") " Dec 03 13:59:30 crc kubenswrapper[4986]: I1203 13:59:30.777516 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4159adcb-0a7a-4765-ac54-186effebee8e-config-data\") pod \"4159adcb-0a7a-4765-ac54-186effebee8e\" (UID: \"4159adcb-0a7a-4765-ac54-186effebee8e\") " Dec 03 13:59:30 crc kubenswrapper[4986]: I1203 13:59:30.777611 4986 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4159adcb-0a7a-4765-ac54-186effebee8e-ssh-key\") pod \"4159adcb-0a7a-4765-ac54-186effebee8e\" (UID: \"4159adcb-0a7a-4765-ac54-186effebee8e\") " Dec 03 13:59:30 crc kubenswrapper[4986]: I1203 13:59:30.777721 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4qv5\" (UniqueName: \"kubernetes.io/projected/4159adcb-0a7a-4765-ac54-186effebee8e-kube-api-access-g4qv5\") pod \"4159adcb-0a7a-4765-ac54-186effebee8e\" (UID: \"4159adcb-0a7a-4765-ac54-186effebee8e\") " Dec 03 13:59:30 crc kubenswrapper[4986]: I1203 13:59:30.777864 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/4159adcb-0a7a-4765-ac54-186effebee8e-test-operator-ephemeral-temporary\") pod \"4159adcb-0a7a-4765-ac54-186effebee8e\" (UID: \"4159adcb-0a7a-4765-ac54-186effebee8e\") " Dec 03 13:59:30 crc kubenswrapper[4986]: I1203 13:59:30.777975 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4159adcb-0a7a-4765-ac54-186effebee8e-openstack-config\") pod \"4159adcb-0a7a-4765-ac54-186effebee8e\" (UID: \"4159adcb-0a7a-4765-ac54-186effebee8e\") " Dec 03 13:59:30 crc kubenswrapper[4986]: I1203 13:59:30.778118 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"4159adcb-0a7a-4765-ac54-186effebee8e\" (UID: \"4159adcb-0a7a-4765-ac54-186effebee8e\") " Dec 03 13:59:30 crc kubenswrapper[4986]: I1203 13:59:30.778385 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/4159adcb-0a7a-4765-ac54-186effebee8e-ca-certs\") pod \"4159adcb-0a7a-4765-ac54-186effebee8e\" (UID: 
\"4159adcb-0a7a-4765-ac54-186effebee8e\") " Dec 03 13:59:30 crc kubenswrapper[4986]: I1203 13:59:30.778454 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4159adcb-0a7a-4765-ac54-186effebee8e-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "4159adcb-0a7a-4765-ac54-186effebee8e" (UID: "4159adcb-0a7a-4765-ac54-186effebee8e"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:59:30 crc kubenswrapper[4986]: I1203 13:59:30.779075 4986 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/4159adcb-0a7a-4765-ac54-186effebee8e-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Dec 03 13:59:30 crc kubenswrapper[4986]: I1203 13:59:30.779677 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4159adcb-0a7a-4765-ac54-186effebee8e-config-data" (OuterVolumeSpecName: "config-data") pod "4159adcb-0a7a-4765-ac54-186effebee8e" (UID: "4159adcb-0a7a-4765-ac54-186effebee8e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:59:30 crc kubenswrapper[4986]: I1203 13:59:30.779900 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4159adcb-0a7a-4765-ac54-186effebee8e-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "4159adcb-0a7a-4765-ac54-186effebee8e" (UID: "4159adcb-0a7a-4765-ac54-186effebee8e"). InnerVolumeSpecName "test-operator-ephemeral-workdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:59:30 crc kubenswrapper[4986]: I1203 13:59:30.792854 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "test-operator-logs") pod "4159adcb-0a7a-4765-ac54-186effebee8e" (UID: "4159adcb-0a7a-4765-ac54-186effebee8e"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 03 13:59:30 crc kubenswrapper[4986]: I1203 13:59:30.794136 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4159adcb-0a7a-4765-ac54-186effebee8e-kube-api-access-g4qv5" (OuterVolumeSpecName: "kube-api-access-g4qv5") pod "4159adcb-0a7a-4765-ac54-186effebee8e" (UID: "4159adcb-0a7a-4765-ac54-186effebee8e"). InnerVolumeSpecName "kube-api-access-g4qv5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:59:30 crc kubenswrapper[4986]: I1203 13:59:30.809121 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4159adcb-0a7a-4765-ac54-186effebee8e-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "4159adcb-0a7a-4765-ac54-186effebee8e" (UID: "4159adcb-0a7a-4765-ac54-186effebee8e"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:59:30 crc kubenswrapper[4986]: I1203 13:59:30.827658 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4159adcb-0a7a-4765-ac54-186effebee8e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4159adcb-0a7a-4765-ac54-186effebee8e" (UID: "4159adcb-0a7a-4765-ac54-186effebee8e"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:59:30 crc kubenswrapper[4986]: I1203 13:59:30.830955 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4159adcb-0a7a-4765-ac54-186effebee8e-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "4159adcb-0a7a-4765-ac54-186effebee8e" (UID: "4159adcb-0a7a-4765-ac54-186effebee8e"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:59:30 crc kubenswrapper[4986]: I1203 13:59:30.838582 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4159adcb-0a7a-4765-ac54-186effebee8e-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "4159adcb-0a7a-4765-ac54-186effebee8e" (UID: "4159adcb-0a7a-4765-ac54-186effebee8e"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:59:30 crc kubenswrapper[4986]: I1203 13:59:30.880653 4986 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/4159adcb-0a7a-4765-ac54-186effebee8e-ca-certs\") on node \"crc\" DevicePath \"\"" Dec 03 13:59:30 crc kubenswrapper[4986]: I1203 13:59:30.880686 4986 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/4159adcb-0a7a-4765-ac54-186effebee8e-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Dec 03 13:59:30 crc kubenswrapper[4986]: I1203 13:59:30.880697 4986 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4159adcb-0a7a-4765-ac54-186effebee8e-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Dec 03 13:59:30 crc kubenswrapper[4986]: I1203 13:59:30.880707 4986 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4159adcb-0a7a-4765-ac54-186effebee8e-config-data\") on node 
\"crc\" DevicePath \"\"" Dec 03 13:59:30 crc kubenswrapper[4986]: I1203 13:59:30.880716 4986 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4159adcb-0a7a-4765-ac54-186effebee8e-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 13:59:30 crc kubenswrapper[4986]: I1203 13:59:30.880724 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4qv5\" (UniqueName: \"kubernetes.io/projected/4159adcb-0a7a-4765-ac54-186effebee8e-kube-api-access-g4qv5\") on node \"crc\" DevicePath \"\"" Dec 03 13:59:30 crc kubenswrapper[4986]: I1203 13:59:30.880734 4986 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4159adcb-0a7a-4765-ac54-186effebee8e-openstack-config\") on node \"crc\" DevicePath \"\"" Dec 03 13:59:30 crc kubenswrapper[4986]: I1203 13:59:30.880770 4986 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Dec 03 13:59:30 crc kubenswrapper[4986]: I1203 13:59:30.901088 4986 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Dec 03 13:59:30 crc kubenswrapper[4986]: I1203 13:59:30.983086 4986 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Dec 03 13:59:31 crc kubenswrapper[4986]: I1203 13:59:31.231917 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"4159adcb-0a7a-4765-ac54-186effebee8e","Type":"ContainerDied","Data":"064547c1186df606640774d3213cf27d8bd21113e38f31ccb1c2ab464170f277"} Dec 03 13:59:31 crc kubenswrapper[4986]: I1203 13:59:31.232261 4986 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="064547c1186df606640774d3213cf27d8bd21113e38f31ccb1c2ab464170f277" Dec 03 13:59:31 crc kubenswrapper[4986]: I1203 13:59:31.232019 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 03 13:59:43 crc kubenswrapper[4986]: I1203 13:59:43.395367 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 03 13:59:43 crc kubenswrapper[4986]: E1203 13:59:43.396619 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87eefbf7-dc1a-41ea-9a63-818aea56e77f" containerName="extract-utilities" Dec 03 13:59:43 crc kubenswrapper[4986]: I1203 13:59:43.396639 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="87eefbf7-dc1a-41ea-9a63-818aea56e77f" containerName="extract-utilities" Dec 03 13:59:43 crc kubenswrapper[4986]: E1203 13:59:43.396654 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87eefbf7-dc1a-41ea-9a63-818aea56e77f" containerName="extract-content" Dec 03 13:59:43 crc kubenswrapper[4986]: I1203 13:59:43.396663 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="87eefbf7-dc1a-41ea-9a63-818aea56e77f" containerName="extract-content" Dec 03 13:59:43 crc kubenswrapper[4986]: E1203 13:59:43.396680 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87eefbf7-dc1a-41ea-9a63-818aea56e77f" containerName="registry-server" Dec 03 13:59:43 crc kubenswrapper[4986]: I1203 13:59:43.396687 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="87eefbf7-dc1a-41ea-9a63-818aea56e77f" containerName="registry-server" Dec 03 13:59:43 crc kubenswrapper[4986]: E1203 13:59:43.396712 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4159adcb-0a7a-4765-ac54-186effebee8e" containerName="tempest-tests-tempest-tests-runner" Dec 03 13:59:43 crc kubenswrapper[4986]: I1203 13:59:43.396721 4986 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="4159adcb-0a7a-4765-ac54-186effebee8e" containerName="tempest-tests-tempest-tests-runner" Dec 03 13:59:43 crc kubenswrapper[4986]: I1203 13:59:43.397023 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="4159adcb-0a7a-4765-ac54-186effebee8e" containerName="tempest-tests-tempest-tests-runner" Dec 03 13:59:43 crc kubenswrapper[4986]: I1203 13:59:43.397045 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="87eefbf7-dc1a-41ea-9a63-818aea56e77f" containerName="registry-server" Dec 03 13:59:43 crc kubenswrapper[4986]: I1203 13:59:43.397851 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 03 13:59:43 crc kubenswrapper[4986]: I1203 13:59:43.401417 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-lvd4x" Dec 03 13:59:43 crc kubenswrapper[4986]: I1203 13:59:43.412129 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 03 13:59:43 crc kubenswrapper[4986]: I1203 13:59:43.530179 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e8d1d8d5-6041-4e8a-bf89-d4811ef1a2bb\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 03 13:59:43 crc kubenswrapper[4986]: I1203 13:59:43.530328 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtngb\" (UniqueName: \"kubernetes.io/projected/e8d1d8d5-6041-4e8a-bf89-d4811ef1a2bb-kube-api-access-vtngb\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e8d1d8d5-6041-4e8a-bf89-d4811ef1a2bb\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 03 
13:59:43 crc kubenswrapper[4986]: I1203 13:59:43.632175 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtngb\" (UniqueName: \"kubernetes.io/projected/e8d1d8d5-6041-4e8a-bf89-d4811ef1a2bb-kube-api-access-vtngb\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e8d1d8d5-6041-4e8a-bf89-d4811ef1a2bb\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 03 13:59:43 crc kubenswrapper[4986]: I1203 13:59:43.632383 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e8d1d8d5-6041-4e8a-bf89-d4811ef1a2bb\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 03 13:59:43 crc kubenswrapper[4986]: I1203 13:59:43.632964 4986 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e8d1d8d5-6041-4e8a-bf89-d4811ef1a2bb\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 03 13:59:43 crc kubenswrapper[4986]: I1203 13:59:43.649703 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtngb\" (UniqueName: \"kubernetes.io/projected/e8d1d8d5-6041-4e8a-bf89-d4811ef1a2bb-kube-api-access-vtngb\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e8d1d8d5-6041-4e8a-bf89-d4811ef1a2bb\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 03 13:59:43 crc kubenswrapper[4986]: I1203 13:59:43.658988 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod 
\"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e8d1d8d5-6041-4e8a-bf89-d4811ef1a2bb\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 03 13:59:43 crc kubenswrapper[4986]: I1203 13:59:43.729964 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 03 13:59:43 crc kubenswrapper[4986]: I1203 13:59:43.943568 4986 scope.go:117] "RemoveContainer" containerID="f4aa16a78e6dc7399cb3f0bba100cc3ca3d479a66dc5b2e6c12ee775b73c7cc4" Dec 03 13:59:43 crc kubenswrapper[4986]: E1203 13:59:43.944536 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:59:44 crc kubenswrapper[4986]: I1203 13:59:44.224060 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 03 13:59:44 crc kubenswrapper[4986]: I1203 13:59:44.224833 4986 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 13:59:44 crc kubenswrapper[4986]: I1203 13:59:44.356194 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"e8d1d8d5-6041-4e8a-bf89-d4811ef1a2bb","Type":"ContainerStarted","Data":"9202bee42f35a1e733bd3114648f89696c93e3a5204b1a5ebaa50ffa9baa09e1"} Dec 03 13:59:45 crc kubenswrapper[4986]: I1203 13:59:45.378064 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" 
event={"ID":"e8d1d8d5-6041-4e8a-bf89-d4811ef1a2bb","Type":"ContainerStarted","Data":"d189eb6f514910889061905e52515dc49006b95f52e0cb19c402e749c30fd24b"} Dec 03 13:59:45 crc kubenswrapper[4986]: I1203 13:59:45.394732 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.530860327 podStartE2EDuration="2.394710312s" podCreationTimestamp="2025-12-03 13:59:43 +0000 UTC" firstStartedPulling="2025-12-03 13:59:44.224548025 +0000 UTC m=+3843.690979216" lastFinishedPulling="2025-12-03 13:59:45.08839801 +0000 UTC m=+3844.554829201" observedRunningTime="2025-12-03 13:59:45.390209421 +0000 UTC m=+3844.856640632" watchObservedRunningTime="2025-12-03 13:59:45.394710312 +0000 UTC m=+3844.861141503" Dec 03 13:59:56 crc kubenswrapper[4986]: I1203 13:59:56.943324 4986 scope.go:117] "RemoveContainer" containerID="f4aa16a78e6dc7399cb3f0bba100cc3ca3d479a66dc5b2e6c12ee775b73c7cc4" Dec 03 13:59:56 crc kubenswrapper[4986]: E1203 13:59:56.944141 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 13:59:58 crc kubenswrapper[4986]: I1203 13:59:58.263666 4986 scope.go:117] "RemoveContainer" containerID="e692d45fe624986aa8a2c814d699164199c3cc6b92157e898694a95130caf921" Dec 03 13:59:58 crc kubenswrapper[4986]: I1203 13:59:58.292085 4986 scope.go:117] "RemoveContainer" containerID="0429c44fc9475a1e4f0d6d1967123e277b3c9db0b28f3a639ea900bb4531d43c" Dec 03 13:59:58 crc kubenswrapper[4986]: I1203 13:59:58.338623 4986 scope.go:117] "RemoveContainer" 
containerID="b389c9db139e9cdf93dddcef6ad5cd0ad8d82d97f5ddae08a24eb82de329b227" Dec 03 14:00:00 crc kubenswrapper[4986]: I1203 14:00:00.189917 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412840-dzpkw"] Dec 03 14:00:00 crc kubenswrapper[4986]: I1203 14:00:00.192385 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412840-dzpkw" Dec 03 14:00:00 crc kubenswrapper[4986]: I1203 14:00:00.194399 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 03 14:00:00 crc kubenswrapper[4986]: I1203 14:00:00.194602 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 03 14:00:00 crc kubenswrapper[4986]: I1203 14:00:00.206529 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412840-dzpkw"] Dec 03 14:00:00 crc kubenswrapper[4986]: I1203 14:00:00.385782 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l22s2\" (UniqueName: \"kubernetes.io/projected/c0e071d4-4519-43e5-a31d-be19793c127d-kube-api-access-l22s2\") pod \"collect-profiles-29412840-dzpkw\" (UID: \"c0e071d4-4519-43e5-a31d-be19793c127d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412840-dzpkw" Dec 03 14:00:00 crc kubenswrapper[4986]: I1203 14:00:00.386124 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c0e071d4-4519-43e5-a31d-be19793c127d-secret-volume\") pod \"collect-profiles-29412840-dzpkw\" (UID: \"c0e071d4-4519-43e5-a31d-be19793c127d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412840-dzpkw" Dec 03 14:00:00 crc 
kubenswrapper[4986]: I1203 14:00:00.386206 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c0e071d4-4519-43e5-a31d-be19793c127d-config-volume\") pod \"collect-profiles-29412840-dzpkw\" (UID: \"c0e071d4-4519-43e5-a31d-be19793c127d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412840-dzpkw" Dec 03 14:00:00 crc kubenswrapper[4986]: I1203 14:00:00.487821 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l22s2\" (UniqueName: \"kubernetes.io/projected/c0e071d4-4519-43e5-a31d-be19793c127d-kube-api-access-l22s2\") pod \"collect-profiles-29412840-dzpkw\" (UID: \"c0e071d4-4519-43e5-a31d-be19793c127d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412840-dzpkw" Dec 03 14:00:00 crc kubenswrapper[4986]: I1203 14:00:00.487865 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c0e071d4-4519-43e5-a31d-be19793c127d-secret-volume\") pod \"collect-profiles-29412840-dzpkw\" (UID: \"c0e071d4-4519-43e5-a31d-be19793c127d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412840-dzpkw" Dec 03 14:00:00 crc kubenswrapper[4986]: I1203 14:00:00.487940 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c0e071d4-4519-43e5-a31d-be19793c127d-config-volume\") pod \"collect-profiles-29412840-dzpkw\" (UID: \"c0e071d4-4519-43e5-a31d-be19793c127d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412840-dzpkw" Dec 03 14:00:00 crc kubenswrapper[4986]: I1203 14:00:00.488873 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c0e071d4-4519-43e5-a31d-be19793c127d-config-volume\") pod \"collect-profiles-29412840-dzpkw\" (UID: 
\"c0e071d4-4519-43e5-a31d-be19793c127d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412840-dzpkw" Dec 03 14:00:00 crc kubenswrapper[4986]: I1203 14:00:00.495037 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c0e071d4-4519-43e5-a31d-be19793c127d-secret-volume\") pod \"collect-profiles-29412840-dzpkw\" (UID: \"c0e071d4-4519-43e5-a31d-be19793c127d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412840-dzpkw" Dec 03 14:00:00 crc kubenswrapper[4986]: I1203 14:00:00.504044 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l22s2\" (UniqueName: \"kubernetes.io/projected/c0e071d4-4519-43e5-a31d-be19793c127d-kube-api-access-l22s2\") pod \"collect-profiles-29412840-dzpkw\" (UID: \"c0e071d4-4519-43e5-a31d-be19793c127d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412840-dzpkw" Dec 03 14:00:00 crc kubenswrapper[4986]: I1203 14:00:00.516221 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412840-dzpkw" Dec 03 14:00:00 crc kubenswrapper[4986]: I1203 14:00:00.978452 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412840-dzpkw"] Dec 03 14:00:01 crc kubenswrapper[4986]: I1203 14:00:01.536468 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412840-dzpkw" event={"ID":"c0e071d4-4519-43e5-a31d-be19793c127d","Type":"ContainerStarted","Data":"cd37e490bad2efa42bac553bf1b04ee85bc0b4be7521b9d8814a12eeee6b4b31"} Dec 03 14:00:01 crc kubenswrapper[4986]: I1203 14:00:01.536787 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412840-dzpkw" event={"ID":"c0e071d4-4519-43e5-a31d-be19793c127d","Type":"ContainerStarted","Data":"af60b931591d1d5eee60d2583eb0bcd2d1862407fb047e03a8289c7adcaf1adc"} Dec 03 14:00:01 crc kubenswrapper[4986]: I1203 14:00:01.561226 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29412840-dzpkw" podStartSLOduration=1.561207858 podStartE2EDuration="1.561207858s" podCreationTimestamp="2025-12-03 14:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 14:00:01.555320458 +0000 UTC m=+3861.021751669" watchObservedRunningTime="2025-12-03 14:00:01.561207858 +0000 UTC m=+3861.027639049" Dec 03 14:00:02 crc kubenswrapper[4986]: I1203 14:00:02.551265 4986 generic.go:334] "Generic (PLEG): container finished" podID="c0e071d4-4519-43e5-a31d-be19793c127d" containerID="cd37e490bad2efa42bac553bf1b04ee85bc0b4be7521b9d8814a12eeee6b4b31" exitCode=0 Dec 03 14:00:02 crc kubenswrapper[4986]: I1203 14:00:02.551421 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29412840-dzpkw" event={"ID":"c0e071d4-4519-43e5-a31d-be19793c127d","Type":"ContainerDied","Data":"cd37e490bad2efa42bac553bf1b04ee85bc0b4be7521b9d8814a12eeee6b4b31"} Dec 03 14:00:03 crc kubenswrapper[4986]: I1203 14:00:03.875983 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412840-dzpkw" Dec 03 14:00:03 crc kubenswrapper[4986]: I1203 14:00:03.956002 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l22s2\" (UniqueName: \"kubernetes.io/projected/c0e071d4-4519-43e5-a31d-be19793c127d-kube-api-access-l22s2\") pod \"c0e071d4-4519-43e5-a31d-be19793c127d\" (UID: \"c0e071d4-4519-43e5-a31d-be19793c127d\") " Dec 03 14:00:03 crc kubenswrapper[4986]: I1203 14:00:03.956094 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c0e071d4-4519-43e5-a31d-be19793c127d-secret-volume\") pod \"c0e071d4-4519-43e5-a31d-be19793c127d\" (UID: \"c0e071d4-4519-43e5-a31d-be19793c127d\") " Dec 03 14:00:03 crc kubenswrapper[4986]: I1203 14:00:03.956140 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c0e071d4-4519-43e5-a31d-be19793c127d-config-volume\") pod \"c0e071d4-4519-43e5-a31d-be19793c127d\" (UID: \"c0e071d4-4519-43e5-a31d-be19793c127d\") " Dec 03 14:00:03 crc kubenswrapper[4986]: I1203 14:00:03.957585 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0e071d4-4519-43e5-a31d-be19793c127d-config-volume" (OuterVolumeSpecName: "config-volume") pod "c0e071d4-4519-43e5-a31d-be19793c127d" (UID: "c0e071d4-4519-43e5-a31d-be19793c127d"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 14:00:03 crc kubenswrapper[4986]: I1203 14:00:03.962529 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0e071d4-4519-43e5-a31d-be19793c127d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c0e071d4-4519-43e5-a31d-be19793c127d" (UID: "c0e071d4-4519-43e5-a31d-be19793c127d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 14:00:03 crc kubenswrapper[4986]: I1203 14:00:03.964749 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0e071d4-4519-43e5-a31d-be19793c127d-kube-api-access-l22s2" (OuterVolumeSpecName: "kube-api-access-l22s2") pod "c0e071d4-4519-43e5-a31d-be19793c127d" (UID: "c0e071d4-4519-43e5-a31d-be19793c127d"). InnerVolumeSpecName "kube-api-access-l22s2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 14:00:04 crc kubenswrapper[4986]: I1203 14:00:04.057612 4986 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c0e071d4-4519-43e5-a31d-be19793c127d-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 03 14:00:04 crc kubenswrapper[4986]: I1203 14:00:04.057645 4986 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c0e071d4-4519-43e5-a31d-be19793c127d-config-volume\") on node \"crc\" DevicePath \"\"" Dec 03 14:00:04 crc kubenswrapper[4986]: I1203 14:00:04.057654 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l22s2\" (UniqueName: \"kubernetes.io/projected/c0e071d4-4519-43e5-a31d-be19793c127d-kube-api-access-l22s2\") on node \"crc\" DevicePath \"\"" Dec 03 14:00:04 crc kubenswrapper[4986]: I1203 14:00:04.573918 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412840-dzpkw" 
event={"ID":"c0e071d4-4519-43e5-a31d-be19793c127d","Type":"ContainerDied","Data":"af60b931591d1d5eee60d2583eb0bcd2d1862407fb047e03a8289c7adcaf1adc"} Dec 03 14:00:04 crc kubenswrapper[4986]: I1203 14:00:04.574536 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af60b931591d1d5eee60d2583eb0bcd2d1862407fb047e03a8289c7adcaf1adc" Dec 03 14:00:04 crc kubenswrapper[4986]: I1203 14:00:04.573984 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412840-dzpkw" Dec 03 14:00:04 crc kubenswrapper[4986]: I1203 14:00:04.625932 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412795-l86x6"] Dec 03 14:00:04 crc kubenswrapper[4986]: I1203 14:00:04.633732 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412795-l86x6"] Dec 03 14:00:04 crc kubenswrapper[4986]: I1203 14:00:04.956230 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd86e79f-37fe-4393-8bd8-28a14d6d5537" path="/var/lib/kubelet/pods/dd86e79f-37fe-4393-8bd8-28a14d6d5537/volumes" Dec 03 14:00:07 crc kubenswrapper[4986]: I1203 14:00:07.943216 4986 scope.go:117] "RemoveContainer" containerID="f4aa16a78e6dc7399cb3f0bba100cc3ca3d479a66dc5b2e6c12ee775b73c7cc4" Dec 03 14:00:07 crc kubenswrapper[4986]: E1203 14:00:07.943763 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 14:00:08 crc kubenswrapper[4986]: I1203 14:00:08.461750 4986 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-must-gather-6lpvf/must-gather-jtw4t"] Dec 03 14:00:08 crc kubenswrapper[4986]: E1203 14:00:08.462526 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0e071d4-4519-43e5-a31d-be19793c127d" containerName="collect-profiles" Dec 03 14:00:08 crc kubenswrapper[4986]: I1203 14:00:08.462614 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0e071d4-4519-43e5-a31d-be19793c127d" containerName="collect-profiles" Dec 03 14:00:08 crc kubenswrapper[4986]: I1203 14:00:08.462942 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0e071d4-4519-43e5-a31d-be19793c127d" containerName="collect-profiles" Dec 03 14:00:08 crc kubenswrapper[4986]: I1203 14:00:08.464256 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6lpvf/must-gather-jtw4t" Dec 03 14:00:08 crc kubenswrapper[4986]: I1203 14:00:08.466896 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-6lpvf"/"openshift-service-ca.crt" Dec 03 14:00:08 crc kubenswrapper[4986]: I1203 14:00:08.467321 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-6lpvf"/"default-dockercfg-xf776" Dec 03 14:00:08 crc kubenswrapper[4986]: I1203 14:00:08.469427 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-6lpvf"/"kube-root-ca.crt" Dec 03 14:00:08 crc kubenswrapper[4986]: I1203 14:00:08.487906 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-6lpvf/must-gather-jtw4t"] Dec 03 14:00:08 crc kubenswrapper[4986]: I1203 14:00:08.564047 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9ppp\" (UniqueName: \"kubernetes.io/projected/deff12eb-bb21-44aa-bd09-0b8401893ea4-kube-api-access-n9ppp\") pod \"must-gather-jtw4t\" (UID: \"deff12eb-bb21-44aa-bd09-0b8401893ea4\") " 
pod="openshift-must-gather-6lpvf/must-gather-jtw4t" Dec 03 14:00:08 crc kubenswrapper[4986]: I1203 14:00:08.564684 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/deff12eb-bb21-44aa-bd09-0b8401893ea4-must-gather-output\") pod \"must-gather-jtw4t\" (UID: \"deff12eb-bb21-44aa-bd09-0b8401893ea4\") " pod="openshift-must-gather-6lpvf/must-gather-jtw4t" Dec 03 14:00:08 crc kubenswrapper[4986]: I1203 14:00:08.666820 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9ppp\" (UniqueName: \"kubernetes.io/projected/deff12eb-bb21-44aa-bd09-0b8401893ea4-kube-api-access-n9ppp\") pod \"must-gather-jtw4t\" (UID: \"deff12eb-bb21-44aa-bd09-0b8401893ea4\") " pod="openshift-must-gather-6lpvf/must-gather-jtw4t" Dec 03 14:00:08 crc kubenswrapper[4986]: I1203 14:00:08.666895 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/deff12eb-bb21-44aa-bd09-0b8401893ea4-must-gather-output\") pod \"must-gather-jtw4t\" (UID: \"deff12eb-bb21-44aa-bd09-0b8401893ea4\") " pod="openshift-must-gather-6lpvf/must-gather-jtw4t" Dec 03 14:00:08 crc kubenswrapper[4986]: I1203 14:00:08.667387 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/deff12eb-bb21-44aa-bd09-0b8401893ea4-must-gather-output\") pod \"must-gather-jtw4t\" (UID: \"deff12eb-bb21-44aa-bd09-0b8401893ea4\") " pod="openshift-must-gather-6lpvf/must-gather-jtw4t" Dec 03 14:00:08 crc kubenswrapper[4986]: I1203 14:00:08.690104 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9ppp\" (UniqueName: \"kubernetes.io/projected/deff12eb-bb21-44aa-bd09-0b8401893ea4-kube-api-access-n9ppp\") pod \"must-gather-jtw4t\" (UID: \"deff12eb-bb21-44aa-bd09-0b8401893ea4\") " 
pod="openshift-must-gather-6lpvf/must-gather-jtw4t" Dec 03 14:00:08 crc kubenswrapper[4986]: I1203 14:00:08.789832 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6lpvf/must-gather-jtw4t" Dec 03 14:00:09 crc kubenswrapper[4986]: I1203 14:00:09.266930 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-6lpvf/must-gather-jtw4t"] Dec 03 14:00:09 crc kubenswrapper[4986]: I1203 14:00:09.629838 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6lpvf/must-gather-jtw4t" event={"ID":"deff12eb-bb21-44aa-bd09-0b8401893ea4","Type":"ContainerStarted","Data":"888a75405a69d5dd72355131a59cb6a058f11cf1e7555c1cf033b495fd5eccf9"} Dec 03 14:00:13 crc kubenswrapper[4986]: I1203 14:00:13.662790 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6lpvf/must-gather-jtw4t" event={"ID":"deff12eb-bb21-44aa-bd09-0b8401893ea4","Type":"ContainerStarted","Data":"b3a62ad32323604fbf6ab93f239ce5dc0481309e2f6e7eb63523489e3394988e"} Dec 03 14:00:13 crc kubenswrapper[4986]: I1203 14:00:13.663160 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6lpvf/must-gather-jtw4t" event={"ID":"deff12eb-bb21-44aa-bd09-0b8401893ea4","Type":"ContainerStarted","Data":"429cac0520aa52a5fe9949f74e500cfd02526e983ce18467641ed01fe90f309b"} Dec 03 14:00:13 crc kubenswrapper[4986]: I1203 14:00:13.682131 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-6lpvf/must-gather-jtw4t" podStartSLOduration=2.030890127 podStartE2EDuration="5.682110687s" podCreationTimestamp="2025-12-03 14:00:08 +0000 UTC" firstStartedPulling="2025-12-03 14:00:09.282128377 +0000 UTC m=+3868.748559568" lastFinishedPulling="2025-12-03 14:00:12.933348947 +0000 UTC m=+3872.399780128" observedRunningTime="2025-12-03 14:00:13.679960359 +0000 UTC m=+3873.146391540" watchObservedRunningTime="2025-12-03 14:00:13.682110687 +0000 UTC 
m=+3873.148541878" Dec 03 14:00:15 crc kubenswrapper[4986]: E1203 14:00:15.218083 4986 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.112:33750->38.129.56.112:43629: write tcp 38.129.56.112:33750->38.129.56.112:43629: write: connection reset by peer Dec 03 14:00:16 crc kubenswrapper[4986]: I1203 14:00:16.655617 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-6lpvf/crc-debug-tcgr8"] Dec 03 14:00:16 crc kubenswrapper[4986]: I1203 14:00:16.657894 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6lpvf/crc-debug-tcgr8" Dec 03 14:00:16 crc kubenswrapper[4986]: I1203 14:00:16.810140 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/00dfe4ba-c856-4daf-97ee-4ce094f34d1c-host\") pod \"crc-debug-tcgr8\" (UID: \"00dfe4ba-c856-4daf-97ee-4ce094f34d1c\") " pod="openshift-must-gather-6lpvf/crc-debug-tcgr8" Dec 03 14:00:16 crc kubenswrapper[4986]: I1203 14:00:16.810642 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vvzh\" (UniqueName: \"kubernetes.io/projected/00dfe4ba-c856-4daf-97ee-4ce094f34d1c-kube-api-access-5vvzh\") pod \"crc-debug-tcgr8\" (UID: \"00dfe4ba-c856-4daf-97ee-4ce094f34d1c\") " pod="openshift-must-gather-6lpvf/crc-debug-tcgr8" Dec 03 14:00:16 crc kubenswrapper[4986]: I1203 14:00:16.912563 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/00dfe4ba-c856-4daf-97ee-4ce094f34d1c-host\") pod \"crc-debug-tcgr8\" (UID: \"00dfe4ba-c856-4daf-97ee-4ce094f34d1c\") " pod="openshift-must-gather-6lpvf/crc-debug-tcgr8" Dec 03 14:00:16 crc kubenswrapper[4986]: I1203 14:00:16.912698 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vvzh\" (UniqueName: 
\"kubernetes.io/projected/00dfe4ba-c856-4daf-97ee-4ce094f34d1c-kube-api-access-5vvzh\") pod \"crc-debug-tcgr8\" (UID: \"00dfe4ba-c856-4daf-97ee-4ce094f34d1c\") " pod="openshift-must-gather-6lpvf/crc-debug-tcgr8" Dec 03 14:00:16 crc kubenswrapper[4986]: I1203 14:00:16.913043 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/00dfe4ba-c856-4daf-97ee-4ce094f34d1c-host\") pod \"crc-debug-tcgr8\" (UID: \"00dfe4ba-c856-4daf-97ee-4ce094f34d1c\") " pod="openshift-must-gather-6lpvf/crc-debug-tcgr8" Dec 03 14:00:16 crc kubenswrapper[4986]: I1203 14:00:16.948399 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vvzh\" (UniqueName: \"kubernetes.io/projected/00dfe4ba-c856-4daf-97ee-4ce094f34d1c-kube-api-access-5vvzh\") pod \"crc-debug-tcgr8\" (UID: \"00dfe4ba-c856-4daf-97ee-4ce094f34d1c\") " pod="openshift-must-gather-6lpvf/crc-debug-tcgr8" Dec 03 14:00:16 crc kubenswrapper[4986]: I1203 14:00:16.978378 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6lpvf/crc-debug-tcgr8" Dec 03 14:00:17 crc kubenswrapper[4986]: I1203 14:00:17.699472 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6lpvf/crc-debug-tcgr8" event={"ID":"00dfe4ba-c856-4daf-97ee-4ce094f34d1c","Type":"ContainerStarted","Data":"6823625d61f2902228b5e26145307547ee4b19ec6e3b8aaa33bb4e1f608db807"} Dec 03 14:00:18 crc kubenswrapper[4986]: I1203 14:00:18.944756 4986 scope.go:117] "RemoveContainer" containerID="f4aa16a78e6dc7399cb3f0bba100cc3ca3d479a66dc5b2e6c12ee775b73c7cc4" Dec 03 14:00:18 crc kubenswrapper[4986]: E1203 14:00:18.945313 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 14:00:28 crc kubenswrapper[4986]: I1203 14:00:28.823690 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6lpvf/crc-debug-tcgr8" event={"ID":"00dfe4ba-c856-4daf-97ee-4ce094f34d1c","Type":"ContainerStarted","Data":"235aea7a6e3e2a5f7f14f3f53f94abdace508b38c0d922fea5e38c18052e1bf2"} Dec 03 14:00:28 crc kubenswrapper[4986]: I1203 14:00:28.851097 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-6lpvf/crc-debug-tcgr8" podStartSLOduration=1.963617 podStartE2EDuration="12.851079639s" podCreationTimestamp="2025-12-03 14:00:16 +0000 UTC" firstStartedPulling="2025-12-03 14:00:17.020446527 +0000 UTC m=+3876.486877718" lastFinishedPulling="2025-12-03 14:00:27.907909166 +0000 UTC m=+3887.374340357" observedRunningTime="2025-12-03 14:00:28.84779356 +0000 UTC m=+3888.314224761" watchObservedRunningTime="2025-12-03 14:00:28.851079639 +0000 UTC 
m=+3888.317510830" Dec 03 14:00:29 crc kubenswrapper[4986]: I1203 14:00:29.943652 4986 scope.go:117] "RemoveContainer" containerID="f4aa16a78e6dc7399cb3f0bba100cc3ca3d479a66dc5b2e6c12ee775b73c7cc4" Dec 03 14:00:29 crc kubenswrapper[4986]: E1203 14:00:29.944358 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 14:00:42 crc kubenswrapper[4986]: I1203 14:00:42.953909 4986 scope.go:117] "RemoveContainer" containerID="f4aa16a78e6dc7399cb3f0bba100cc3ca3d479a66dc5b2e6c12ee775b73c7cc4" Dec 03 14:00:42 crc kubenswrapper[4986]: E1203 14:00:42.954908 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 14:00:53 crc kubenswrapper[4986]: I1203 14:00:53.943802 4986 scope.go:117] "RemoveContainer" containerID="f4aa16a78e6dc7399cb3f0bba100cc3ca3d479a66dc5b2e6c12ee775b73c7cc4" Dec 03 14:00:53 crc kubenswrapper[4986]: E1203 14:00:53.944810 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" 
podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 14:00:58 crc kubenswrapper[4986]: I1203 14:00:58.409412 4986 scope.go:117] "RemoveContainer" containerID="0ece90572c8bcfc6111a92ea297f6581fb5268fbe4e96f263b4af737c4687b04" Dec 03 14:01:00 crc kubenswrapper[4986]: I1203 14:01:00.154383 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29412841-m7k2b"] Dec 03 14:01:00 crc kubenswrapper[4986]: I1203 14:01:00.156429 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29412841-m7k2b" Dec 03 14:01:00 crc kubenswrapper[4986]: I1203 14:01:00.165922 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29412841-m7k2b"] Dec 03 14:01:00 crc kubenswrapper[4986]: I1203 14:01:00.273952 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c12d1f52-dc0d-4deb-9f05-089d1a21267c-fernet-keys\") pod \"keystone-cron-29412841-m7k2b\" (UID: \"c12d1f52-dc0d-4deb-9f05-089d1a21267c\") " pod="openstack/keystone-cron-29412841-m7k2b" Dec 03 14:01:00 crc kubenswrapper[4986]: I1203 14:01:00.274273 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c12d1f52-dc0d-4deb-9f05-089d1a21267c-config-data\") pod \"keystone-cron-29412841-m7k2b\" (UID: \"c12d1f52-dc0d-4deb-9f05-089d1a21267c\") " pod="openstack/keystone-cron-29412841-m7k2b" Dec 03 14:01:00 crc kubenswrapper[4986]: I1203 14:01:00.274457 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62xkb\" (UniqueName: \"kubernetes.io/projected/c12d1f52-dc0d-4deb-9f05-089d1a21267c-kube-api-access-62xkb\") pod \"keystone-cron-29412841-m7k2b\" (UID: \"c12d1f52-dc0d-4deb-9f05-089d1a21267c\") " pod="openstack/keystone-cron-29412841-m7k2b" Dec 03 14:01:00 crc kubenswrapper[4986]: 
I1203 14:01:00.274599 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c12d1f52-dc0d-4deb-9f05-089d1a21267c-combined-ca-bundle\") pod \"keystone-cron-29412841-m7k2b\" (UID: \"c12d1f52-dc0d-4deb-9f05-089d1a21267c\") " pod="openstack/keystone-cron-29412841-m7k2b" Dec 03 14:01:00 crc kubenswrapper[4986]: I1203 14:01:00.375996 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62xkb\" (UniqueName: \"kubernetes.io/projected/c12d1f52-dc0d-4deb-9f05-089d1a21267c-kube-api-access-62xkb\") pod \"keystone-cron-29412841-m7k2b\" (UID: \"c12d1f52-dc0d-4deb-9f05-089d1a21267c\") " pod="openstack/keystone-cron-29412841-m7k2b" Dec 03 14:01:00 crc kubenswrapper[4986]: I1203 14:01:00.376097 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c12d1f52-dc0d-4deb-9f05-089d1a21267c-combined-ca-bundle\") pod \"keystone-cron-29412841-m7k2b\" (UID: \"c12d1f52-dc0d-4deb-9f05-089d1a21267c\") " pod="openstack/keystone-cron-29412841-m7k2b" Dec 03 14:01:00 crc kubenswrapper[4986]: I1203 14:01:00.376163 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c12d1f52-dc0d-4deb-9f05-089d1a21267c-fernet-keys\") pod \"keystone-cron-29412841-m7k2b\" (UID: \"c12d1f52-dc0d-4deb-9f05-089d1a21267c\") " pod="openstack/keystone-cron-29412841-m7k2b" Dec 03 14:01:00 crc kubenswrapper[4986]: I1203 14:01:00.376302 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c12d1f52-dc0d-4deb-9f05-089d1a21267c-config-data\") pod \"keystone-cron-29412841-m7k2b\" (UID: \"c12d1f52-dc0d-4deb-9f05-089d1a21267c\") " pod="openstack/keystone-cron-29412841-m7k2b" Dec 03 14:01:00 crc kubenswrapper[4986]: I1203 14:01:00.381938 4986 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c12d1f52-dc0d-4deb-9f05-089d1a21267c-fernet-keys\") pod \"keystone-cron-29412841-m7k2b\" (UID: \"c12d1f52-dc0d-4deb-9f05-089d1a21267c\") " pod="openstack/keystone-cron-29412841-m7k2b" Dec 03 14:01:00 crc kubenswrapper[4986]: I1203 14:01:00.382678 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c12d1f52-dc0d-4deb-9f05-089d1a21267c-combined-ca-bundle\") pod \"keystone-cron-29412841-m7k2b\" (UID: \"c12d1f52-dc0d-4deb-9f05-089d1a21267c\") " pod="openstack/keystone-cron-29412841-m7k2b" Dec 03 14:01:00 crc kubenswrapper[4986]: I1203 14:01:00.385861 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c12d1f52-dc0d-4deb-9f05-089d1a21267c-config-data\") pod \"keystone-cron-29412841-m7k2b\" (UID: \"c12d1f52-dc0d-4deb-9f05-089d1a21267c\") " pod="openstack/keystone-cron-29412841-m7k2b" Dec 03 14:01:00 crc kubenswrapper[4986]: I1203 14:01:00.397160 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62xkb\" (UniqueName: \"kubernetes.io/projected/c12d1f52-dc0d-4deb-9f05-089d1a21267c-kube-api-access-62xkb\") pod \"keystone-cron-29412841-m7k2b\" (UID: \"c12d1f52-dc0d-4deb-9f05-089d1a21267c\") " pod="openstack/keystone-cron-29412841-m7k2b" Dec 03 14:01:00 crc kubenswrapper[4986]: I1203 14:01:00.477788 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29412841-m7k2b" Dec 03 14:01:01 crc kubenswrapper[4986]: I1203 14:01:01.084588 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29412841-m7k2b"] Dec 03 14:01:01 crc kubenswrapper[4986]: I1203 14:01:01.125253 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29412841-m7k2b" event={"ID":"c12d1f52-dc0d-4deb-9f05-089d1a21267c","Type":"ContainerStarted","Data":"259e9eedd791f61b53d577ffcbc049d66f3adbd73458a30b20c21f9407f94541"} Dec 03 14:01:02 crc kubenswrapper[4986]: I1203 14:01:02.136922 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29412841-m7k2b" event={"ID":"c12d1f52-dc0d-4deb-9f05-089d1a21267c","Type":"ContainerStarted","Data":"9fd46a048b5b6601e33ba9c425dbb7c979581f5ab696dd60d8f08fb5e932613a"} Dec 03 14:01:02 crc kubenswrapper[4986]: I1203 14:01:02.163862 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29412841-m7k2b" podStartSLOduration=2.1638288709999998 podStartE2EDuration="2.163828871s" podCreationTimestamp="2025-12-03 14:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 14:01:02.156543754 +0000 UTC m=+3921.622974965" watchObservedRunningTime="2025-12-03 14:01:02.163828871 +0000 UTC m=+3921.630260062" Dec 03 14:01:04 crc kubenswrapper[4986]: I1203 14:01:04.943182 4986 scope.go:117] "RemoveContainer" containerID="f4aa16a78e6dc7399cb3f0bba100cc3ca3d479a66dc5b2e6c12ee775b73c7cc4" Dec 03 14:01:04 crc kubenswrapper[4986]: E1203 14:01:04.944040 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 14:01:05 crc kubenswrapper[4986]: I1203 14:01:05.175164 4986 generic.go:334] "Generic (PLEG): container finished" podID="c12d1f52-dc0d-4deb-9f05-089d1a21267c" containerID="9fd46a048b5b6601e33ba9c425dbb7c979581f5ab696dd60d8f08fb5e932613a" exitCode=0 Dec 03 14:01:05 crc kubenswrapper[4986]: I1203 14:01:05.175212 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29412841-m7k2b" event={"ID":"c12d1f52-dc0d-4deb-9f05-089d1a21267c","Type":"ContainerDied","Data":"9fd46a048b5b6601e33ba9c425dbb7c979581f5ab696dd60d8f08fb5e932613a"} Dec 03 14:01:06 crc kubenswrapper[4986]: I1203 14:01:06.668607 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29412841-m7k2b" Dec 03 14:01:06 crc kubenswrapper[4986]: I1203 14:01:06.815937 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c12d1f52-dc0d-4deb-9f05-089d1a21267c-fernet-keys\") pod \"c12d1f52-dc0d-4deb-9f05-089d1a21267c\" (UID: \"c12d1f52-dc0d-4deb-9f05-089d1a21267c\") " Dec 03 14:01:06 crc kubenswrapper[4986]: I1203 14:01:06.815982 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c12d1f52-dc0d-4deb-9f05-089d1a21267c-config-data\") pod \"c12d1f52-dc0d-4deb-9f05-089d1a21267c\" (UID: \"c12d1f52-dc0d-4deb-9f05-089d1a21267c\") " Dec 03 14:01:06 crc kubenswrapper[4986]: I1203 14:01:06.816107 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62xkb\" (UniqueName: \"kubernetes.io/projected/c12d1f52-dc0d-4deb-9f05-089d1a21267c-kube-api-access-62xkb\") pod \"c12d1f52-dc0d-4deb-9f05-089d1a21267c\" (UID: \"c12d1f52-dc0d-4deb-9f05-089d1a21267c\") " Dec 03 14:01:06 crc kubenswrapper[4986]: I1203 14:01:06.816135 4986 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c12d1f52-dc0d-4deb-9f05-089d1a21267c-combined-ca-bundle\") pod \"c12d1f52-dc0d-4deb-9f05-089d1a21267c\" (UID: \"c12d1f52-dc0d-4deb-9f05-089d1a21267c\") " Dec 03 14:01:06 crc kubenswrapper[4986]: I1203 14:01:06.824187 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c12d1f52-dc0d-4deb-9f05-089d1a21267c-kube-api-access-62xkb" (OuterVolumeSpecName: "kube-api-access-62xkb") pod "c12d1f52-dc0d-4deb-9f05-089d1a21267c" (UID: "c12d1f52-dc0d-4deb-9f05-089d1a21267c"). InnerVolumeSpecName "kube-api-access-62xkb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 14:01:06 crc kubenswrapper[4986]: I1203 14:01:06.831618 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c12d1f52-dc0d-4deb-9f05-089d1a21267c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c12d1f52-dc0d-4deb-9f05-089d1a21267c" (UID: "c12d1f52-dc0d-4deb-9f05-089d1a21267c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 14:01:06 crc kubenswrapper[4986]: I1203 14:01:06.864106 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c12d1f52-dc0d-4deb-9f05-089d1a21267c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c12d1f52-dc0d-4deb-9f05-089d1a21267c" (UID: "c12d1f52-dc0d-4deb-9f05-089d1a21267c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 14:01:06 crc kubenswrapper[4986]: I1203 14:01:06.911431 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c12d1f52-dc0d-4deb-9f05-089d1a21267c-config-data" (OuterVolumeSpecName: "config-data") pod "c12d1f52-dc0d-4deb-9f05-089d1a21267c" (UID: "c12d1f52-dc0d-4deb-9f05-089d1a21267c"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 14:01:06 crc kubenswrapper[4986]: I1203 14:01:06.918674 4986 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c12d1f52-dc0d-4deb-9f05-089d1a21267c-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 03 14:01:06 crc kubenswrapper[4986]: I1203 14:01:06.918723 4986 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c12d1f52-dc0d-4deb-9f05-089d1a21267c-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 14:01:06 crc kubenswrapper[4986]: I1203 14:01:06.918733 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62xkb\" (UniqueName: \"kubernetes.io/projected/c12d1f52-dc0d-4deb-9f05-089d1a21267c-kube-api-access-62xkb\") on node \"crc\" DevicePath \"\"" Dec 03 14:01:06 crc kubenswrapper[4986]: I1203 14:01:06.918742 4986 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c12d1f52-dc0d-4deb-9f05-089d1a21267c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 14:01:07 crc kubenswrapper[4986]: I1203 14:01:07.193124 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29412841-m7k2b" event={"ID":"c12d1f52-dc0d-4deb-9f05-089d1a21267c","Type":"ContainerDied","Data":"259e9eedd791f61b53d577ffcbc049d66f3adbd73458a30b20c21f9407f94541"} Dec 03 14:01:07 crc kubenswrapper[4986]: I1203 14:01:07.193483 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="259e9eedd791f61b53d577ffcbc049d66f3adbd73458a30b20c21f9407f94541" Dec 03 14:01:07 crc kubenswrapper[4986]: I1203 14:01:07.193180 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29412841-m7k2b" Dec 03 14:01:12 crc kubenswrapper[4986]: I1203 14:01:12.251599 4986 generic.go:334] "Generic (PLEG): container finished" podID="00dfe4ba-c856-4daf-97ee-4ce094f34d1c" containerID="235aea7a6e3e2a5f7f14f3f53f94abdace508b38c0d922fea5e38c18052e1bf2" exitCode=0 Dec 03 14:01:12 crc kubenswrapper[4986]: I1203 14:01:12.251676 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6lpvf/crc-debug-tcgr8" event={"ID":"00dfe4ba-c856-4daf-97ee-4ce094f34d1c","Type":"ContainerDied","Data":"235aea7a6e3e2a5f7f14f3f53f94abdace508b38c0d922fea5e38c18052e1bf2"} Dec 03 14:01:13 crc kubenswrapper[4986]: I1203 14:01:13.385423 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6lpvf/crc-debug-tcgr8" Dec 03 14:01:13 crc kubenswrapper[4986]: I1203 14:01:13.417398 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-6lpvf/crc-debug-tcgr8"] Dec 03 14:01:13 crc kubenswrapper[4986]: I1203 14:01:13.428066 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-6lpvf/crc-debug-tcgr8"] Dec 03 14:01:13 crc kubenswrapper[4986]: I1203 14:01:13.436856 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/00dfe4ba-c856-4daf-97ee-4ce094f34d1c-host\") pod \"00dfe4ba-c856-4daf-97ee-4ce094f34d1c\" (UID: \"00dfe4ba-c856-4daf-97ee-4ce094f34d1c\") " Dec 03 14:01:13 crc kubenswrapper[4986]: I1203 14:01:13.436972 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vvzh\" (UniqueName: \"kubernetes.io/projected/00dfe4ba-c856-4daf-97ee-4ce094f34d1c-kube-api-access-5vvzh\") pod \"00dfe4ba-c856-4daf-97ee-4ce094f34d1c\" (UID: \"00dfe4ba-c856-4daf-97ee-4ce094f34d1c\") " Dec 03 14:01:13 crc kubenswrapper[4986]: I1203 14:01:13.437004 4986 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/host-path/00dfe4ba-c856-4daf-97ee-4ce094f34d1c-host" (OuterVolumeSpecName: "host") pod "00dfe4ba-c856-4daf-97ee-4ce094f34d1c" (UID: "00dfe4ba-c856-4daf-97ee-4ce094f34d1c"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:01:13 crc kubenswrapper[4986]: I1203 14:01:13.437672 4986 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/00dfe4ba-c856-4daf-97ee-4ce094f34d1c-host\") on node \"crc\" DevicePath \"\"" Dec 03 14:01:13 crc kubenswrapper[4986]: I1203 14:01:13.442299 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00dfe4ba-c856-4daf-97ee-4ce094f34d1c-kube-api-access-5vvzh" (OuterVolumeSpecName: "kube-api-access-5vvzh") pod "00dfe4ba-c856-4daf-97ee-4ce094f34d1c" (UID: "00dfe4ba-c856-4daf-97ee-4ce094f34d1c"). InnerVolumeSpecName "kube-api-access-5vvzh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 14:01:13 crc kubenswrapper[4986]: I1203 14:01:13.539965 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vvzh\" (UniqueName: \"kubernetes.io/projected/00dfe4ba-c856-4daf-97ee-4ce094f34d1c-kube-api-access-5vvzh\") on node \"crc\" DevicePath \"\"" Dec 03 14:01:14 crc kubenswrapper[4986]: I1203 14:01:14.268947 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6823625d61f2902228b5e26145307547ee4b19ec6e3b8aaa33bb4e1f608db807" Dec 03 14:01:14 crc kubenswrapper[4986]: I1203 14:01:14.269035 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6lpvf/crc-debug-tcgr8" Dec 03 14:01:14 crc kubenswrapper[4986]: I1203 14:01:14.655981 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-6lpvf/crc-debug-njdt6"] Dec 03 14:01:14 crc kubenswrapper[4986]: E1203 14:01:14.656350 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c12d1f52-dc0d-4deb-9f05-089d1a21267c" containerName="keystone-cron" Dec 03 14:01:14 crc kubenswrapper[4986]: I1203 14:01:14.656361 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="c12d1f52-dc0d-4deb-9f05-089d1a21267c" containerName="keystone-cron" Dec 03 14:01:14 crc kubenswrapper[4986]: E1203 14:01:14.656393 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00dfe4ba-c856-4daf-97ee-4ce094f34d1c" containerName="container-00" Dec 03 14:01:14 crc kubenswrapper[4986]: I1203 14:01:14.656400 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="00dfe4ba-c856-4daf-97ee-4ce094f34d1c" containerName="container-00" Dec 03 14:01:14 crc kubenswrapper[4986]: I1203 14:01:14.656563 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="00dfe4ba-c856-4daf-97ee-4ce094f34d1c" containerName="container-00" Dec 03 14:01:14 crc kubenswrapper[4986]: I1203 14:01:14.656589 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="c12d1f52-dc0d-4deb-9f05-089d1a21267c" containerName="keystone-cron" Dec 03 14:01:14 crc kubenswrapper[4986]: I1203 14:01:14.657144 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6lpvf/crc-debug-njdt6" Dec 03 14:01:14 crc kubenswrapper[4986]: I1203 14:01:14.761199 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/346774d9-310b-49a9-810b-04745ec05076-host\") pod \"crc-debug-njdt6\" (UID: \"346774d9-310b-49a9-810b-04745ec05076\") " pod="openshift-must-gather-6lpvf/crc-debug-njdt6" Dec 03 14:01:14 crc kubenswrapper[4986]: I1203 14:01:14.761321 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhjsw\" (UniqueName: \"kubernetes.io/projected/346774d9-310b-49a9-810b-04745ec05076-kube-api-access-bhjsw\") pod \"crc-debug-njdt6\" (UID: \"346774d9-310b-49a9-810b-04745ec05076\") " pod="openshift-must-gather-6lpvf/crc-debug-njdt6" Dec 03 14:01:14 crc kubenswrapper[4986]: I1203 14:01:14.863875 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhjsw\" (UniqueName: \"kubernetes.io/projected/346774d9-310b-49a9-810b-04745ec05076-kube-api-access-bhjsw\") pod \"crc-debug-njdt6\" (UID: \"346774d9-310b-49a9-810b-04745ec05076\") " pod="openshift-must-gather-6lpvf/crc-debug-njdt6" Dec 03 14:01:14 crc kubenswrapper[4986]: I1203 14:01:14.864156 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/346774d9-310b-49a9-810b-04745ec05076-host\") pod \"crc-debug-njdt6\" (UID: \"346774d9-310b-49a9-810b-04745ec05076\") " pod="openshift-must-gather-6lpvf/crc-debug-njdt6" Dec 03 14:01:14 crc kubenswrapper[4986]: I1203 14:01:14.864504 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/346774d9-310b-49a9-810b-04745ec05076-host\") pod \"crc-debug-njdt6\" (UID: \"346774d9-310b-49a9-810b-04745ec05076\") " pod="openshift-must-gather-6lpvf/crc-debug-njdt6" Dec 03 14:01:14 crc 
kubenswrapper[4986]: I1203 14:01:14.892900 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhjsw\" (UniqueName: \"kubernetes.io/projected/346774d9-310b-49a9-810b-04745ec05076-kube-api-access-bhjsw\") pod \"crc-debug-njdt6\" (UID: \"346774d9-310b-49a9-810b-04745ec05076\") " pod="openshift-must-gather-6lpvf/crc-debug-njdt6" Dec 03 14:01:14 crc kubenswrapper[4986]: I1203 14:01:14.958325 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00dfe4ba-c856-4daf-97ee-4ce094f34d1c" path="/var/lib/kubelet/pods/00dfe4ba-c856-4daf-97ee-4ce094f34d1c/volumes" Dec 03 14:01:14 crc kubenswrapper[4986]: I1203 14:01:14.989846 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6lpvf/crc-debug-njdt6" Dec 03 14:01:15 crc kubenswrapper[4986]: I1203 14:01:15.279370 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6lpvf/crc-debug-njdt6" event={"ID":"346774d9-310b-49a9-810b-04745ec05076","Type":"ContainerStarted","Data":"8c2ce53dca36b4befed705580659ef945a46b1fa34f00f0c50d97adcb7baab77"} Dec 03 14:01:16 crc kubenswrapper[4986]: I1203 14:01:16.292718 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6lpvf/crc-debug-njdt6" event={"ID":"346774d9-310b-49a9-810b-04745ec05076","Type":"ContainerStarted","Data":"12c8aa00c75f2bc297cac0e59488ccb1df8965994db408b9916eba56ab64e402"} Dec 03 14:01:17 crc kubenswrapper[4986]: I1203 14:01:17.302121 4986 generic.go:334] "Generic (PLEG): container finished" podID="346774d9-310b-49a9-810b-04745ec05076" containerID="12c8aa00c75f2bc297cac0e59488ccb1df8965994db408b9916eba56ab64e402" exitCode=0 Dec 03 14:01:17 crc kubenswrapper[4986]: I1203 14:01:17.302228 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6lpvf/crc-debug-njdt6" 
event={"ID":"346774d9-310b-49a9-810b-04745ec05076","Type":"ContainerDied","Data":"12c8aa00c75f2bc297cac0e59488ccb1df8965994db408b9916eba56ab64e402"} Dec 03 14:01:17 crc kubenswrapper[4986]: I1203 14:01:17.799732 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-6lpvf/crc-debug-njdt6"] Dec 03 14:01:17 crc kubenswrapper[4986]: I1203 14:01:17.808029 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-6lpvf/crc-debug-njdt6"] Dec 03 14:01:18 crc kubenswrapper[4986]: I1203 14:01:18.409724 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6lpvf/crc-debug-njdt6" Dec 03 14:01:18 crc kubenswrapper[4986]: I1203 14:01:18.435422 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/346774d9-310b-49a9-810b-04745ec05076-host\") pod \"346774d9-310b-49a9-810b-04745ec05076\" (UID: \"346774d9-310b-49a9-810b-04745ec05076\") " Dec 03 14:01:18 crc kubenswrapper[4986]: I1203 14:01:18.435499 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhjsw\" (UniqueName: \"kubernetes.io/projected/346774d9-310b-49a9-810b-04745ec05076-kube-api-access-bhjsw\") pod \"346774d9-310b-49a9-810b-04745ec05076\" (UID: \"346774d9-310b-49a9-810b-04745ec05076\") " Dec 03 14:01:18 crc kubenswrapper[4986]: I1203 14:01:18.435550 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/346774d9-310b-49a9-810b-04745ec05076-host" (OuterVolumeSpecName: "host") pod "346774d9-310b-49a9-810b-04745ec05076" (UID: "346774d9-310b-49a9-810b-04745ec05076"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:01:18 crc kubenswrapper[4986]: I1203 14:01:18.436003 4986 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/346774d9-310b-49a9-810b-04745ec05076-host\") on node \"crc\" DevicePath \"\"" Dec 03 14:01:18 crc kubenswrapper[4986]: I1203 14:01:18.440146 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/346774d9-310b-49a9-810b-04745ec05076-kube-api-access-bhjsw" (OuterVolumeSpecName: "kube-api-access-bhjsw") pod "346774d9-310b-49a9-810b-04745ec05076" (UID: "346774d9-310b-49a9-810b-04745ec05076"). InnerVolumeSpecName "kube-api-access-bhjsw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 14:01:18 crc kubenswrapper[4986]: I1203 14:01:18.538069 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhjsw\" (UniqueName: \"kubernetes.io/projected/346774d9-310b-49a9-810b-04745ec05076-kube-api-access-bhjsw\") on node \"crc\" DevicePath \"\"" Dec 03 14:01:18 crc kubenswrapper[4986]: I1203 14:01:18.955506 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="346774d9-310b-49a9-810b-04745ec05076" path="/var/lib/kubelet/pods/346774d9-310b-49a9-810b-04745ec05076/volumes" Dec 03 14:01:19 crc kubenswrapper[4986]: I1203 14:01:19.115622 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-6lpvf/crc-debug-h6mwz"] Dec 03 14:01:19 crc kubenswrapper[4986]: E1203 14:01:19.116623 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="346774d9-310b-49a9-810b-04745ec05076" containerName="container-00" Dec 03 14:01:19 crc kubenswrapper[4986]: I1203 14:01:19.116676 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="346774d9-310b-49a9-810b-04745ec05076" containerName="container-00" Dec 03 14:01:19 crc kubenswrapper[4986]: I1203 14:01:19.117000 4986 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="346774d9-310b-49a9-810b-04745ec05076" containerName="container-00" Dec 03 14:01:19 crc kubenswrapper[4986]: I1203 14:01:19.117933 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6lpvf/crc-debug-h6mwz" Dec 03 14:01:19 crc kubenswrapper[4986]: I1203 14:01:19.148355 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/04ee8e0b-f508-42ae-aad3-3a7c1001e8ea-host\") pod \"crc-debug-h6mwz\" (UID: \"04ee8e0b-f508-42ae-aad3-3a7c1001e8ea\") " pod="openshift-must-gather-6lpvf/crc-debug-h6mwz" Dec 03 14:01:19 crc kubenswrapper[4986]: I1203 14:01:19.148428 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgpn2\" (UniqueName: \"kubernetes.io/projected/04ee8e0b-f508-42ae-aad3-3a7c1001e8ea-kube-api-access-lgpn2\") pod \"crc-debug-h6mwz\" (UID: \"04ee8e0b-f508-42ae-aad3-3a7c1001e8ea\") " pod="openshift-must-gather-6lpvf/crc-debug-h6mwz" Dec 03 14:01:19 crc kubenswrapper[4986]: I1203 14:01:19.250469 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/04ee8e0b-f508-42ae-aad3-3a7c1001e8ea-host\") pod \"crc-debug-h6mwz\" (UID: \"04ee8e0b-f508-42ae-aad3-3a7c1001e8ea\") " pod="openshift-must-gather-6lpvf/crc-debug-h6mwz" Dec 03 14:01:19 crc kubenswrapper[4986]: I1203 14:01:19.250552 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgpn2\" (UniqueName: \"kubernetes.io/projected/04ee8e0b-f508-42ae-aad3-3a7c1001e8ea-kube-api-access-lgpn2\") pod \"crc-debug-h6mwz\" (UID: \"04ee8e0b-f508-42ae-aad3-3a7c1001e8ea\") " pod="openshift-must-gather-6lpvf/crc-debug-h6mwz" Dec 03 14:01:19 crc kubenswrapper[4986]: I1203 14:01:19.250608 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/04ee8e0b-f508-42ae-aad3-3a7c1001e8ea-host\") pod \"crc-debug-h6mwz\" (UID: \"04ee8e0b-f508-42ae-aad3-3a7c1001e8ea\") " pod="openshift-must-gather-6lpvf/crc-debug-h6mwz" Dec 03 14:01:19 crc kubenswrapper[4986]: I1203 14:01:19.321839 4986 scope.go:117] "RemoveContainer" containerID="12c8aa00c75f2bc297cac0e59488ccb1df8965994db408b9916eba56ab64e402" Dec 03 14:01:19 crc kubenswrapper[4986]: I1203 14:01:19.321882 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6lpvf/crc-debug-njdt6" Dec 03 14:01:19 crc kubenswrapper[4986]: I1203 14:01:19.437114 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgpn2\" (UniqueName: \"kubernetes.io/projected/04ee8e0b-f508-42ae-aad3-3a7c1001e8ea-kube-api-access-lgpn2\") pod \"crc-debug-h6mwz\" (UID: \"04ee8e0b-f508-42ae-aad3-3a7c1001e8ea\") " pod="openshift-must-gather-6lpvf/crc-debug-h6mwz" Dec 03 14:01:19 crc kubenswrapper[4986]: I1203 14:01:19.736163 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6lpvf/crc-debug-h6mwz" Dec 03 14:01:19 crc kubenswrapper[4986]: I1203 14:01:19.943130 4986 scope.go:117] "RemoveContainer" containerID="f4aa16a78e6dc7399cb3f0bba100cc3ca3d479a66dc5b2e6c12ee775b73c7cc4" Dec 03 14:01:19 crc kubenswrapper[4986]: E1203 14:01:19.943763 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 14:01:20 crc kubenswrapper[4986]: I1203 14:01:20.334794 4986 generic.go:334] "Generic (PLEG): container finished" podID="04ee8e0b-f508-42ae-aad3-3a7c1001e8ea" containerID="e91bf82ccc209538c047e2fdfb89bfd98310c28857c59e8bb677d1a8d8e193ad" exitCode=0 Dec 03 14:01:20 crc kubenswrapper[4986]: I1203 14:01:20.334873 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6lpvf/crc-debug-h6mwz" event={"ID":"04ee8e0b-f508-42ae-aad3-3a7c1001e8ea","Type":"ContainerDied","Data":"e91bf82ccc209538c047e2fdfb89bfd98310c28857c59e8bb677d1a8d8e193ad"} Dec 03 14:01:20 crc kubenswrapper[4986]: I1203 14:01:20.335115 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6lpvf/crc-debug-h6mwz" event={"ID":"04ee8e0b-f508-42ae-aad3-3a7c1001e8ea","Type":"ContainerStarted","Data":"842a052054e5ecca9ff574b34e255465a40fd2157a30b80205fad0ff149587e6"} Dec 03 14:01:20 crc kubenswrapper[4986]: I1203 14:01:20.379587 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-6lpvf/crc-debug-h6mwz"] Dec 03 14:01:20 crc kubenswrapper[4986]: I1203 14:01:20.388349 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-6lpvf/crc-debug-h6mwz"] Dec 03 14:01:21 crc 
kubenswrapper[4986]: I1203 14:01:21.445247 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6lpvf/crc-debug-h6mwz" Dec 03 14:01:21 crc kubenswrapper[4986]: I1203 14:01:21.505964 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgpn2\" (UniqueName: \"kubernetes.io/projected/04ee8e0b-f508-42ae-aad3-3a7c1001e8ea-kube-api-access-lgpn2\") pod \"04ee8e0b-f508-42ae-aad3-3a7c1001e8ea\" (UID: \"04ee8e0b-f508-42ae-aad3-3a7c1001e8ea\") " Dec 03 14:01:21 crc kubenswrapper[4986]: I1203 14:01:21.506124 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/04ee8e0b-f508-42ae-aad3-3a7c1001e8ea-host\") pod \"04ee8e0b-f508-42ae-aad3-3a7c1001e8ea\" (UID: \"04ee8e0b-f508-42ae-aad3-3a7c1001e8ea\") " Dec 03 14:01:21 crc kubenswrapper[4986]: I1203 14:01:21.506686 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04ee8e0b-f508-42ae-aad3-3a7c1001e8ea-host" (OuterVolumeSpecName: "host") pod "04ee8e0b-f508-42ae-aad3-3a7c1001e8ea" (UID: "04ee8e0b-f508-42ae-aad3-3a7c1001e8ea"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:01:21 crc kubenswrapper[4986]: I1203 14:01:21.524497 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04ee8e0b-f508-42ae-aad3-3a7c1001e8ea-kube-api-access-lgpn2" (OuterVolumeSpecName: "kube-api-access-lgpn2") pod "04ee8e0b-f508-42ae-aad3-3a7c1001e8ea" (UID: "04ee8e0b-f508-42ae-aad3-3a7c1001e8ea"). InnerVolumeSpecName "kube-api-access-lgpn2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 14:01:21 crc kubenswrapper[4986]: I1203 14:01:21.608161 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgpn2\" (UniqueName: \"kubernetes.io/projected/04ee8e0b-f508-42ae-aad3-3a7c1001e8ea-kube-api-access-lgpn2\") on node \"crc\" DevicePath \"\"" Dec 03 14:01:21 crc kubenswrapper[4986]: I1203 14:01:21.608202 4986 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/04ee8e0b-f508-42ae-aad3-3a7c1001e8ea-host\") on node \"crc\" DevicePath \"\"" Dec 03 14:01:22 crc kubenswrapper[4986]: I1203 14:01:22.357058 4986 scope.go:117] "RemoveContainer" containerID="e91bf82ccc209538c047e2fdfb89bfd98310c28857c59e8bb677d1a8d8e193ad" Dec 03 14:01:22 crc kubenswrapper[4986]: I1203 14:01:22.357528 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6lpvf/crc-debug-h6mwz" Dec 03 14:01:22 crc kubenswrapper[4986]: I1203 14:01:22.955998 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04ee8e0b-f508-42ae-aad3-3a7c1001e8ea" path="/var/lib/kubelet/pods/04ee8e0b-f508-42ae-aad3-3a7c1001e8ea/volumes" Dec 03 14:01:33 crc kubenswrapper[4986]: I1203 14:01:33.943649 4986 scope.go:117] "RemoveContainer" containerID="f4aa16a78e6dc7399cb3f0bba100cc3ca3d479a66dc5b2e6c12ee775b73c7cc4" Dec 03 14:01:33 crc kubenswrapper[4986]: E1203 14:01:33.944448 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 14:01:36 crc kubenswrapper[4986]: I1203 14:01:36.645682 4986 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-api-5744ccfcbb-rcmx5_37ccb095-f90f-4383-88e9-05d2d82cab28/barbican-api/0.log" Dec 03 14:01:36 crc kubenswrapper[4986]: I1203 14:01:36.878116 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5744ccfcbb-rcmx5_37ccb095-f90f-4383-88e9-05d2d82cab28/barbican-api-log/0.log" Dec 03 14:01:36 crc kubenswrapper[4986]: I1203 14:01:36.888004 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-747896b766-b8kzr_ebfa5ced-0a56-44db-ba24-d5f663d65920/barbican-keystone-listener/0.log" Dec 03 14:01:36 crc kubenswrapper[4986]: I1203 14:01:36.964516 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-747896b766-b8kzr_ebfa5ced-0a56-44db-ba24-d5f663d65920/barbican-keystone-listener-log/0.log" Dec 03 14:01:37 crc kubenswrapper[4986]: I1203 14:01:37.074708 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-787dc78df5-jtcv6_17f8ed93-1afd-41c1-a52b-addefeb38ab0/barbican-worker/0.log" Dec 03 14:01:37 crc kubenswrapper[4986]: I1203 14:01:37.121300 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-787dc78df5-jtcv6_17f8ed93-1afd-41c1-a52b-addefeb38ab0/barbican-worker-log/0.log" Dec 03 14:01:37 crc kubenswrapper[4986]: I1203 14:01:37.338925 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-f4dgg_b4057123-895b-4436-93ac-02902a78df76/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 14:01:37 crc kubenswrapper[4986]: I1203 14:01:37.412741 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_bc0fd902-4092-4036-b7ba-1b6ede68bf04/ceilometer-central-agent/0.log" Dec 03 14:01:37 crc kubenswrapper[4986]: I1203 14:01:37.460055 4986 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_bc0fd902-4092-4036-b7ba-1b6ede68bf04/ceilometer-notification-agent/0.log" Dec 03 14:01:37 crc kubenswrapper[4986]: I1203 14:01:37.565430 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_bc0fd902-4092-4036-b7ba-1b6ede68bf04/proxy-httpd/0.log" Dec 03 14:01:37 crc kubenswrapper[4986]: I1203 14:01:37.634517 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_bc0fd902-4092-4036-b7ba-1b6ede68bf04/sg-core/0.log" Dec 03 14:01:37 crc kubenswrapper[4986]: I1203 14:01:37.770884 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_297881ac-9b99-4d7d-9e59-4fb75c103648/cinder-api-log/0.log" Dec 03 14:01:37 crc kubenswrapper[4986]: I1203 14:01:37.788761 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_297881ac-9b99-4d7d-9e59-4fb75c103648/cinder-api/0.log" Dec 03 14:01:37 crc kubenswrapper[4986]: I1203 14:01:37.968617 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_cf917ae1-694f-4e1e-8d85-6452ac6c4e0e/cinder-scheduler/0.log" Dec 03 14:01:38 crc kubenswrapper[4986]: I1203 14:01:38.012128 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_cf917ae1-694f-4e1e-8d85-6452ac6c4e0e/probe/0.log" Dec 03 14:01:38 crc kubenswrapper[4986]: I1203 14:01:38.169208 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-8smpl_8fec002b-a660-4a80-8a57-51a2ce32cf29/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 14:01:38 crc kubenswrapper[4986]: I1203 14:01:38.321581 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-plm9h_1d933f9e-e44c-4d6b-ab8b-0186020b5b28/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 14:01:38 crc kubenswrapper[4986]: I1203 
14:01:38.415828 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-pklsb_33180d69-fad9-4b8f-877a-f68644b85da8/init/0.log" Dec 03 14:01:38 crc kubenswrapper[4986]: I1203 14:01:38.984703 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-pklsb_33180d69-fad9-4b8f-877a-f68644b85da8/init/0.log" Dec 03 14:01:39 crc kubenswrapper[4986]: I1203 14:01:39.035120 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-2696c_4dc44651-c2df-4ca1-abf4-f9093fa3f70d/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 14:01:39 crc kubenswrapper[4986]: I1203 14:01:39.062404 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-pklsb_33180d69-fad9-4b8f-877a-f68644b85da8/dnsmasq-dns/0.log" Dec 03 14:01:39 crc kubenswrapper[4986]: I1203 14:01:39.399156 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_515375fd-69ab-4f66-9fa3-ac72e0eeb97b/glance-log/0.log" Dec 03 14:01:39 crc kubenswrapper[4986]: I1203 14:01:39.466611 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_515375fd-69ab-4f66-9fa3-ac72e0eeb97b/glance-httpd/0.log" Dec 03 14:01:39 crc kubenswrapper[4986]: I1203 14:01:39.615750 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_91b841cf-0b35-4065-8268-3d018757b029/glance-httpd/0.log" Dec 03 14:01:39 crc kubenswrapper[4986]: I1203 14:01:39.670198 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_91b841cf-0b35-4065-8268-3d018757b029/glance-log/0.log" Dec 03 14:01:39 crc kubenswrapper[4986]: I1203 14:01:39.793428 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7797f969d4-6c2wn_4d67e23b-bda4-42d5-81b6-be58c643861d/horizon/0.log" 
Dec 03 14:01:40 crc kubenswrapper[4986]: I1203 14:01:40.023192 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-l29sb_7e34b649-2740-4d76-9aff-598b66d301b7/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 14:01:40 crc kubenswrapper[4986]: I1203 14:01:40.153450 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7797f969d4-6c2wn_4d67e23b-bda4-42d5-81b6-be58c643861d/horizon-log/0.log" Dec 03 14:01:40 crc kubenswrapper[4986]: I1203 14:01:40.443236 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-gpv76_5436a47f-49ff-42be-b125-e3d98fbce1e9/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 14:01:40 crc kubenswrapper[4986]: I1203 14:01:40.656164 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-868bb78845-npjxs_652fbb43-2a64-471b-9123-cd6734de8993/keystone-api/0.log" Dec 03 14:01:40 crc kubenswrapper[4986]: I1203 14:01:40.768058 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29412841-m7k2b_c12d1f52-dc0d-4deb-9f05-089d1a21267c/keystone-cron/0.log" Dec 03 14:01:40 crc kubenswrapper[4986]: I1203 14:01:40.931464 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_33238cb5-2bde-4244-aa0d-cc11080f57fb/kube-state-metrics/0.log" Dec 03 14:01:41 crc kubenswrapper[4986]: I1203 14:01:41.063516 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-fdb9l_38870d92-6fb1-40ac-8763-a8c8bfbbdd77/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 14:01:41 crc kubenswrapper[4986]: I1203 14:01:41.361807 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-59f49d79c7-qt4rk_5a350421-5f01-4d60-92b8-edc85e4ef3c5/neutron-httpd/0.log" Dec 03 14:01:41 crc kubenswrapper[4986]: I1203 
14:01:41.425802 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-59f49d79c7-qt4rk_5a350421-5f01-4d60-92b8-edc85e4ef3c5/neutron-api/0.log" Dec 03 14:01:41 crc kubenswrapper[4986]: I1203 14:01:41.550889 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-s5hc5_2a01e184-500d-44fe-9561-d971fb030c77/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 14:01:41 crc kubenswrapper[4986]: I1203 14:01:41.962654 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_39eb69a6-11cd-41da-956d-a9697ef88d67/nova-api-log/0.log" Dec 03 14:01:42 crc kubenswrapper[4986]: I1203 14:01:42.041139 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_d0e9f871-dd3e-4b2b-813a-01ef0428cb44/nova-cell0-conductor-conductor/0.log" Dec 03 14:01:42 crc kubenswrapper[4986]: I1203 14:01:42.182021 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_39eb69a6-11cd-41da-956d-a9697ef88d67/nova-api-api/0.log" Dec 03 14:01:42 crc kubenswrapper[4986]: I1203 14:01:42.288162 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_70518715-2cd8-4268-ae57-aaa98fa28843/nova-cell1-conductor-conductor/0.log" Dec 03 14:01:42 crc kubenswrapper[4986]: I1203 14:01:42.355174 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_4b4f56d6-d999-4f05-ace4-61b79327feec/nova-cell1-novncproxy-novncproxy/0.log" Dec 03 14:01:42 crc kubenswrapper[4986]: I1203 14:01:42.776733 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-g6sdc_2d43935a-d3d4-4e5e-b92a-dacb88b12f26/nova-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 14:01:42 crc kubenswrapper[4986]: I1203 14:01:42.828719 4986 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-metadata-0_3bb61071-904d-46d6-8594-2312383a8a06/nova-metadata-log/0.log" Dec 03 14:01:43 crc kubenswrapper[4986]: I1203 14:01:43.188714 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_74dfd7ae-e1e8-4fe0-9563-0b6a8aaa60f4/nova-scheduler-scheduler/0.log" Dec 03 14:01:43 crc kubenswrapper[4986]: I1203 14:01:43.195524 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a0496538-1ab2-45a2-94ab-fc3474533ec3/mysql-bootstrap/0.log" Dec 03 14:01:43 crc kubenswrapper[4986]: I1203 14:01:43.454669 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a0496538-1ab2-45a2-94ab-fc3474533ec3/mysql-bootstrap/0.log" Dec 03 14:01:43 crc kubenswrapper[4986]: I1203 14:01:43.455708 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a0496538-1ab2-45a2-94ab-fc3474533ec3/galera/0.log" Dec 03 14:01:43 crc kubenswrapper[4986]: I1203 14:01:43.660189 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_50659f56-763b-4cac-9ab4-d660c7d777af/mysql-bootstrap/0.log" Dec 03 14:01:43 crc kubenswrapper[4986]: I1203 14:01:43.900686 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_50659f56-763b-4cac-9ab4-d660c7d777af/mysql-bootstrap/0.log" Dec 03 14:01:43 crc kubenswrapper[4986]: I1203 14:01:43.919063 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_50659f56-763b-4cac-9ab4-d660c7d777af/galera/0.log" Dec 03 14:01:44 crc kubenswrapper[4986]: I1203 14:01:44.116252 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_a2644ca5-1db7-491d-949e-e8810934a296/openstackclient/0.log" Dec 03 14:01:44 crc kubenswrapper[4986]: I1203 14:01:44.140515 4986 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-metadata-0_3bb61071-904d-46d6-8594-2312383a8a06/nova-metadata-metadata/0.log" Dec 03 14:01:44 crc kubenswrapper[4986]: I1203 14:01:44.160355 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-kqn7t_d06f8249-00a2-4e59-a055-82ab737c7b92/ovn-controller/0.log" Dec 03 14:01:44 crc kubenswrapper[4986]: I1203 14:01:44.325817 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-nm5xm_0798ce7a-8eef-4450-900d-d89e2ab41858/openstack-network-exporter/0.log" Dec 03 14:01:44 crc kubenswrapper[4986]: I1203 14:01:44.461067 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-45czf_5510fce4-e81b-4089-a5c3-4c4b6c72d9e0/ovsdb-server-init/0.log" Dec 03 14:01:44 crc kubenswrapper[4986]: I1203 14:01:44.659334 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-45czf_5510fce4-e81b-4089-a5c3-4c4b6c72d9e0/ovsdb-server-init/0.log" Dec 03 14:01:44 crc kubenswrapper[4986]: I1203 14:01:44.690882 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-45czf_5510fce4-e81b-4089-a5c3-4c4b6c72d9e0/ovsdb-server/0.log" Dec 03 14:01:44 crc kubenswrapper[4986]: I1203 14:01:44.741338 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-45czf_5510fce4-e81b-4089-a5c3-4c4b6c72d9e0/ovs-vswitchd/0.log" Dec 03 14:01:45 crc kubenswrapper[4986]: I1203 14:01:45.002566 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-c9nd5_ff750414-499f-4652-9627-3e45a82b6cf3/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 14:01:45 crc kubenswrapper[4986]: I1203 14:01:45.015045 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_2c06d0ad-4862-4c0f-9cad-5f29aa8af72a/openstack-network-exporter/0.log" Dec 03 14:01:45 crc kubenswrapper[4986]: I1203 
14:01:45.020412 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_2c06d0ad-4862-4c0f-9cad-5f29aa8af72a/ovn-northd/0.log" Dec 03 14:01:45 crc kubenswrapper[4986]: I1203 14:01:45.239398 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_905d78e0-0235-400d-8004-1f612a11b60a/openstack-network-exporter/0.log" Dec 03 14:01:45 crc kubenswrapper[4986]: I1203 14:01:45.255373 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_905d78e0-0235-400d-8004-1f612a11b60a/ovsdbserver-nb/0.log" Dec 03 14:01:45 crc kubenswrapper[4986]: I1203 14:01:45.408112 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_40aba5da-7d4c-49e1-a054-6e6789aca293/openstack-network-exporter/0.log" Dec 03 14:01:45 crc kubenswrapper[4986]: I1203 14:01:45.495075 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_40aba5da-7d4c-49e1-a054-6e6789aca293/ovsdbserver-sb/0.log" Dec 03 14:01:45 crc kubenswrapper[4986]: I1203 14:01:45.943029 4986 scope.go:117] "RemoveContainer" containerID="f4aa16a78e6dc7399cb3f0bba100cc3ca3d479a66dc5b2e6c12ee775b73c7cc4" Dec 03 14:01:45 crc kubenswrapper[4986]: E1203 14:01:45.943542 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 14:01:45 crc kubenswrapper[4986]: I1203 14:01:45.947684 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-db8cd8b46-ffl2g_c622f06b-5b3c-45e4-890e-9f7ba2283ab3/placement-api/0.log" Dec 03 14:01:46 crc kubenswrapper[4986]: I1203 14:01:46.014341 4986 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_e06b4596-d4ac-4524-a521-ae6edfc239be/setup-container/0.log" Dec 03 14:01:46 crc kubenswrapper[4986]: I1203 14:01:46.074115 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-db8cd8b46-ffl2g_c622f06b-5b3c-45e4-890e-9f7ba2283ab3/placement-log/0.log" Dec 03 14:01:46 crc kubenswrapper[4986]: I1203 14:01:46.237622 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_e06b4596-d4ac-4524-a521-ae6edfc239be/rabbitmq/0.log" Dec 03 14:01:46 crc kubenswrapper[4986]: I1203 14:01:46.281145 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_67da7713-f27f-48cb-a2f1-4ebea4d2f939/setup-container/0.log" Dec 03 14:01:46 crc kubenswrapper[4986]: I1203 14:01:46.316274 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_e06b4596-d4ac-4524-a521-ae6edfc239be/setup-container/0.log" Dec 03 14:01:46 crc kubenswrapper[4986]: I1203 14:01:46.480418 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_67da7713-f27f-48cb-a2f1-4ebea4d2f939/setup-container/0.log" Dec 03 14:01:46 crc kubenswrapper[4986]: I1203 14:01:46.557040 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_67da7713-f27f-48cb-a2f1-4ebea4d2f939/rabbitmq/0.log" Dec 03 14:01:46 crc kubenswrapper[4986]: I1203 14:01:46.635696 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-qjdgt_859dd2e9-8a4b-4b51-8718-9d8b5837d098/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 14:01:46 crc kubenswrapper[4986]: I1203 14:01:46.815040 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-c9qzr_931d4925-ed6c-4a1f-8b14-4e726641d115/redhat-edpm-deployment-openstack-edpm-ipam/0.log" 
Dec 03 14:01:46 crc kubenswrapper[4986]: I1203 14:01:46.863756 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-qpl5d_d9886573-7ee1-4a0b-a6d2-f8621fdabf83/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 14:01:47 crc kubenswrapper[4986]: I1203 14:01:47.161601 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-n9vwn_8c21eccd-73c5-4d10-9bfe-ff9530e7627b/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 14:01:47 crc kubenswrapper[4986]: I1203 14:01:47.179411 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-5x8bl_614b1cb6-38ce-43ac-a5f3-abc66d1dd088/ssh-known-hosts-edpm-deployment/0.log" Dec 03 14:01:47 crc kubenswrapper[4986]: I1203 14:01:47.416813 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6d976bf467-mjgvz_d70793fc-c91d-4ddc-8a21-bcd243434f73/proxy-server/0.log" Dec 03 14:01:47 crc kubenswrapper[4986]: I1203 14:01:47.577964 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6d976bf467-mjgvz_d70793fc-c91d-4ddc-8a21-bcd243434f73/proxy-httpd/0.log" Dec 03 14:01:47 crc kubenswrapper[4986]: I1203 14:01:47.636356 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-sxrg6_95163eb6-a8f2-45d5-b816-84dd6ffbdab2/swift-ring-rebalance/0.log" Dec 03 14:01:47 crc kubenswrapper[4986]: I1203 14:01:47.752053 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe/account-auditor/0.log" Dec 03 14:01:47 crc kubenswrapper[4986]: I1203 14:01:47.811937 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe/account-reaper/0.log" Dec 03 14:01:48 crc kubenswrapper[4986]: I1203 14:01:48.550661 4986 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_swift-storage-0_cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe/container-auditor/0.log" Dec 03 14:01:48 crc kubenswrapper[4986]: I1203 14:01:48.556456 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe/account-server/0.log" Dec 03 14:01:48 crc kubenswrapper[4986]: I1203 14:01:48.569013 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe/container-replicator/0.log" Dec 03 14:01:48 crc kubenswrapper[4986]: I1203 14:01:48.579523 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe/account-replicator/0.log" Dec 03 14:01:48 crc kubenswrapper[4986]: I1203 14:01:48.781897 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe/object-auditor/0.log" Dec 03 14:01:48 crc kubenswrapper[4986]: I1203 14:01:48.796315 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe/container-server/0.log" Dec 03 14:01:48 crc kubenswrapper[4986]: I1203 14:01:48.811676 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe/container-updater/0.log" Dec 03 14:01:48 crc kubenswrapper[4986]: I1203 14:01:48.813691 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe/object-expirer/0.log" Dec 03 14:01:48 crc kubenswrapper[4986]: I1203 14:01:48.997819 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe/object-replicator/0.log" Dec 03 14:01:49 crc kubenswrapper[4986]: I1203 14:01:49.038726 4986 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe/object-server/0.log" Dec 03 14:01:49 crc kubenswrapper[4986]: I1203 14:01:49.047067 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe/object-updater/0.log" Dec 03 14:01:49 crc kubenswrapper[4986]: I1203 14:01:49.122176 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe/rsync/0.log" Dec 03 14:01:49 crc kubenswrapper[4986]: I1203 14:01:49.231725 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe/swift-recon-cron/0.log" Dec 03 14:01:49 crc kubenswrapper[4986]: I1203 14:01:49.335246 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-2vnss_963319ab-2780-4d81-bf46-9b6dee690eeb/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 14:01:49 crc kubenswrapper[4986]: I1203 14:01:49.675154 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_e8d1d8d5-6041-4e8a-bf89-d4811ef1a2bb/test-operator-logs-container/0.log" Dec 03 14:01:49 crc kubenswrapper[4986]: I1203 14:01:49.715093 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_4159adcb-0a7a-4765-ac54-186effebee8e/tempest-tests-tempest-tests-runner/0.log" Dec 03 14:01:50 crc kubenswrapper[4986]: I1203 14:01:50.214664 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-mq9d2_6d8473a8-d750-4ff5-84be-96088a3eea45/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 14:01:56 crc kubenswrapper[4986]: I1203 14:01:56.275141 4986 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_memcached-0_cb2b35c1-7c5f-4ffc-a178-4f7ade5cc313/memcached/0.log" Dec 03 14:01:59 crc kubenswrapper[4986]: I1203 14:01:59.943467 4986 scope.go:117] "RemoveContainer" containerID="f4aa16a78e6dc7399cb3f0bba100cc3ca3d479a66dc5b2e6c12ee775b73c7cc4" Dec 03 14:01:59 crc kubenswrapper[4986]: E1203 14:01:59.944538 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 14:02:12 crc kubenswrapper[4986]: I1203 14:02:12.943255 4986 scope.go:117] "RemoveContainer" containerID="f4aa16a78e6dc7399cb3f0bba100cc3ca3d479a66dc5b2e6c12ee775b73c7cc4" Dec 03 14:02:13 crc kubenswrapper[4986]: I1203 14:02:13.797722 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" event={"ID":"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c","Type":"ContainerStarted","Data":"94098f17055dc83680391bf29240e7586ef8a8e160e65ff114312434843cab43"} Dec 03 14:02:16 crc kubenswrapper[4986]: I1203 14:02:16.684609 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_824d2a61b6100fd280a9a09bd63ce89af5b90b0d18a4f901e4852c2491zpwrf_b7926b50-9c30-44b4-ac3f-058edec517b9/util/0.log" Dec 03 14:02:16 crc kubenswrapper[4986]: I1203 14:02:16.873318 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_824d2a61b6100fd280a9a09bd63ce89af5b90b0d18a4f901e4852c2491zpwrf_b7926b50-9c30-44b4-ac3f-058edec517b9/util/0.log" Dec 03 14:02:16 crc kubenswrapper[4986]: I1203 14:02:16.883691 4986 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_824d2a61b6100fd280a9a09bd63ce89af5b90b0d18a4f901e4852c2491zpwrf_b7926b50-9c30-44b4-ac3f-058edec517b9/pull/0.log" Dec 03 14:02:16 crc kubenswrapper[4986]: I1203 14:02:16.921792 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_824d2a61b6100fd280a9a09bd63ce89af5b90b0d18a4f901e4852c2491zpwrf_b7926b50-9c30-44b4-ac3f-058edec517b9/pull/0.log" Dec 03 14:02:17 crc kubenswrapper[4986]: I1203 14:02:17.173347 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_824d2a61b6100fd280a9a09bd63ce89af5b90b0d18a4f901e4852c2491zpwrf_b7926b50-9c30-44b4-ac3f-058edec517b9/extract/0.log" Dec 03 14:02:17 crc kubenswrapper[4986]: I1203 14:02:17.195467 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_824d2a61b6100fd280a9a09bd63ce89af5b90b0d18a4f901e4852c2491zpwrf_b7926b50-9c30-44b4-ac3f-058edec517b9/util/0.log" Dec 03 14:02:17 crc kubenswrapper[4986]: I1203 14:02:17.205497 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_824d2a61b6100fd280a9a09bd63ce89af5b90b0d18a4f901e4852c2491zpwrf_b7926b50-9c30-44b4-ac3f-058edec517b9/pull/0.log" Dec 03 14:02:17 crc kubenswrapper[4986]: I1203 14:02:17.388874 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-8z7gr_69fed752-e65d-4007-a731-3faee6335366/kube-rbac-proxy/0.log" Dec 03 14:02:17 crc kubenswrapper[4986]: I1203 14:02:17.397058 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-l48wq_f3328b2b-d4e4-4b39-a949-bfd1463596f0/kube-rbac-proxy/0.log" Dec 03 14:02:17 crc kubenswrapper[4986]: I1203 14:02:17.478183 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-8z7gr_69fed752-e65d-4007-a731-3faee6335366/manager/0.log" Dec 03 14:02:18 crc 
kubenswrapper[4986]: I1203 14:02:18.265940 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-l48wq_f3328b2b-d4e4-4b39-a949-bfd1463596f0/manager/0.log" Dec 03 14:02:18 crc kubenswrapper[4986]: I1203 14:02:18.314648 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-fw7nw_924573df-b6fe-4d17-add4-376f76084fab/manager/0.log" Dec 03 14:02:18 crc kubenswrapper[4986]: I1203 14:02:18.343694 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-fw7nw_924573df-b6fe-4d17-add4-376f76084fab/kube-rbac-proxy/0.log" Dec 03 14:02:18 crc kubenswrapper[4986]: I1203 14:02:18.490568 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987cd8cd-5vc5p_8af63121-8727-4c23-b872-554fe679fc2f/kube-rbac-proxy/0.log" Dec 03 14:02:18 crc kubenswrapper[4986]: I1203 14:02:18.613220 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987cd8cd-5vc5p_8af63121-8727-4c23-b872-554fe679fc2f/manager/0.log" Dec 03 14:02:18 crc kubenswrapper[4986]: I1203 14:02:18.696798 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-nnhsz_f6268841-12af-4fa7-a9ab-54927e3256cf/kube-rbac-proxy/0.log" Dec 03 14:02:18 crc kubenswrapper[4986]: I1203 14:02:18.772023 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-nnhsz_f6268841-12af-4fa7-a9ab-54927e3256cf/manager/0.log" Dec 03 14:02:18 crc kubenswrapper[4986]: I1203 14:02:18.831878 4986 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-h49tf_f2be2f63-f6d7-425a-8ce1-d2bc205e24f0/kube-rbac-proxy/0.log" Dec 03 14:02:18 crc kubenswrapper[4986]: I1203 14:02:18.901251 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-h49tf_f2be2f63-f6d7-425a-8ce1-d2bc205e24f0/manager/0.log" Dec 03 14:02:19 crc kubenswrapper[4986]: I1203 14:02:19.120108 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-cgjf2_b47ead63-1562-466f-887b-54c155983ebf/kube-rbac-proxy/0.log" Dec 03 14:02:19 crc kubenswrapper[4986]: I1203 14:02:19.195519 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-cgjf2_b47ead63-1562-466f-887b-54c155983ebf/manager/0.log" Dec 03 14:02:19 crc kubenswrapper[4986]: I1203 14:02:19.250074 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-jgxqz_ac6af48b-36c2-427c-93ad-090cc34434f7/kube-rbac-proxy/0.log" Dec 03 14:02:19 crc kubenswrapper[4986]: I1203 14:02:19.376140 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-jgxqz_ac6af48b-36c2-427c-93ad-090cc34434f7/manager/0.log" Dec 03 14:02:19 crc kubenswrapper[4986]: I1203 14:02:19.415701 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-bm5lk_29ac6999-88ff-472f-a03e-0b95f1042d38/kube-rbac-proxy/0.log" Dec 03 14:02:20 crc kubenswrapper[4986]: I1203 14:02:20.124500 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7c79b5df47-h5sqr_f01db271-4787-4af7-b37b-5ba6e4e2e5b7/kube-rbac-proxy/0.log" Dec 03 14:02:20 crc kubenswrapper[4986]: I1203 
14:02:20.211645 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7c79b5df47-h5sqr_f01db271-4787-4af7-b37b-5ba6e4e2e5b7/manager/0.log" Dec 03 14:02:20 crc kubenswrapper[4986]: I1203 14:02:20.254497 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-bm5lk_29ac6999-88ff-472f-a03e-0b95f1042d38/manager/0.log" Dec 03 14:02:20 crc kubenswrapper[4986]: I1203 14:02:20.386186 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-bkf5w_3d88c3a6-1643-4fff-acbe-2327b9878103/kube-rbac-proxy/0.log" Dec 03 14:02:20 crc kubenswrapper[4986]: I1203 14:02:20.505096 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-bkf5w_3d88c3a6-1643-4fff-acbe-2327b9878103/manager/0.log" Dec 03 14:02:20 crc kubenswrapper[4986]: I1203 14:02:20.529932 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-blk7z_1ecc0034-a740-410d-a135-6b65d34ce64d/kube-rbac-proxy/0.log" Dec 03 14:02:20 crc kubenswrapper[4986]: I1203 14:02:20.651205 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-blk7z_1ecc0034-a740-410d-a135-6b65d34ce64d/manager/0.log" Dec 03 14:02:20 crc kubenswrapper[4986]: I1203 14:02:20.720954 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-7t7xl_d601bb24-2bd9-478a-96d1-ed2001bd53b6/kube-rbac-proxy/0.log" Dec 03 14:02:20 crc kubenswrapper[4986]: I1203 14:02:20.821362 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-xcs9s_be530205-b10b-4d4b-9fa3-4d9d0548054c/kube-rbac-proxy/0.log" 
Dec 03 14:02:20 crc kubenswrapper[4986]: I1203 14:02:20.823568 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-7t7xl_d601bb24-2bd9-478a-96d1-ed2001bd53b6/manager/0.log" Dec 03 14:02:20 crc kubenswrapper[4986]: I1203 14:02:20.894422 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-xcs9s_be530205-b10b-4d4b-9fa3-4d9d0548054c/manager/0.log" Dec 03 14:02:20 crc kubenswrapper[4986]: I1203 14:02:20.971443 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4fw9mm_400b4a35-c3f1-409e-83fe-019ff145c65a/kube-rbac-proxy/0.log" Dec 03 14:02:21 crc kubenswrapper[4986]: I1203 14:02:21.009602 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4fw9mm_400b4a35-c3f1-409e-83fe-019ff145c65a/manager/0.log" Dec 03 14:02:21 crc kubenswrapper[4986]: I1203 14:02:21.258668 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-rxgnt_dbe624e6-2210-4ead-ac45-77704177e0a4/registry-server/0.log" Dec 03 14:02:21 crc kubenswrapper[4986]: I1203 14:02:21.368323 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-64576b8bc7-w9775_bbe2108e-e5e6-4482-91b9-148932254640/operator/0.log" Dec 03 14:02:21 crc kubenswrapper[4986]: I1203 14:02:21.491737 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-lvpbh_42fe5051-78e1-45ab-9766-dbd119c4e060/kube-rbac-proxy/0.log" Dec 03 14:02:21 crc kubenswrapper[4986]: I1203 14:02:21.591244 4986 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-lvpbh_42fe5051-78e1-45ab-9766-dbd119c4e060/manager/0.log" Dec 03 14:02:21 crc kubenswrapper[4986]: I1203 14:02:21.604617 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-j4bg2_c6bc4b54-a3be-4a2c-813e-fa19eea0dbe8/kube-rbac-proxy/0.log" Dec 03 14:02:21 crc kubenswrapper[4986]: I1203 14:02:21.817112 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-j4bg2_c6bc4b54-a3be-4a2c-813e-fa19eea0dbe8/manager/0.log" Dec 03 14:02:21 crc kubenswrapper[4986]: I1203 14:02:21.840265 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-9ghmd_b9df4fcc-97aa-4a32-acbf-25f42addf8cc/operator/0.log" Dec 03 14:02:22 crc kubenswrapper[4986]: I1203 14:02:22.056361 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-c6wk7_a5090dbe-8e6f-4865-92a3-28720422db9f/kube-rbac-proxy/0.log" Dec 03 14:02:22 crc kubenswrapper[4986]: I1203 14:02:22.137843 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-726wj_a7413edd-cf2a-4756-b6b7-afe4e4e42fe6/kube-rbac-proxy/0.log" Dec 03 14:02:22 crc kubenswrapper[4986]: I1203 14:02:22.157921 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-c6wk7_a5090dbe-8e6f-4865-92a3-28720422db9f/manager/0.log" Dec 03 14:02:22 crc kubenswrapper[4986]: I1203 14:02:22.329924 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5cd9cc65cb-tjrw7_cd923320-06e2-4933-bc26-a4c947ab732b/manager/0.log" Dec 03 14:02:22 crc kubenswrapper[4986]: I1203 14:02:22.347861 4986 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-726wj_a7413edd-cf2a-4756-b6b7-afe4e4e42fe6/manager/0.log" Dec 03 14:02:22 crc kubenswrapper[4986]: I1203 14:02:22.350107 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-bg4vv_276259ca-95c1-41c2-803f-b82904067552/kube-rbac-proxy/0.log" Dec 03 14:02:22 crc kubenswrapper[4986]: I1203 14:02:22.418060 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-bg4vv_276259ca-95c1-41c2-803f-b82904067552/manager/0.log" Dec 03 14:02:22 crc kubenswrapper[4986]: I1203 14:02:22.555796 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-ph2qf_2d441496-72d9-462b-aea6-e2588499fbf0/manager/0.log" Dec 03 14:02:22 crc kubenswrapper[4986]: I1203 14:02:22.564489 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-ph2qf_2d441496-72d9-462b-aea6-e2588499fbf0/kube-rbac-proxy/0.log" Dec 03 14:02:43 crc kubenswrapper[4986]: I1203 14:02:43.336307 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-s6wq8_05a3b920-eb04-4864-81ac-924ba7c63d4e/control-plane-machine-set-operator/0.log" Dec 03 14:02:43 crc kubenswrapper[4986]: I1203 14:02:43.487002 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-xn84j_2588cb3b-8139-4529-a6e1-c57532afdfa7/kube-rbac-proxy/0.log" Dec 03 14:02:43 crc kubenswrapper[4986]: I1203 14:02:43.508928 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-xn84j_2588cb3b-8139-4529-a6e1-c57532afdfa7/machine-api-operator/0.log" Dec 03 14:02:57 
crc kubenswrapper[4986]: I1203 14:02:57.528463 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-dzxc8_f4e3f3b7-bc75-4d36-9278-773bdf1109df/cert-manager-cainjector/0.log" Dec 03 14:02:57 crc kubenswrapper[4986]: I1203 14:02:57.691560 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-ftncc_c0dbd0c1-fbde-463c-917f-d7d101f6c6e8/cert-manager-controller/0.log" Dec 03 14:02:57 crc kubenswrapper[4986]: I1203 14:02:57.763104 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-2wgk2_e54cc3fa-08d8-433f-9db5-bcff8c8e43fe/cert-manager-webhook/0.log" Dec 03 14:03:11 crc kubenswrapper[4986]: I1203 14:03:11.080346 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-8lb6z_c4ff9001-bea2-41b9-8820-0c46e15b2fbb/nmstate-console-plugin/0.log" Dec 03 14:03:11 crc kubenswrapper[4986]: I1203 14:03:11.222263 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-rgwgv_bd89b324-ae85-4e4e-b40b-a76a7ae8e498/nmstate-handler/0.log" Dec 03 14:03:11 crc kubenswrapper[4986]: I1203 14:03:11.262759 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-ll678_cb8d169c-9c96-403d-9f6b-357dd8ccc78a/kube-rbac-proxy/0.log" Dec 03 14:03:11 crc kubenswrapper[4986]: I1203 14:03:11.324742 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-ll678_cb8d169c-9c96-403d-9f6b-357dd8ccc78a/nmstate-metrics/0.log" Dec 03 14:03:11 crc kubenswrapper[4986]: I1203 14:03:11.451340 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-zznsq_eed2e928-3f77-4b48-8f9d-9cd923d4f708/nmstate-operator/0.log" Dec 03 14:03:11 crc kubenswrapper[4986]: I1203 14:03:11.563692 4986 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-zztq9_c233dd25-afb2-4ee7-b907-c79d08e02af6/nmstate-webhook/0.log" Dec 03 14:03:25 crc kubenswrapper[4986]: I1203 14:03:25.592551 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-qshgg_0e60a343-90aa-4c8b-a745-020b111c0b76/kube-rbac-proxy/0.log" Dec 03 14:03:25 crc kubenswrapper[4986]: I1203 14:03:25.783560 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-qshgg_0e60a343-90aa-4c8b-a745-020b111c0b76/controller/0.log" Dec 03 14:03:25 crc kubenswrapper[4986]: I1203 14:03:25.821572 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-bvs5x_56c900c4-e165-4ada-a70f-3ab4f267441d/frr-k8s-webhook-server/0.log" Dec 03 14:03:25 crc kubenswrapper[4986]: I1203 14:03:25.943444 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zv59f_2db2fcc5-1b0c-48df-a5e9-321d28b4efb3/cp-frr-files/0.log" Dec 03 14:03:26 crc kubenswrapper[4986]: I1203 14:03:26.132567 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zv59f_2db2fcc5-1b0c-48df-a5e9-321d28b4efb3/cp-frr-files/0.log" Dec 03 14:03:26 crc kubenswrapper[4986]: I1203 14:03:26.187556 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zv59f_2db2fcc5-1b0c-48df-a5e9-321d28b4efb3/cp-reloader/0.log" Dec 03 14:03:26 crc kubenswrapper[4986]: I1203 14:03:26.191041 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zv59f_2db2fcc5-1b0c-48df-a5e9-321d28b4efb3/cp-reloader/0.log" Dec 03 14:03:26 crc kubenswrapper[4986]: I1203 14:03:26.222696 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zv59f_2db2fcc5-1b0c-48df-a5e9-321d28b4efb3/cp-metrics/0.log" Dec 03 14:03:26 crc kubenswrapper[4986]: I1203 14:03:26.457236 4986 log.go:25] "Finished 
parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zv59f_2db2fcc5-1b0c-48df-a5e9-321d28b4efb3/cp-reloader/0.log" Dec 03 14:03:26 crc kubenswrapper[4986]: I1203 14:03:26.461576 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zv59f_2db2fcc5-1b0c-48df-a5e9-321d28b4efb3/cp-metrics/0.log" Dec 03 14:03:26 crc kubenswrapper[4986]: I1203 14:03:26.461770 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zv59f_2db2fcc5-1b0c-48df-a5e9-321d28b4efb3/cp-metrics/0.log" Dec 03 14:03:26 crc kubenswrapper[4986]: I1203 14:03:26.465373 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zv59f_2db2fcc5-1b0c-48df-a5e9-321d28b4efb3/cp-frr-files/0.log" Dec 03 14:03:26 crc kubenswrapper[4986]: I1203 14:03:26.663270 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zv59f_2db2fcc5-1b0c-48df-a5e9-321d28b4efb3/cp-reloader/0.log" Dec 03 14:03:26 crc kubenswrapper[4986]: I1203 14:03:26.674724 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zv59f_2db2fcc5-1b0c-48df-a5e9-321d28b4efb3/cp-metrics/0.log" Dec 03 14:03:26 crc kubenswrapper[4986]: I1203 14:03:26.704191 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zv59f_2db2fcc5-1b0c-48df-a5e9-321d28b4efb3/cp-frr-files/0.log" Dec 03 14:03:26 crc kubenswrapper[4986]: I1203 14:03:26.705359 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zv59f_2db2fcc5-1b0c-48df-a5e9-321d28b4efb3/controller/0.log" Dec 03 14:03:26 crc kubenswrapper[4986]: I1203 14:03:26.858818 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zv59f_2db2fcc5-1b0c-48df-a5e9-321d28b4efb3/frr-metrics/0.log" Dec 03 14:03:26 crc kubenswrapper[4986]: I1203 14:03:26.894966 4986 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-zv59f_2db2fcc5-1b0c-48df-a5e9-321d28b4efb3/kube-rbac-proxy/0.log" Dec 03 14:03:26 crc kubenswrapper[4986]: I1203 14:03:26.921444 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zv59f_2db2fcc5-1b0c-48df-a5e9-321d28b4efb3/kube-rbac-proxy-frr/0.log" Dec 03 14:03:27 crc kubenswrapper[4986]: I1203 14:03:27.087180 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zv59f_2db2fcc5-1b0c-48df-a5e9-321d28b4efb3/reloader/0.log" Dec 03 14:03:27 crc kubenswrapper[4986]: I1203 14:03:27.177984 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5b7fcdf964-xx85j_c6eca6fc-6cf0-4862-b2a8-caa0c4ec2c8a/manager/0.log" Dec 03 14:03:27 crc kubenswrapper[4986]: I1203 14:03:27.428667 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-ddbdbd445-x6ccv_c786d2ef-19b1-4e12-a803-3cf1c459f6a7/webhook-server/0.log" Dec 03 14:03:27 crc kubenswrapper[4986]: I1203 14:03:27.566899 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-d7x7x_559a743d-b60c-4a89-b256-4842e829043c/kube-rbac-proxy/0.log" Dec 03 14:03:28 crc kubenswrapper[4986]: I1203 14:03:28.093131 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-d7x7x_559a743d-b60c-4a89-b256-4842e829043c/speaker/0.log" Dec 03 14:03:28 crc kubenswrapper[4986]: I1203 14:03:28.338397 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zv59f_2db2fcc5-1b0c-48df-a5e9-321d28b4efb3/frr/0.log" Dec 03 14:03:41 crc kubenswrapper[4986]: I1203 14:03:41.308640 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flxqdt_3e31ff03-19d4-45b9-a2a2-c80add3e095a/util/0.log" Dec 03 14:03:41 crc kubenswrapper[4986]: I1203 14:03:41.458211 4986 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flxqdt_3e31ff03-19d4-45b9-a2a2-c80add3e095a/util/0.log" Dec 03 14:03:41 crc kubenswrapper[4986]: I1203 14:03:41.518795 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flxqdt_3e31ff03-19d4-45b9-a2a2-c80add3e095a/pull/0.log" Dec 03 14:03:41 crc kubenswrapper[4986]: I1203 14:03:41.518942 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flxqdt_3e31ff03-19d4-45b9-a2a2-c80add3e095a/pull/0.log" Dec 03 14:03:41 crc kubenswrapper[4986]: I1203 14:03:41.659998 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flxqdt_3e31ff03-19d4-45b9-a2a2-c80add3e095a/pull/0.log" Dec 03 14:03:41 crc kubenswrapper[4986]: I1203 14:03:41.689892 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flxqdt_3e31ff03-19d4-45b9-a2a2-c80add3e095a/util/0.log" Dec 03 14:03:41 crc kubenswrapper[4986]: I1203 14:03:41.780802 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flxqdt_3e31ff03-19d4-45b9-a2a2-c80add3e095a/extract/0.log" Dec 03 14:03:41 crc kubenswrapper[4986]: I1203 14:03:41.880422 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lg6jv_8ec4e0a9-2294-4df2-b849-9c32dba275f9/util/0.log" Dec 03 14:03:42 crc kubenswrapper[4986]: I1203 14:03:42.018371 4986 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lg6jv_8ec4e0a9-2294-4df2-b849-9c32dba275f9/pull/0.log" Dec 03 14:03:42 crc kubenswrapper[4986]: I1203 14:03:42.034757 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lg6jv_8ec4e0a9-2294-4df2-b849-9c32dba275f9/util/0.log" Dec 03 14:03:42 crc kubenswrapper[4986]: I1203 14:03:42.064784 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lg6jv_8ec4e0a9-2294-4df2-b849-9c32dba275f9/pull/0.log" Dec 03 14:03:42 crc kubenswrapper[4986]: I1203 14:03:42.228067 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lg6jv_8ec4e0a9-2294-4df2-b849-9c32dba275f9/pull/0.log" Dec 03 14:03:42 crc kubenswrapper[4986]: I1203 14:03:42.234043 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lg6jv_8ec4e0a9-2294-4df2-b849-9c32dba275f9/extract/0.log" Dec 03 14:03:42 crc kubenswrapper[4986]: I1203 14:03:42.238338 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lg6jv_8ec4e0a9-2294-4df2-b849-9c32dba275f9/util/0.log" Dec 03 14:03:42 crc kubenswrapper[4986]: I1203 14:03:42.413432 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x6bw7_c13ffdcc-9a64-45ba-8fec-96d700c3a387/extract-utilities/0.log" Dec 03 14:03:42 crc kubenswrapper[4986]: I1203 14:03:42.594417 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x6bw7_c13ffdcc-9a64-45ba-8fec-96d700c3a387/extract-content/0.log" Dec 03 14:03:42 crc kubenswrapper[4986]: I1203 
14:03:42.599168 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x6bw7_c13ffdcc-9a64-45ba-8fec-96d700c3a387/extract-content/0.log" Dec 03 14:03:42 crc kubenswrapper[4986]: I1203 14:03:42.617543 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x6bw7_c13ffdcc-9a64-45ba-8fec-96d700c3a387/extract-utilities/0.log" Dec 03 14:03:42 crc kubenswrapper[4986]: I1203 14:03:42.765543 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x6bw7_c13ffdcc-9a64-45ba-8fec-96d700c3a387/extract-utilities/0.log" Dec 03 14:03:42 crc kubenswrapper[4986]: I1203 14:03:42.768179 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x6bw7_c13ffdcc-9a64-45ba-8fec-96d700c3a387/extract-content/0.log" Dec 03 14:03:42 crc kubenswrapper[4986]: I1203 14:03:42.993093 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cwp4r_d5ee5a5d-5ed3-46fa-acee-0fa1d5f5048b/extract-utilities/0.log" Dec 03 14:03:43 crc kubenswrapper[4986]: I1203 14:03:43.229973 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cwp4r_d5ee5a5d-5ed3-46fa-acee-0fa1d5f5048b/extract-content/0.log" Dec 03 14:03:43 crc kubenswrapper[4986]: I1203 14:03:43.248122 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x6bw7_c13ffdcc-9a64-45ba-8fec-96d700c3a387/registry-server/0.log" Dec 03 14:03:43 crc kubenswrapper[4986]: I1203 14:03:43.254467 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cwp4r_d5ee5a5d-5ed3-46fa-acee-0fa1d5f5048b/extract-utilities/0.log" Dec 03 14:03:43 crc kubenswrapper[4986]: I1203 14:03:43.316428 4986 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-cwp4r_d5ee5a5d-5ed3-46fa-acee-0fa1d5f5048b/extract-content/0.log" Dec 03 14:03:43 crc kubenswrapper[4986]: I1203 14:03:43.478340 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cwp4r_d5ee5a5d-5ed3-46fa-acee-0fa1d5f5048b/extract-utilities/0.log" Dec 03 14:03:43 crc kubenswrapper[4986]: I1203 14:03:43.478345 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cwp4r_d5ee5a5d-5ed3-46fa-acee-0fa1d5f5048b/extract-content/0.log" Dec 03 14:03:43 crc kubenswrapper[4986]: I1203 14:03:43.783473 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-7lnmw_4083ec9d-ae1e-4b92-955d-7b2c3ee874c7/marketplace-operator/0.log" Dec 03 14:03:43 crc kubenswrapper[4986]: I1203 14:03:43.810747 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-84w5m_57e3297d-ec2e-4cc1-8939-4d2dd78c6a8e/extract-utilities/0.log" Dec 03 14:03:44 crc kubenswrapper[4986]: I1203 14:03:44.052942 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-84w5m_57e3297d-ec2e-4cc1-8939-4d2dd78c6a8e/extract-content/0.log" Dec 03 14:03:44 crc kubenswrapper[4986]: I1203 14:03:44.091717 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cwp4r_d5ee5a5d-5ed3-46fa-acee-0fa1d5f5048b/registry-server/0.log" Dec 03 14:03:44 crc kubenswrapper[4986]: I1203 14:03:44.101060 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-84w5m_57e3297d-ec2e-4cc1-8939-4d2dd78c6a8e/extract-utilities/0.log" Dec 03 14:03:44 crc kubenswrapper[4986]: I1203 14:03:44.113545 4986 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-84w5m_57e3297d-ec2e-4cc1-8939-4d2dd78c6a8e/extract-content/0.log" Dec 03 14:03:44 crc kubenswrapper[4986]: I1203 14:03:44.247754 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-84w5m_57e3297d-ec2e-4cc1-8939-4d2dd78c6a8e/extract-utilities/0.log" Dec 03 14:03:44 crc kubenswrapper[4986]: I1203 14:03:44.273757 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-84w5m_57e3297d-ec2e-4cc1-8939-4d2dd78c6a8e/extract-content/0.log" Dec 03 14:03:44 crc kubenswrapper[4986]: I1203 14:03:44.433957 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-84w5m_57e3297d-ec2e-4cc1-8939-4d2dd78c6a8e/registry-server/0.log" Dec 03 14:03:44 crc kubenswrapper[4986]: I1203 14:03:44.500769 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5jd64_d90ad1cf-edbe-42b5-84c6-0c0568fafd43/extract-utilities/0.log" Dec 03 14:03:44 crc kubenswrapper[4986]: I1203 14:03:44.643053 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5jd64_d90ad1cf-edbe-42b5-84c6-0c0568fafd43/extract-utilities/0.log" Dec 03 14:03:44 crc kubenswrapper[4986]: I1203 14:03:44.651373 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5jd64_d90ad1cf-edbe-42b5-84c6-0c0568fafd43/extract-content/0.log" Dec 03 14:03:44 crc kubenswrapper[4986]: I1203 14:03:44.670492 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5jd64_d90ad1cf-edbe-42b5-84c6-0c0568fafd43/extract-content/0.log" Dec 03 14:03:44 crc kubenswrapper[4986]: I1203 14:03:44.871697 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5jd64_d90ad1cf-edbe-42b5-84c6-0c0568fafd43/extract-content/0.log" Dec 
03 14:03:44 crc kubenswrapper[4986]: I1203 14:03:44.872612 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5jd64_d90ad1cf-edbe-42b5-84c6-0c0568fafd43/extract-utilities/0.log" Dec 03 14:03:45 crc kubenswrapper[4986]: I1203 14:03:45.286844 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5jd64_d90ad1cf-edbe-42b5-84c6-0c0568fafd43/registry-server/0.log" Dec 03 14:04:22 crc kubenswrapper[4986]: I1203 14:04:22.072984 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4lcht"] Dec 03 14:04:22 crc kubenswrapper[4986]: E1203 14:04:22.074008 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04ee8e0b-f508-42ae-aad3-3a7c1001e8ea" containerName="container-00" Dec 03 14:04:22 crc kubenswrapper[4986]: I1203 14:04:22.074023 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="04ee8e0b-f508-42ae-aad3-3a7c1001e8ea" containerName="container-00" Dec 03 14:04:22 crc kubenswrapper[4986]: I1203 14:04:22.074255 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="04ee8e0b-f508-42ae-aad3-3a7c1001e8ea" containerName="container-00" Dec 03 14:04:22 crc kubenswrapper[4986]: I1203 14:04:22.076251 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4lcht" Dec 03 14:04:22 crc kubenswrapper[4986]: I1203 14:04:22.103429 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4lcht"] Dec 03 14:04:22 crc kubenswrapper[4986]: I1203 14:04:22.183041 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e192b4a1-251e-44aa-a86c-4ad897c23d93-utilities\") pod \"redhat-marketplace-4lcht\" (UID: \"e192b4a1-251e-44aa-a86c-4ad897c23d93\") " pod="openshift-marketplace/redhat-marketplace-4lcht" Dec 03 14:04:22 crc kubenswrapper[4986]: I1203 14:04:22.183098 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-682kc\" (UniqueName: \"kubernetes.io/projected/e192b4a1-251e-44aa-a86c-4ad897c23d93-kube-api-access-682kc\") pod \"redhat-marketplace-4lcht\" (UID: \"e192b4a1-251e-44aa-a86c-4ad897c23d93\") " pod="openshift-marketplace/redhat-marketplace-4lcht" Dec 03 14:04:22 crc kubenswrapper[4986]: I1203 14:04:22.183256 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e192b4a1-251e-44aa-a86c-4ad897c23d93-catalog-content\") pod \"redhat-marketplace-4lcht\" (UID: \"e192b4a1-251e-44aa-a86c-4ad897c23d93\") " pod="openshift-marketplace/redhat-marketplace-4lcht" Dec 03 14:04:22 crc kubenswrapper[4986]: I1203 14:04:22.285055 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-682kc\" (UniqueName: \"kubernetes.io/projected/e192b4a1-251e-44aa-a86c-4ad897c23d93-kube-api-access-682kc\") pod \"redhat-marketplace-4lcht\" (UID: \"e192b4a1-251e-44aa-a86c-4ad897c23d93\") " pod="openshift-marketplace/redhat-marketplace-4lcht" Dec 03 14:04:22 crc kubenswrapper[4986]: I1203 14:04:22.285138 4986 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e192b4a1-251e-44aa-a86c-4ad897c23d93-catalog-content\") pod \"redhat-marketplace-4lcht\" (UID: \"e192b4a1-251e-44aa-a86c-4ad897c23d93\") " pod="openshift-marketplace/redhat-marketplace-4lcht" Dec 03 14:04:22 crc kubenswrapper[4986]: I1203 14:04:22.285367 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e192b4a1-251e-44aa-a86c-4ad897c23d93-utilities\") pod \"redhat-marketplace-4lcht\" (UID: \"e192b4a1-251e-44aa-a86c-4ad897c23d93\") " pod="openshift-marketplace/redhat-marketplace-4lcht" Dec 03 14:04:22 crc kubenswrapper[4986]: I1203 14:04:22.285883 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e192b4a1-251e-44aa-a86c-4ad897c23d93-utilities\") pod \"redhat-marketplace-4lcht\" (UID: \"e192b4a1-251e-44aa-a86c-4ad897c23d93\") " pod="openshift-marketplace/redhat-marketplace-4lcht" Dec 03 14:04:22 crc kubenswrapper[4986]: I1203 14:04:22.286519 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e192b4a1-251e-44aa-a86c-4ad897c23d93-catalog-content\") pod \"redhat-marketplace-4lcht\" (UID: \"e192b4a1-251e-44aa-a86c-4ad897c23d93\") " pod="openshift-marketplace/redhat-marketplace-4lcht" Dec 03 14:04:22 crc kubenswrapper[4986]: I1203 14:04:22.308214 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-682kc\" (UniqueName: \"kubernetes.io/projected/e192b4a1-251e-44aa-a86c-4ad897c23d93-kube-api-access-682kc\") pod \"redhat-marketplace-4lcht\" (UID: \"e192b4a1-251e-44aa-a86c-4ad897c23d93\") " pod="openshift-marketplace/redhat-marketplace-4lcht" Dec 03 14:04:22 crc kubenswrapper[4986]: I1203 14:04:22.407656 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4lcht" Dec 03 14:04:22 crc kubenswrapper[4986]: I1203 14:04:22.903539 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4lcht"] Dec 03 14:04:23 crc kubenswrapper[4986]: I1203 14:04:23.949945 4986 generic.go:334] "Generic (PLEG): container finished" podID="e192b4a1-251e-44aa-a86c-4ad897c23d93" containerID="3bc180b788e9b7e27df658473405851904615befdf1fd26493eb77af9eeefaef" exitCode=0 Dec 03 14:04:23 crc kubenswrapper[4986]: I1203 14:04:23.950359 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4lcht" event={"ID":"e192b4a1-251e-44aa-a86c-4ad897c23d93","Type":"ContainerDied","Data":"3bc180b788e9b7e27df658473405851904615befdf1fd26493eb77af9eeefaef"} Dec 03 14:04:23 crc kubenswrapper[4986]: I1203 14:04:23.950404 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4lcht" event={"ID":"e192b4a1-251e-44aa-a86c-4ad897c23d93","Type":"ContainerStarted","Data":"fd2520854d3e7bc8865cdbdc9a4c4bb30cbb05f23011425260321c7df90cb64f"} Dec 03 14:04:24 crc kubenswrapper[4986]: I1203 14:04:24.961055 4986 generic.go:334] "Generic (PLEG): container finished" podID="e192b4a1-251e-44aa-a86c-4ad897c23d93" containerID="c56282f2e9ae11f8387de4564f026f98e8f30688ba68facab16dc282e3c70cdd" exitCode=0 Dec 03 14:04:24 crc kubenswrapper[4986]: I1203 14:04:24.961727 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4lcht" event={"ID":"e192b4a1-251e-44aa-a86c-4ad897c23d93","Type":"ContainerDied","Data":"c56282f2e9ae11f8387de4564f026f98e8f30688ba68facab16dc282e3c70cdd"} Dec 03 14:04:25 crc kubenswrapper[4986]: I1203 14:04:25.977802 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4lcht" 
event={"ID":"e192b4a1-251e-44aa-a86c-4ad897c23d93","Type":"ContainerStarted","Data":"65ab19795c5b540f353e7b7f1d671c603d064163c08df1b887e389e5a2e62dcd"} Dec 03 14:04:26 crc kubenswrapper[4986]: I1203 14:04:26.014577 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4lcht" podStartSLOduration=2.579353285 podStartE2EDuration="4.014554616s" podCreationTimestamp="2025-12-03 14:04:22 +0000 UTC" firstStartedPulling="2025-12-03 14:04:23.95267699 +0000 UTC m=+4123.419108181" lastFinishedPulling="2025-12-03 14:04:25.387878321 +0000 UTC m=+4124.854309512" observedRunningTime="2025-12-03 14:04:26.004085492 +0000 UTC m=+4125.470516693" watchObservedRunningTime="2025-12-03 14:04:26.014554616 +0000 UTC m=+4125.480985807" Dec 03 14:04:32 crc kubenswrapper[4986]: I1203 14:04:32.409552 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4lcht" Dec 03 14:04:32 crc kubenswrapper[4986]: I1203 14:04:32.410238 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4lcht" Dec 03 14:04:32 crc kubenswrapper[4986]: I1203 14:04:32.463062 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4lcht" Dec 03 14:04:33 crc kubenswrapper[4986]: I1203 14:04:33.097686 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4lcht" Dec 03 14:04:33 crc kubenswrapper[4986]: I1203 14:04:33.156206 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4lcht"] Dec 03 14:04:33 crc kubenswrapper[4986]: I1203 14:04:33.491546 4986 patch_prober.go:28] interesting pod/machine-config-daemon-xggpj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 14:04:33 crc kubenswrapper[4986]: I1203 14:04:33.491619 4986 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 14:04:35 crc kubenswrapper[4986]: I1203 14:04:35.063379 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4lcht" podUID="e192b4a1-251e-44aa-a86c-4ad897c23d93" containerName="registry-server" containerID="cri-o://65ab19795c5b540f353e7b7f1d671c603d064163c08df1b887e389e5a2e62dcd" gracePeriod=2 Dec 03 14:04:35 crc kubenswrapper[4986]: I1203 14:04:35.562520 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4lcht" Dec 03 14:04:35 crc kubenswrapper[4986]: I1203 14:04:35.671096 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-682kc\" (UniqueName: \"kubernetes.io/projected/e192b4a1-251e-44aa-a86c-4ad897c23d93-kube-api-access-682kc\") pod \"e192b4a1-251e-44aa-a86c-4ad897c23d93\" (UID: \"e192b4a1-251e-44aa-a86c-4ad897c23d93\") " Dec 03 14:04:35 crc kubenswrapper[4986]: I1203 14:04:35.671552 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e192b4a1-251e-44aa-a86c-4ad897c23d93-catalog-content\") pod \"e192b4a1-251e-44aa-a86c-4ad897c23d93\" (UID: \"e192b4a1-251e-44aa-a86c-4ad897c23d93\") " Dec 03 14:04:35 crc kubenswrapper[4986]: I1203 14:04:35.671649 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/e192b4a1-251e-44aa-a86c-4ad897c23d93-utilities\") pod \"e192b4a1-251e-44aa-a86c-4ad897c23d93\" (UID: \"e192b4a1-251e-44aa-a86c-4ad897c23d93\") " Dec 03 14:04:35 crc kubenswrapper[4986]: I1203 14:04:35.673491 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e192b4a1-251e-44aa-a86c-4ad897c23d93-utilities" (OuterVolumeSpecName: "utilities") pod "e192b4a1-251e-44aa-a86c-4ad897c23d93" (UID: "e192b4a1-251e-44aa-a86c-4ad897c23d93"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 14:04:35 crc kubenswrapper[4986]: I1203 14:04:35.680832 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e192b4a1-251e-44aa-a86c-4ad897c23d93-kube-api-access-682kc" (OuterVolumeSpecName: "kube-api-access-682kc") pod "e192b4a1-251e-44aa-a86c-4ad897c23d93" (UID: "e192b4a1-251e-44aa-a86c-4ad897c23d93"). InnerVolumeSpecName "kube-api-access-682kc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 14:04:35 crc kubenswrapper[4986]: I1203 14:04:35.690771 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e192b4a1-251e-44aa-a86c-4ad897c23d93-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e192b4a1-251e-44aa-a86c-4ad897c23d93" (UID: "e192b4a1-251e-44aa-a86c-4ad897c23d93"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 14:04:35 crc kubenswrapper[4986]: I1203 14:04:35.775516 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-682kc\" (UniqueName: \"kubernetes.io/projected/e192b4a1-251e-44aa-a86c-4ad897c23d93-kube-api-access-682kc\") on node \"crc\" DevicePath \"\"" Dec 03 14:04:35 crc kubenswrapper[4986]: I1203 14:04:35.775571 4986 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e192b4a1-251e-44aa-a86c-4ad897c23d93-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 14:04:35 crc kubenswrapper[4986]: I1203 14:04:35.775597 4986 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e192b4a1-251e-44aa-a86c-4ad897c23d93-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 14:04:36 crc kubenswrapper[4986]: I1203 14:04:36.076426 4986 generic.go:334] "Generic (PLEG): container finished" podID="e192b4a1-251e-44aa-a86c-4ad897c23d93" containerID="65ab19795c5b540f353e7b7f1d671c603d064163c08df1b887e389e5a2e62dcd" exitCode=0 Dec 03 14:04:36 crc kubenswrapper[4986]: I1203 14:04:36.076481 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4lcht" event={"ID":"e192b4a1-251e-44aa-a86c-4ad897c23d93","Type":"ContainerDied","Data":"65ab19795c5b540f353e7b7f1d671c603d064163c08df1b887e389e5a2e62dcd"} Dec 03 14:04:36 crc kubenswrapper[4986]: I1203 14:04:36.076539 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4lcht" event={"ID":"e192b4a1-251e-44aa-a86c-4ad897c23d93","Type":"ContainerDied","Data":"fd2520854d3e7bc8865cdbdc9a4c4bb30cbb05f23011425260321c7df90cb64f"} Dec 03 14:04:36 crc kubenswrapper[4986]: I1203 14:04:36.076562 4986 scope.go:117] "RemoveContainer" containerID="65ab19795c5b540f353e7b7f1d671c603d064163c08df1b887e389e5a2e62dcd" Dec 03 14:04:36 crc kubenswrapper[4986]: I1203 
14:04:36.076611 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4lcht" Dec 03 14:04:36 crc kubenswrapper[4986]: I1203 14:04:36.105707 4986 scope.go:117] "RemoveContainer" containerID="c56282f2e9ae11f8387de4564f026f98e8f30688ba68facab16dc282e3c70cdd" Dec 03 14:04:36 crc kubenswrapper[4986]: I1203 14:04:36.127997 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4lcht"] Dec 03 14:04:36 crc kubenswrapper[4986]: I1203 14:04:36.137735 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4lcht"] Dec 03 14:04:36 crc kubenswrapper[4986]: I1203 14:04:36.146601 4986 scope.go:117] "RemoveContainer" containerID="3bc180b788e9b7e27df658473405851904615befdf1fd26493eb77af9eeefaef" Dec 03 14:04:36 crc kubenswrapper[4986]: I1203 14:04:36.190920 4986 scope.go:117] "RemoveContainer" containerID="65ab19795c5b540f353e7b7f1d671c603d064163c08df1b887e389e5a2e62dcd" Dec 03 14:04:36 crc kubenswrapper[4986]: E1203 14:04:36.191592 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65ab19795c5b540f353e7b7f1d671c603d064163c08df1b887e389e5a2e62dcd\": container with ID starting with 65ab19795c5b540f353e7b7f1d671c603d064163c08df1b887e389e5a2e62dcd not found: ID does not exist" containerID="65ab19795c5b540f353e7b7f1d671c603d064163c08df1b887e389e5a2e62dcd" Dec 03 14:04:36 crc kubenswrapper[4986]: I1203 14:04:36.191644 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65ab19795c5b540f353e7b7f1d671c603d064163c08df1b887e389e5a2e62dcd"} err="failed to get container status \"65ab19795c5b540f353e7b7f1d671c603d064163c08df1b887e389e5a2e62dcd\": rpc error: code = NotFound desc = could not find container \"65ab19795c5b540f353e7b7f1d671c603d064163c08df1b887e389e5a2e62dcd\": container with ID starting with 
65ab19795c5b540f353e7b7f1d671c603d064163c08df1b887e389e5a2e62dcd not found: ID does not exist" Dec 03 14:04:36 crc kubenswrapper[4986]: I1203 14:04:36.191674 4986 scope.go:117] "RemoveContainer" containerID="c56282f2e9ae11f8387de4564f026f98e8f30688ba68facab16dc282e3c70cdd" Dec 03 14:04:36 crc kubenswrapper[4986]: E1203 14:04:36.192928 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c56282f2e9ae11f8387de4564f026f98e8f30688ba68facab16dc282e3c70cdd\": container with ID starting with c56282f2e9ae11f8387de4564f026f98e8f30688ba68facab16dc282e3c70cdd not found: ID does not exist" containerID="c56282f2e9ae11f8387de4564f026f98e8f30688ba68facab16dc282e3c70cdd" Dec 03 14:04:36 crc kubenswrapper[4986]: I1203 14:04:36.192986 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c56282f2e9ae11f8387de4564f026f98e8f30688ba68facab16dc282e3c70cdd"} err="failed to get container status \"c56282f2e9ae11f8387de4564f026f98e8f30688ba68facab16dc282e3c70cdd\": rpc error: code = NotFound desc = could not find container \"c56282f2e9ae11f8387de4564f026f98e8f30688ba68facab16dc282e3c70cdd\": container with ID starting with c56282f2e9ae11f8387de4564f026f98e8f30688ba68facab16dc282e3c70cdd not found: ID does not exist" Dec 03 14:04:36 crc kubenswrapper[4986]: I1203 14:04:36.193018 4986 scope.go:117] "RemoveContainer" containerID="3bc180b788e9b7e27df658473405851904615befdf1fd26493eb77af9eeefaef" Dec 03 14:04:36 crc kubenswrapper[4986]: E1203 14:04:36.193388 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bc180b788e9b7e27df658473405851904615befdf1fd26493eb77af9eeefaef\": container with ID starting with 3bc180b788e9b7e27df658473405851904615befdf1fd26493eb77af9eeefaef not found: ID does not exist" containerID="3bc180b788e9b7e27df658473405851904615befdf1fd26493eb77af9eeefaef" Dec 03 14:04:36 crc 
kubenswrapper[4986]: I1203 14:04:36.193423 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bc180b788e9b7e27df658473405851904615befdf1fd26493eb77af9eeefaef"} err="failed to get container status \"3bc180b788e9b7e27df658473405851904615befdf1fd26493eb77af9eeefaef\": rpc error: code = NotFound desc = could not find container \"3bc180b788e9b7e27df658473405851904615befdf1fd26493eb77af9eeefaef\": container with ID starting with 3bc180b788e9b7e27df658473405851904615befdf1fd26493eb77af9eeefaef not found: ID does not exist" Dec 03 14:04:36 crc kubenswrapper[4986]: I1203 14:04:36.963660 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e192b4a1-251e-44aa-a86c-4ad897c23d93" path="/var/lib/kubelet/pods/e192b4a1-251e-44aa-a86c-4ad897c23d93/volumes" Dec 03 14:05:03 crc kubenswrapper[4986]: I1203 14:05:03.491269 4986 patch_prober.go:28] interesting pod/machine-config-daemon-xggpj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 14:05:03 crc kubenswrapper[4986]: I1203 14:05:03.491777 4986 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 14:05:28 crc kubenswrapper[4986]: I1203 14:05:28.618835 4986 generic.go:334] "Generic (PLEG): container finished" podID="deff12eb-bb21-44aa-bd09-0b8401893ea4" containerID="429cac0520aa52a5fe9949f74e500cfd02526e983ce18467641ed01fe90f309b" exitCode=0 Dec 03 14:05:28 crc kubenswrapper[4986]: I1203 14:05:28.618955 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6lpvf/must-gather-jtw4t" 
event={"ID":"deff12eb-bb21-44aa-bd09-0b8401893ea4","Type":"ContainerDied","Data":"429cac0520aa52a5fe9949f74e500cfd02526e983ce18467641ed01fe90f309b"} Dec 03 14:05:28 crc kubenswrapper[4986]: I1203 14:05:28.621154 4986 scope.go:117] "RemoveContainer" containerID="429cac0520aa52a5fe9949f74e500cfd02526e983ce18467641ed01fe90f309b" Dec 03 14:05:28 crc kubenswrapper[4986]: I1203 14:05:28.970002 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-6lpvf_must-gather-jtw4t_deff12eb-bb21-44aa-bd09-0b8401893ea4/gather/0.log" Dec 03 14:05:33 crc kubenswrapper[4986]: I1203 14:05:33.491010 4986 patch_prober.go:28] interesting pod/machine-config-daemon-xggpj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 14:05:33 crc kubenswrapper[4986]: I1203 14:05:33.492396 4986 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 14:05:33 crc kubenswrapper[4986]: I1203 14:05:33.492462 4986 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" Dec 03 14:05:33 crc kubenswrapper[4986]: I1203 14:05:33.493250 4986 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"94098f17055dc83680391bf29240e7586ef8a8e160e65ff114312434843cab43"} pod="openshift-machine-config-operator/machine-config-daemon-xggpj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 14:05:33 crc kubenswrapper[4986]: I1203 
14:05:33.493339 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" containerID="cri-o://94098f17055dc83680391bf29240e7586ef8a8e160e65ff114312434843cab43" gracePeriod=600 Dec 03 14:05:33 crc kubenswrapper[4986]: I1203 14:05:33.670149 4986 generic.go:334] "Generic (PLEG): container finished" podID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerID="94098f17055dc83680391bf29240e7586ef8a8e160e65ff114312434843cab43" exitCode=0 Dec 03 14:05:33 crc kubenswrapper[4986]: I1203 14:05:33.670189 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" event={"ID":"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c","Type":"ContainerDied","Data":"94098f17055dc83680391bf29240e7586ef8a8e160e65ff114312434843cab43"} Dec 03 14:05:33 crc kubenswrapper[4986]: I1203 14:05:33.670233 4986 scope.go:117] "RemoveContainer" containerID="f4aa16a78e6dc7399cb3f0bba100cc3ca3d479a66dc5b2e6c12ee775b73c7cc4" Dec 03 14:05:34 crc kubenswrapper[4986]: I1203 14:05:34.680986 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" event={"ID":"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c","Type":"ContainerStarted","Data":"9908c5bc4e1f9c54ddf6781265a9653bd441c26c6cebabaf9a35e940adc79994"} Dec 03 14:05:36 crc kubenswrapper[4986]: I1203 14:05:36.156543 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-6lpvf/must-gather-jtw4t"] Dec 03 14:05:36 crc kubenswrapper[4986]: I1203 14:05:36.156989 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-6lpvf/must-gather-jtw4t" podUID="deff12eb-bb21-44aa-bd09-0b8401893ea4" containerName="copy" containerID="cri-o://b3a62ad32323604fbf6ab93f239ce5dc0481309e2f6e7eb63523489e3394988e" gracePeriod=2 Dec 03 14:05:36 
crc kubenswrapper[4986]: I1203 14:05:36.163675 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-6lpvf/must-gather-jtw4t"] Dec 03 14:05:36 crc kubenswrapper[4986]: I1203 14:05:36.587251 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-6lpvf_must-gather-jtw4t_deff12eb-bb21-44aa-bd09-0b8401893ea4/copy/0.log" Dec 03 14:05:36 crc kubenswrapper[4986]: I1203 14:05:36.588165 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6lpvf/must-gather-jtw4t" Dec 03 14:05:36 crc kubenswrapper[4986]: I1203 14:05:36.761395 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/deff12eb-bb21-44aa-bd09-0b8401893ea4-must-gather-output\") pod \"deff12eb-bb21-44aa-bd09-0b8401893ea4\" (UID: \"deff12eb-bb21-44aa-bd09-0b8401893ea4\") " Dec 03 14:05:36 crc kubenswrapper[4986]: I1203 14:05:36.761730 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9ppp\" (UniqueName: \"kubernetes.io/projected/deff12eb-bb21-44aa-bd09-0b8401893ea4-kube-api-access-n9ppp\") pod \"deff12eb-bb21-44aa-bd09-0b8401893ea4\" (UID: \"deff12eb-bb21-44aa-bd09-0b8401893ea4\") " Dec 03 14:05:36 crc kubenswrapper[4986]: I1203 14:05:36.768455 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/deff12eb-bb21-44aa-bd09-0b8401893ea4-kube-api-access-n9ppp" (OuterVolumeSpecName: "kube-api-access-n9ppp") pod "deff12eb-bb21-44aa-bd09-0b8401893ea4" (UID: "deff12eb-bb21-44aa-bd09-0b8401893ea4"). InnerVolumeSpecName "kube-api-access-n9ppp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 14:05:36 crc kubenswrapper[4986]: I1203 14:05:36.863614 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9ppp\" (UniqueName: \"kubernetes.io/projected/deff12eb-bb21-44aa-bd09-0b8401893ea4-kube-api-access-n9ppp\") on node \"crc\" DevicePath \"\"" Dec 03 14:05:36 crc kubenswrapper[4986]: I1203 14:05:36.901387 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/deff12eb-bb21-44aa-bd09-0b8401893ea4-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "deff12eb-bb21-44aa-bd09-0b8401893ea4" (UID: "deff12eb-bb21-44aa-bd09-0b8401893ea4"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 14:05:36 crc kubenswrapper[4986]: I1203 14:05:36.957725 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="deff12eb-bb21-44aa-bd09-0b8401893ea4" path="/var/lib/kubelet/pods/deff12eb-bb21-44aa-bd09-0b8401893ea4/volumes" Dec 03 14:05:36 crc kubenswrapper[4986]: I1203 14:05:36.965243 4986 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/deff12eb-bb21-44aa-bd09-0b8401893ea4-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 03 14:05:36 crc kubenswrapper[4986]: I1203 14:05:36.984166 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-6lpvf_must-gather-jtw4t_deff12eb-bb21-44aa-bd09-0b8401893ea4/copy/0.log" Dec 03 14:05:36 crc kubenswrapper[4986]: I1203 14:05:36.984610 4986 generic.go:334] "Generic (PLEG): container finished" podID="deff12eb-bb21-44aa-bd09-0b8401893ea4" containerID="b3a62ad32323604fbf6ab93f239ce5dc0481309e2f6e7eb63523489e3394988e" exitCode=143 Dec 03 14:05:36 crc kubenswrapper[4986]: I1203 14:05:36.984654 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6lpvf/must-gather-jtw4t" Dec 03 14:05:36 crc kubenswrapper[4986]: I1203 14:05:36.984693 4986 scope.go:117] "RemoveContainer" containerID="b3a62ad32323604fbf6ab93f239ce5dc0481309e2f6e7eb63523489e3394988e" Dec 03 14:05:37 crc kubenswrapper[4986]: I1203 14:05:37.006586 4986 scope.go:117] "RemoveContainer" containerID="429cac0520aa52a5fe9949f74e500cfd02526e983ce18467641ed01fe90f309b" Dec 03 14:05:37 crc kubenswrapper[4986]: I1203 14:05:37.074905 4986 scope.go:117] "RemoveContainer" containerID="b3a62ad32323604fbf6ab93f239ce5dc0481309e2f6e7eb63523489e3394988e" Dec 03 14:05:37 crc kubenswrapper[4986]: E1203 14:05:37.075764 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3a62ad32323604fbf6ab93f239ce5dc0481309e2f6e7eb63523489e3394988e\": container with ID starting with b3a62ad32323604fbf6ab93f239ce5dc0481309e2f6e7eb63523489e3394988e not found: ID does not exist" containerID="b3a62ad32323604fbf6ab93f239ce5dc0481309e2f6e7eb63523489e3394988e" Dec 03 14:05:37 crc kubenswrapper[4986]: I1203 14:05:37.075804 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3a62ad32323604fbf6ab93f239ce5dc0481309e2f6e7eb63523489e3394988e"} err="failed to get container status \"b3a62ad32323604fbf6ab93f239ce5dc0481309e2f6e7eb63523489e3394988e\": rpc error: code = NotFound desc = could not find container \"b3a62ad32323604fbf6ab93f239ce5dc0481309e2f6e7eb63523489e3394988e\": container with ID starting with b3a62ad32323604fbf6ab93f239ce5dc0481309e2f6e7eb63523489e3394988e not found: ID does not exist" Dec 03 14:05:37 crc kubenswrapper[4986]: I1203 14:05:37.075829 4986 scope.go:117] "RemoveContainer" containerID="429cac0520aa52a5fe9949f74e500cfd02526e983ce18467641ed01fe90f309b" Dec 03 14:05:37 crc kubenswrapper[4986]: E1203 14:05:37.076198 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"429cac0520aa52a5fe9949f74e500cfd02526e983ce18467641ed01fe90f309b\": container with ID starting with 429cac0520aa52a5fe9949f74e500cfd02526e983ce18467641ed01fe90f309b not found: ID does not exist" containerID="429cac0520aa52a5fe9949f74e500cfd02526e983ce18467641ed01fe90f309b" Dec 03 14:05:37 crc kubenswrapper[4986]: I1203 14:05:37.076227 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"429cac0520aa52a5fe9949f74e500cfd02526e983ce18467641ed01fe90f309b"} err="failed to get container status \"429cac0520aa52a5fe9949f74e500cfd02526e983ce18467641ed01fe90f309b\": rpc error: code = NotFound desc = could not find container \"429cac0520aa52a5fe9949f74e500cfd02526e983ce18467641ed01fe90f309b\": container with ID starting with 429cac0520aa52a5fe9949f74e500cfd02526e983ce18467641ed01fe90f309b not found: ID does not exist" Dec 03 14:06:32 crc kubenswrapper[4986]: I1203 14:06:32.720962 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9sqlw"] Dec 03 14:06:32 crc kubenswrapper[4986]: E1203 14:06:32.722029 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deff12eb-bb21-44aa-bd09-0b8401893ea4" containerName="gather" Dec 03 14:06:32 crc kubenswrapper[4986]: I1203 14:06:32.722047 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="deff12eb-bb21-44aa-bd09-0b8401893ea4" containerName="gather" Dec 03 14:06:32 crc kubenswrapper[4986]: E1203 14:06:32.722081 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e192b4a1-251e-44aa-a86c-4ad897c23d93" containerName="extract-content" Dec 03 14:06:32 crc kubenswrapper[4986]: I1203 14:06:32.722090 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="e192b4a1-251e-44aa-a86c-4ad897c23d93" containerName="extract-content" Dec 03 14:06:32 crc kubenswrapper[4986]: E1203 14:06:32.722099 4986 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e192b4a1-251e-44aa-a86c-4ad897c23d93" containerName="registry-server" Dec 03 14:06:32 crc kubenswrapper[4986]: I1203 14:06:32.722108 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="e192b4a1-251e-44aa-a86c-4ad897c23d93" containerName="registry-server" Dec 03 14:06:32 crc kubenswrapper[4986]: E1203 14:06:32.722136 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deff12eb-bb21-44aa-bd09-0b8401893ea4" containerName="copy" Dec 03 14:06:32 crc kubenswrapper[4986]: I1203 14:06:32.722144 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="deff12eb-bb21-44aa-bd09-0b8401893ea4" containerName="copy" Dec 03 14:06:32 crc kubenswrapper[4986]: E1203 14:06:32.722153 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e192b4a1-251e-44aa-a86c-4ad897c23d93" containerName="extract-utilities" Dec 03 14:06:32 crc kubenswrapper[4986]: I1203 14:06:32.722161 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="e192b4a1-251e-44aa-a86c-4ad897c23d93" containerName="extract-utilities" Dec 03 14:06:32 crc kubenswrapper[4986]: I1203 14:06:32.722387 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="deff12eb-bb21-44aa-bd09-0b8401893ea4" containerName="copy" Dec 03 14:06:32 crc kubenswrapper[4986]: I1203 14:06:32.722408 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="deff12eb-bb21-44aa-bd09-0b8401893ea4" containerName="gather" Dec 03 14:06:32 crc kubenswrapper[4986]: I1203 14:06:32.722421 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="e192b4a1-251e-44aa-a86c-4ad897c23d93" containerName="registry-server" Dec 03 14:06:32 crc kubenswrapper[4986]: I1203 14:06:32.724093 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9sqlw" Dec 03 14:06:32 crc kubenswrapper[4986]: I1203 14:06:32.730439 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9sqlw"] Dec 03 14:06:32 crc kubenswrapper[4986]: I1203 14:06:32.892959 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4m7j\" (UniqueName: \"kubernetes.io/projected/fd3d88fc-f2ed-4b4c-b941-0b6d270c65e5-kube-api-access-c4m7j\") pod \"redhat-operators-9sqlw\" (UID: \"fd3d88fc-f2ed-4b4c-b941-0b6d270c65e5\") " pod="openshift-marketplace/redhat-operators-9sqlw" Dec 03 14:06:32 crc kubenswrapper[4986]: I1203 14:06:32.893041 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd3d88fc-f2ed-4b4c-b941-0b6d270c65e5-catalog-content\") pod \"redhat-operators-9sqlw\" (UID: \"fd3d88fc-f2ed-4b4c-b941-0b6d270c65e5\") " pod="openshift-marketplace/redhat-operators-9sqlw" Dec 03 14:06:32 crc kubenswrapper[4986]: I1203 14:06:32.893237 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd3d88fc-f2ed-4b4c-b941-0b6d270c65e5-utilities\") pod \"redhat-operators-9sqlw\" (UID: \"fd3d88fc-f2ed-4b4c-b941-0b6d270c65e5\") " pod="openshift-marketplace/redhat-operators-9sqlw" Dec 03 14:06:32 crc kubenswrapper[4986]: I1203 14:06:32.995311 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd3d88fc-f2ed-4b4c-b941-0b6d270c65e5-utilities\") pod \"redhat-operators-9sqlw\" (UID: \"fd3d88fc-f2ed-4b4c-b941-0b6d270c65e5\") " pod="openshift-marketplace/redhat-operators-9sqlw" Dec 03 14:06:32 crc kubenswrapper[4986]: I1203 14:06:32.995383 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-c4m7j\" (UniqueName: \"kubernetes.io/projected/fd3d88fc-f2ed-4b4c-b941-0b6d270c65e5-kube-api-access-c4m7j\") pod \"redhat-operators-9sqlw\" (UID: \"fd3d88fc-f2ed-4b4c-b941-0b6d270c65e5\") " pod="openshift-marketplace/redhat-operators-9sqlw" Dec 03 14:06:32 crc kubenswrapper[4986]: I1203 14:06:32.995421 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd3d88fc-f2ed-4b4c-b941-0b6d270c65e5-catalog-content\") pod \"redhat-operators-9sqlw\" (UID: \"fd3d88fc-f2ed-4b4c-b941-0b6d270c65e5\") " pod="openshift-marketplace/redhat-operators-9sqlw" Dec 03 14:06:32 crc kubenswrapper[4986]: I1203 14:06:32.995817 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd3d88fc-f2ed-4b4c-b941-0b6d270c65e5-utilities\") pod \"redhat-operators-9sqlw\" (UID: \"fd3d88fc-f2ed-4b4c-b941-0b6d270c65e5\") " pod="openshift-marketplace/redhat-operators-9sqlw" Dec 03 14:06:32 crc kubenswrapper[4986]: I1203 14:06:32.995834 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd3d88fc-f2ed-4b4c-b941-0b6d270c65e5-catalog-content\") pod \"redhat-operators-9sqlw\" (UID: \"fd3d88fc-f2ed-4b4c-b941-0b6d270c65e5\") " pod="openshift-marketplace/redhat-operators-9sqlw" Dec 03 14:06:33 crc kubenswrapper[4986]: I1203 14:06:33.022267 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4m7j\" (UniqueName: \"kubernetes.io/projected/fd3d88fc-f2ed-4b4c-b941-0b6d270c65e5-kube-api-access-c4m7j\") pod \"redhat-operators-9sqlw\" (UID: \"fd3d88fc-f2ed-4b4c-b941-0b6d270c65e5\") " pod="openshift-marketplace/redhat-operators-9sqlw" Dec 03 14:06:33 crc kubenswrapper[4986]: I1203 14:06:33.052717 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9sqlw" Dec 03 14:06:33 crc kubenswrapper[4986]: I1203 14:06:33.618049 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9sqlw"] Dec 03 14:06:34 crc kubenswrapper[4986]: I1203 14:06:34.537444 4986 generic.go:334] "Generic (PLEG): container finished" podID="fd3d88fc-f2ed-4b4c-b941-0b6d270c65e5" containerID="e0eaa03711d754b84e1e8612996369c65ed1ca57c995f50550304cdbfe3df2b2" exitCode=0 Dec 03 14:06:34 crc kubenswrapper[4986]: I1203 14:06:34.537850 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9sqlw" event={"ID":"fd3d88fc-f2ed-4b4c-b941-0b6d270c65e5","Type":"ContainerDied","Data":"e0eaa03711d754b84e1e8612996369c65ed1ca57c995f50550304cdbfe3df2b2"} Dec 03 14:06:34 crc kubenswrapper[4986]: I1203 14:06:34.537886 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9sqlw" event={"ID":"fd3d88fc-f2ed-4b4c-b941-0b6d270c65e5","Type":"ContainerStarted","Data":"64831f116b60d15d71455327b0a2c72c9ef5f7fc7d8cf031bd0933fbd8021789"} Dec 03 14:06:34 crc kubenswrapper[4986]: I1203 14:06:34.540032 4986 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 14:06:42 crc kubenswrapper[4986]: I1203 14:06:42.610501 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9sqlw" event={"ID":"fd3d88fc-f2ed-4b4c-b941-0b6d270c65e5","Type":"ContainerStarted","Data":"df74eb756cbea9cb99db6b545893562cc888b1da3fe7e1b025698a87268dcb59"} Dec 03 14:06:43 crc kubenswrapper[4986]: I1203 14:06:43.811663 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fsqnm"] Dec 03 14:06:43 crc kubenswrapper[4986]: I1203 14:06:43.814514 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fsqnm" Dec 03 14:06:43 crc kubenswrapper[4986]: I1203 14:06:43.833379 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fsqnm"] Dec 03 14:06:43 crc kubenswrapper[4986]: I1203 14:06:43.922637 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf5zz\" (UniqueName: \"kubernetes.io/projected/7246177a-e565-470a-b053-5d9c4b901645-kube-api-access-qf5zz\") pod \"certified-operators-fsqnm\" (UID: \"7246177a-e565-470a-b053-5d9c4b901645\") " pod="openshift-marketplace/certified-operators-fsqnm" Dec 03 14:06:43 crc kubenswrapper[4986]: I1203 14:06:43.923367 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7246177a-e565-470a-b053-5d9c4b901645-catalog-content\") pod \"certified-operators-fsqnm\" (UID: \"7246177a-e565-470a-b053-5d9c4b901645\") " pod="openshift-marketplace/certified-operators-fsqnm" Dec 03 14:06:43 crc kubenswrapper[4986]: I1203 14:06:43.923422 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7246177a-e565-470a-b053-5d9c4b901645-utilities\") pod \"certified-operators-fsqnm\" (UID: \"7246177a-e565-470a-b053-5d9c4b901645\") " pod="openshift-marketplace/certified-operators-fsqnm" Dec 03 14:06:44 crc kubenswrapper[4986]: I1203 14:06:44.025821 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qf5zz\" (UniqueName: \"kubernetes.io/projected/7246177a-e565-470a-b053-5d9c4b901645-kube-api-access-qf5zz\") pod \"certified-operators-fsqnm\" (UID: \"7246177a-e565-470a-b053-5d9c4b901645\") " pod="openshift-marketplace/certified-operators-fsqnm" Dec 03 14:06:44 crc kubenswrapper[4986]: I1203 14:06:44.026206 4986 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7246177a-e565-470a-b053-5d9c4b901645-catalog-content\") pod \"certified-operators-fsqnm\" (UID: \"7246177a-e565-470a-b053-5d9c4b901645\") " pod="openshift-marketplace/certified-operators-fsqnm" Dec 03 14:06:44 crc kubenswrapper[4986]: I1203 14:06:44.026308 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7246177a-e565-470a-b053-5d9c4b901645-utilities\") pod \"certified-operators-fsqnm\" (UID: \"7246177a-e565-470a-b053-5d9c4b901645\") " pod="openshift-marketplace/certified-operators-fsqnm" Dec 03 14:06:44 crc kubenswrapper[4986]: I1203 14:06:44.026588 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7246177a-e565-470a-b053-5d9c4b901645-catalog-content\") pod \"certified-operators-fsqnm\" (UID: \"7246177a-e565-470a-b053-5d9c4b901645\") " pod="openshift-marketplace/certified-operators-fsqnm" Dec 03 14:06:44 crc kubenswrapper[4986]: I1203 14:06:44.026868 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7246177a-e565-470a-b053-5d9c4b901645-utilities\") pod \"certified-operators-fsqnm\" (UID: \"7246177a-e565-470a-b053-5d9c4b901645\") " pod="openshift-marketplace/certified-operators-fsqnm" Dec 03 14:06:44 crc kubenswrapper[4986]: I1203 14:06:44.075829 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qf5zz\" (UniqueName: \"kubernetes.io/projected/7246177a-e565-470a-b053-5d9c4b901645-kube-api-access-qf5zz\") pod \"certified-operators-fsqnm\" (UID: \"7246177a-e565-470a-b053-5d9c4b901645\") " pod="openshift-marketplace/certified-operators-fsqnm" Dec 03 14:06:44 crc kubenswrapper[4986]: I1203 14:06:44.143293 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fsqnm" Dec 03 14:06:45 crc kubenswrapper[4986]: W1203 14:06:45.402193 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7246177a_e565_470a_b053_5d9c4b901645.slice/crio-125cd7047e024180ba533b69b0db64cbaaed106ae6ef0f25f585131180149a5e WatchSource:0}: Error finding container 125cd7047e024180ba533b69b0db64cbaaed106ae6ef0f25f585131180149a5e: Status 404 returned error can't find the container with id 125cd7047e024180ba533b69b0db64cbaaed106ae6ef0f25f585131180149a5e Dec 03 14:06:45 crc kubenswrapper[4986]: I1203 14:06:45.402508 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fsqnm"] Dec 03 14:06:45 crc kubenswrapper[4986]: I1203 14:06:45.639792 4986 generic.go:334] "Generic (PLEG): container finished" podID="fd3d88fc-f2ed-4b4c-b941-0b6d270c65e5" containerID="df74eb756cbea9cb99db6b545893562cc888b1da3fe7e1b025698a87268dcb59" exitCode=0 Dec 03 14:06:45 crc kubenswrapper[4986]: I1203 14:06:45.639872 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9sqlw" event={"ID":"fd3d88fc-f2ed-4b4c-b941-0b6d270c65e5","Type":"ContainerDied","Data":"df74eb756cbea9cb99db6b545893562cc888b1da3fe7e1b025698a87268dcb59"} Dec 03 14:06:45 crc kubenswrapper[4986]: I1203 14:06:45.641055 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsqnm" event={"ID":"7246177a-e565-470a-b053-5d9c4b901645","Type":"ContainerStarted","Data":"125cd7047e024180ba533b69b0db64cbaaed106ae6ef0f25f585131180149a5e"} Dec 03 14:06:50 crc kubenswrapper[4986]: I1203 14:06:50.701756 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsqnm" 
event={"ID":"7246177a-e565-470a-b053-5d9c4b901645","Type":"ContainerStarted","Data":"ac5effef136119f1567248cc271091bfa6137b95ae914708a9dca4d2ee8e6b24"} Dec 03 14:06:51 crc kubenswrapper[4986]: I1203 14:06:51.715594 4986 generic.go:334] "Generic (PLEG): container finished" podID="7246177a-e565-470a-b053-5d9c4b901645" containerID="ac5effef136119f1567248cc271091bfa6137b95ae914708a9dca4d2ee8e6b24" exitCode=0 Dec 03 14:06:51 crc kubenswrapper[4986]: I1203 14:06:51.715718 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsqnm" event={"ID":"7246177a-e565-470a-b053-5d9c4b901645","Type":"ContainerDied","Data":"ac5effef136119f1567248cc271091bfa6137b95ae914708a9dca4d2ee8e6b24"} Dec 03 14:06:53 crc kubenswrapper[4986]: I1203 14:06:53.740387 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9sqlw" event={"ID":"fd3d88fc-f2ed-4b4c-b941-0b6d270c65e5","Type":"ContainerStarted","Data":"12080f39ade7f66d688a262350a986423d354aa2466667e1e4134ce74caf82e6"} Dec 03 14:06:53 crc kubenswrapper[4986]: I1203 14:06:53.745533 4986 generic.go:334] "Generic (PLEG): container finished" podID="7246177a-e565-470a-b053-5d9c4b901645" containerID="5e3bdde49a68b7a6b0f8f29892ec427b1b832ecf037ce4c99964618336f56fcf" exitCode=0 Dec 03 14:06:53 crc kubenswrapper[4986]: I1203 14:06:53.745572 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsqnm" event={"ID":"7246177a-e565-470a-b053-5d9c4b901645","Type":"ContainerDied","Data":"5e3bdde49a68b7a6b0f8f29892ec427b1b832ecf037ce4c99964618336f56fcf"} Dec 03 14:06:53 crc kubenswrapper[4986]: I1203 14:06:53.795332 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9sqlw" podStartSLOduration=4.27645616 podStartE2EDuration="21.795307575s" podCreationTimestamp="2025-12-03 14:06:32 +0000 UTC" firstStartedPulling="2025-12-03 14:06:34.539749853 +0000 UTC 
m=+4254.006181044" lastFinishedPulling="2025-12-03 14:06:52.058601258 +0000 UTC m=+4271.525032459" observedRunningTime="2025-12-03 14:06:53.77379626 +0000 UTC m=+4273.240227471" watchObservedRunningTime="2025-12-03 14:06:53.795307575 +0000 UTC m=+4273.261738766" Dec 03 14:06:54 crc kubenswrapper[4986]: I1203 14:06:54.756516 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsqnm" event={"ID":"7246177a-e565-470a-b053-5d9c4b901645","Type":"ContainerStarted","Data":"e1d0007a7f5c28f0b70c8a8b0d09128634f8c46f91f16556323db36015c30b21"} Dec 03 14:06:54 crc kubenswrapper[4986]: I1203 14:06:54.774728 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fsqnm" podStartSLOduration=9.182618959 podStartE2EDuration="11.774714347s" podCreationTimestamp="2025-12-03 14:06:43 +0000 UTC" firstStartedPulling="2025-12-03 14:06:51.719177878 +0000 UTC m=+4271.185609079" lastFinishedPulling="2025-12-03 14:06:54.311273276 +0000 UTC m=+4273.777704467" observedRunningTime="2025-12-03 14:06:54.772703953 +0000 UTC m=+4274.239135144" watchObservedRunningTime="2025-12-03 14:06:54.774714347 +0000 UTC m=+4274.241145528" Dec 03 14:06:58 crc kubenswrapper[4986]: I1203 14:06:58.642255 4986 scope.go:117] "RemoveContainer" containerID="235aea7a6e3e2a5f7f14f3f53f94abdace508b38c0d922fea5e38c18052e1bf2" Dec 03 14:07:03 crc kubenswrapper[4986]: I1203 14:07:03.053686 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9sqlw" Dec 03 14:07:03 crc kubenswrapper[4986]: I1203 14:07:03.054267 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9sqlw" Dec 03 14:07:03 crc kubenswrapper[4986]: I1203 14:07:03.115134 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9sqlw" Dec 03 14:07:03 crc 
kubenswrapper[4986]: I1203 14:07:03.920239 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9sqlw"
Dec 03 14:07:04 crc kubenswrapper[4986]: I1203 14:07:04.011800 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9sqlw"]
Dec 03 14:07:04 crc kubenswrapper[4986]: I1203 14:07:04.056944 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5jd64"]
Dec 03 14:07:04 crc kubenswrapper[4986]: I1203 14:07:04.057230 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5jd64" podUID="d90ad1cf-edbe-42b5-84c6-0c0568fafd43" containerName="registry-server" containerID="cri-o://9eb248043181efcc58596a772f3e072ccf0bcde0aefa28dfe5fb214de714b36c" gracePeriod=2
Dec 03 14:07:04 crc kubenswrapper[4986]: I1203 14:07:04.144179 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fsqnm"
Dec 03 14:07:04 crc kubenswrapper[4986]: I1203 14:07:04.144229 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fsqnm"
Dec 03 14:07:04 crc kubenswrapper[4986]: I1203 14:07:04.225475 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fsqnm"
Dec 03 14:07:04 crc kubenswrapper[4986]: I1203 14:07:04.849881 4986 generic.go:334] "Generic (PLEG): container finished" podID="d90ad1cf-edbe-42b5-84c6-0c0568fafd43" containerID="9eb248043181efcc58596a772f3e072ccf0bcde0aefa28dfe5fb214de714b36c" exitCode=0
Dec 03 14:07:04 crc kubenswrapper[4986]: I1203 14:07:04.849978 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5jd64" event={"ID":"d90ad1cf-edbe-42b5-84c6-0c0568fafd43","Type":"ContainerDied","Data":"9eb248043181efcc58596a772f3e072ccf0bcde0aefa28dfe5fb214de714b36c"}
Dec 03 14:07:04 crc kubenswrapper[4986]: I1203 14:07:04.902197 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fsqnm"
Dec 03 14:07:05 crc kubenswrapper[4986]: I1203 14:07:05.039595 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5jd64"
Dec 03 14:07:05 crc kubenswrapper[4986]: I1203 14:07:05.092741 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d90ad1cf-edbe-42b5-84c6-0c0568fafd43-catalog-content\") pod \"d90ad1cf-edbe-42b5-84c6-0c0568fafd43\" (UID: \"d90ad1cf-edbe-42b5-84c6-0c0568fafd43\") "
Dec 03 14:07:05 crc kubenswrapper[4986]: I1203 14:07:05.092828 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pv897\" (UniqueName: \"kubernetes.io/projected/d90ad1cf-edbe-42b5-84c6-0c0568fafd43-kube-api-access-pv897\") pod \"d90ad1cf-edbe-42b5-84c6-0c0568fafd43\" (UID: \"d90ad1cf-edbe-42b5-84c6-0c0568fafd43\") "
Dec 03 14:07:05 crc kubenswrapper[4986]: I1203 14:07:05.092851 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d90ad1cf-edbe-42b5-84c6-0c0568fafd43-utilities\") pod \"d90ad1cf-edbe-42b5-84c6-0c0568fafd43\" (UID: \"d90ad1cf-edbe-42b5-84c6-0c0568fafd43\") "
Dec 03 14:07:05 crc kubenswrapper[4986]: I1203 14:07:05.093466 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d90ad1cf-edbe-42b5-84c6-0c0568fafd43-utilities" (OuterVolumeSpecName: "utilities") pod "d90ad1cf-edbe-42b5-84c6-0c0568fafd43" (UID: "d90ad1cf-edbe-42b5-84c6-0c0568fafd43"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 14:07:05 crc kubenswrapper[4986]: I1203 14:07:05.099878 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d90ad1cf-edbe-42b5-84c6-0c0568fafd43-kube-api-access-pv897" (OuterVolumeSpecName: "kube-api-access-pv897") pod "d90ad1cf-edbe-42b5-84c6-0c0568fafd43" (UID: "d90ad1cf-edbe-42b5-84c6-0c0568fafd43"). InnerVolumeSpecName "kube-api-access-pv897". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 14:07:05 crc kubenswrapper[4986]: I1203 14:07:05.192201 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d90ad1cf-edbe-42b5-84c6-0c0568fafd43-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d90ad1cf-edbe-42b5-84c6-0c0568fafd43" (UID: "d90ad1cf-edbe-42b5-84c6-0c0568fafd43"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 14:07:05 crc kubenswrapper[4986]: I1203 14:07:05.194827 4986 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d90ad1cf-edbe-42b5-84c6-0c0568fafd43-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 03 14:07:05 crc kubenswrapper[4986]: I1203 14:07:05.194871 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pv897\" (UniqueName: \"kubernetes.io/projected/d90ad1cf-edbe-42b5-84c6-0c0568fafd43-kube-api-access-pv897\") on node \"crc\" DevicePath \"\""
Dec 03 14:07:05 crc kubenswrapper[4986]: I1203 14:07:05.194888 4986 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d90ad1cf-edbe-42b5-84c6-0c0568fafd43-utilities\") on node \"crc\" DevicePath \"\""
Dec 03 14:07:05 crc kubenswrapper[4986]: I1203 14:07:05.860415 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5jd64" event={"ID":"d90ad1cf-edbe-42b5-84c6-0c0568fafd43","Type":"ContainerDied","Data":"dc060f1d87d08006aa81474b23351468353187384fde66459a3785f43950131e"}
Dec 03 14:07:05 crc kubenswrapper[4986]: I1203 14:07:05.860763 4986 scope.go:117] "RemoveContainer" containerID="9eb248043181efcc58596a772f3e072ccf0bcde0aefa28dfe5fb214de714b36c"
Dec 03 14:07:05 crc kubenswrapper[4986]: I1203 14:07:05.860573 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5jd64"
Dec 03 14:07:05 crc kubenswrapper[4986]: I1203 14:07:05.890856 4986 scope.go:117] "RemoveContainer" containerID="d811021cce69a539a587cc23891bdb9b8f42e1c0d1e83d21da5b5de662c6a05c"
Dec 03 14:07:05 crc kubenswrapper[4986]: I1203 14:07:05.906641 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5jd64"]
Dec 03 14:07:05 crc kubenswrapper[4986]: I1203 14:07:05.915878 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5jd64"]
Dec 03 14:07:05 crc kubenswrapper[4986]: I1203 14:07:05.930348 4986 scope.go:117] "RemoveContainer" containerID="139961b98f477bb2c313feeb15aad3268a87e15ad5a921739f5df28b087ad164"
Dec 03 14:07:06 crc kubenswrapper[4986]: I1203 14:07:06.531112 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fsqnm"]
Dec 03 14:07:06 crc kubenswrapper[4986]: I1203 14:07:06.880496 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fsqnm" podUID="7246177a-e565-470a-b053-5d9c4b901645" containerName="registry-server" containerID="cri-o://e1d0007a7f5c28f0b70c8a8b0d09128634f8c46f91f16556323db36015c30b21" gracePeriod=2
Dec 03 14:07:06 crc kubenswrapper[4986]: I1203 14:07:06.956864 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d90ad1cf-edbe-42b5-84c6-0c0568fafd43" path="/var/lib/kubelet/pods/d90ad1cf-edbe-42b5-84c6-0c0568fafd43/volumes"
Dec 03 14:07:07 crc kubenswrapper[4986]: I1203 14:07:07.432656 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fsqnm"
Dec 03 14:07:07 crc kubenswrapper[4986]: I1203 14:07:07.548700 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qf5zz\" (UniqueName: \"kubernetes.io/projected/7246177a-e565-470a-b053-5d9c4b901645-kube-api-access-qf5zz\") pod \"7246177a-e565-470a-b053-5d9c4b901645\" (UID: \"7246177a-e565-470a-b053-5d9c4b901645\") "
Dec 03 14:07:07 crc kubenswrapper[4986]: I1203 14:07:07.548796 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7246177a-e565-470a-b053-5d9c4b901645-utilities\") pod \"7246177a-e565-470a-b053-5d9c4b901645\" (UID: \"7246177a-e565-470a-b053-5d9c4b901645\") "
Dec 03 14:07:07 crc kubenswrapper[4986]: I1203 14:07:07.548846 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7246177a-e565-470a-b053-5d9c4b901645-catalog-content\") pod \"7246177a-e565-470a-b053-5d9c4b901645\" (UID: \"7246177a-e565-470a-b053-5d9c4b901645\") "
Dec 03 14:07:07 crc kubenswrapper[4986]: I1203 14:07:07.551215 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7246177a-e565-470a-b053-5d9c4b901645-utilities" (OuterVolumeSpecName: "utilities") pod "7246177a-e565-470a-b053-5d9c4b901645" (UID: "7246177a-e565-470a-b053-5d9c4b901645"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 14:07:07 crc kubenswrapper[4986]: I1203 14:07:07.557473 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7246177a-e565-470a-b053-5d9c4b901645-kube-api-access-qf5zz" (OuterVolumeSpecName: "kube-api-access-qf5zz") pod "7246177a-e565-470a-b053-5d9c4b901645" (UID: "7246177a-e565-470a-b053-5d9c4b901645"). InnerVolumeSpecName "kube-api-access-qf5zz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 14:07:07 crc kubenswrapper[4986]: I1203 14:07:07.605620 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7246177a-e565-470a-b053-5d9c4b901645-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7246177a-e565-470a-b053-5d9c4b901645" (UID: "7246177a-e565-470a-b053-5d9c4b901645"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 14:07:07 crc kubenswrapper[4986]: I1203 14:07:07.650446 4986 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7246177a-e565-470a-b053-5d9c4b901645-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 03 14:07:07 crc kubenswrapper[4986]: I1203 14:07:07.650483 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qf5zz\" (UniqueName: \"kubernetes.io/projected/7246177a-e565-470a-b053-5d9c4b901645-kube-api-access-qf5zz\") on node \"crc\" DevicePath \"\""
Dec 03 14:07:07 crc kubenswrapper[4986]: I1203 14:07:07.650495 4986 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7246177a-e565-470a-b053-5d9c4b901645-utilities\") on node \"crc\" DevicePath \"\""
Dec 03 14:07:07 crc kubenswrapper[4986]: I1203 14:07:07.891446 4986 generic.go:334] "Generic (PLEG): container finished" podID="7246177a-e565-470a-b053-5d9c4b901645" containerID="e1d0007a7f5c28f0b70c8a8b0d09128634f8c46f91f16556323db36015c30b21" exitCode=0
Dec 03 14:07:07 crc kubenswrapper[4986]: I1203 14:07:07.891507 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsqnm" event={"ID":"7246177a-e565-470a-b053-5d9c4b901645","Type":"ContainerDied","Data":"e1d0007a7f5c28f0b70c8a8b0d09128634f8c46f91f16556323db36015c30b21"}
Dec 03 14:07:07 crc kubenswrapper[4986]: I1203 14:07:07.891518 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fsqnm"
Dec 03 14:07:07 crc kubenswrapper[4986]: I1203 14:07:07.891549 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsqnm" event={"ID":"7246177a-e565-470a-b053-5d9c4b901645","Type":"ContainerDied","Data":"125cd7047e024180ba533b69b0db64cbaaed106ae6ef0f25f585131180149a5e"}
Dec 03 14:07:07 crc kubenswrapper[4986]: I1203 14:07:07.891577 4986 scope.go:117] "RemoveContainer" containerID="e1d0007a7f5c28f0b70c8a8b0d09128634f8c46f91f16556323db36015c30b21"
Dec 03 14:07:07 crc kubenswrapper[4986]: I1203 14:07:07.908696 4986 scope.go:117] "RemoveContainer" containerID="5e3bdde49a68b7a6b0f8f29892ec427b1b832ecf037ce4c99964618336f56fcf"
Dec 03 14:07:07 crc kubenswrapper[4986]: I1203 14:07:07.935007 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fsqnm"]
Dec 03 14:07:07 crc kubenswrapper[4986]: I1203 14:07:07.945551 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fsqnm"]
Dec 03 14:07:07 crc kubenswrapper[4986]: I1203 14:07:07.959372 4986 scope.go:117] "RemoveContainer" containerID="ac5effef136119f1567248cc271091bfa6137b95ae914708a9dca4d2ee8e6b24"
Dec 03 14:07:07 crc kubenswrapper[4986]: I1203 14:07:07.982438 4986 scope.go:117] "RemoveContainer" containerID="e1d0007a7f5c28f0b70c8a8b0d09128634f8c46f91f16556323db36015c30b21"
Dec 03 14:07:07 crc kubenswrapper[4986]: E1203 14:07:07.982897 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1d0007a7f5c28f0b70c8a8b0d09128634f8c46f91f16556323db36015c30b21\": container with ID starting with e1d0007a7f5c28f0b70c8a8b0d09128634f8c46f91f16556323db36015c30b21 not found: ID does not exist" containerID="e1d0007a7f5c28f0b70c8a8b0d09128634f8c46f91f16556323db36015c30b21"
Dec 03 14:07:07 crc kubenswrapper[4986]: I1203 14:07:07.982931 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1d0007a7f5c28f0b70c8a8b0d09128634f8c46f91f16556323db36015c30b21"} err="failed to get container status \"e1d0007a7f5c28f0b70c8a8b0d09128634f8c46f91f16556323db36015c30b21\": rpc error: code = NotFound desc = could not find container \"e1d0007a7f5c28f0b70c8a8b0d09128634f8c46f91f16556323db36015c30b21\": container with ID starting with e1d0007a7f5c28f0b70c8a8b0d09128634f8c46f91f16556323db36015c30b21 not found: ID does not exist"
Dec 03 14:07:07 crc kubenswrapper[4986]: I1203 14:07:07.982952 4986 scope.go:117] "RemoveContainer" containerID="5e3bdde49a68b7a6b0f8f29892ec427b1b832ecf037ce4c99964618336f56fcf"
Dec 03 14:07:07 crc kubenswrapper[4986]: E1203 14:07:07.983610 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e3bdde49a68b7a6b0f8f29892ec427b1b832ecf037ce4c99964618336f56fcf\": container with ID starting with 5e3bdde49a68b7a6b0f8f29892ec427b1b832ecf037ce4c99964618336f56fcf not found: ID does not exist" containerID="5e3bdde49a68b7a6b0f8f29892ec427b1b832ecf037ce4c99964618336f56fcf"
Dec 03 14:07:07 crc kubenswrapper[4986]: I1203 14:07:07.983632 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e3bdde49a68b7a6b0f8f29892ec427b1b832ecf037ce4c99964618336f56fcf"} err="failed to get container status \"5e3bdde49a68b7a6b0f8f29892ec427b1b832ecf037ce4c99964618336f56fcf\": rpc error: code = NotFound desc = could not find container \"5e3bdde49a68b7a6b0f8f29892ec427b1b832ecf037ce4c99964618336f56fcf\": container with ID starting with 5e3bdde49a68b7a6b0f8f29892ec427b1b832ecf037ce4c99964618336f56fcf not found: ID does not exist"
Dec 03 14:07:07 crc kubenswrapper[4986]: I1203 14:07:07.983647 4986 scope.go:117] "RemoveContainer" containerID="ac5effef136119f1567248cc271091bfa6137b95ae914708a9dca4d2ee8e6b24"
Dec 03 14:07:07 crc kubenswrapper[4986]: E1203 14:07:07.983903 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac5effef136119f1567248cc271091bfa6137b95ae914708a9dca4d2ee8e6b24\": container with ID starting with ac5effef136119f1567248cc271091bfa6137b95ae914708a9dca4d2ee8e6b24 not found: ID does not exist" containerID="ac5effef136119f1567248cc271091bfa6137b95ae914708a9dca4d2ee8e6b24"
Dec 03 14:07:07 crc kubenswrapper[4986]: I1203 14:07:07.983922 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac5effef136119f1567248cc271091bfa6137b95ae914708a9dca4d2ee8e6b24"} err="failed to get container status \"ac5effef136119f1567248cc271091bfa6137b95ae914708a9dca4d2ee8e6b24\": rpc error: code = NotFound desc = could not find container \"ac5effef136119f1567248cc271091bfa6137b95ae914708a9dca4d2ee8e6b24\": container with ID starting with ac5effef136119f1567248cc271091bfa6137b95ae914708a9dca4d2ee8e6b24 not found: ID does not exist"
Dec 03 14:07:08 crc kubenswrapper[4986]: I1203 14:07:08.956438 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7246177a-e565-470a-b053-5d9c4b901645" path="/var/lib/kubelet/pods/7246177a-e565-470a-b053-5d9c4b901645/volumes"
Dec 03 14:07:17 crc kubenswrapper[4986]: I1203 14:07:17.293712 4986 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-6d976bf467-mjgvz" podUID="d70793fc-c91d-4ddc-8a21-bcd243434f73" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502"
Dec 03 14:07:33 crc kubenswrapper[4986]: I1203 14:07:33.491626 4986 patch_prober.go:28] interesting pod/machine-config-daemon-xggpj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 03 14:07:33 crc kubenswrapper[4986]: I1203 14:07:33.492470 4986 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 03 14:08:03 crc kubenswrapper[4986]: I1203 14:08:03.491181 4986 patch_prober.go:28] interesting pod/machine-config-daemon-xggpj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 03 14:08:03 crc kubenswrapper[4986]: I1203 14:08:03.491818 4986 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 03 14:08:33 crc kubenswrapper[4986]: I1203 14:08:33.491834 4986 patch_prober.go:28] interesting pod/machine-config-daemon-xggpj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 03 14:08:33 crc kubenswrapper[4986]: I1203 14:08:33.492580 4986 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 03 14:08:33 crc kubenswrapper[4986]: I1203 14:08:33.492677 4986 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xggpj"
Dec 03 14:08:33 crc kubenswrapper[4986]: I1203 14:08:33.494065 4986 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9908c5bc4e1f9c54ddf6781265a9653bd441c26c6cebabaf9a35e940adc79994"} pod="openshift-machine-config-operator/machine-config-daemon-xggpj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 03 14:08:33 crc kubenswrapper[4986]: I1203 14:08:33.494146 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" containerID="cri-o://9908c5bc4e1f9c54ddf6781265a9653bd441c26c6cebabaf9a35e940adc79994" gracePeriod=600
Dec 03 14:08:33 crc kubenswrapper[4986]: E1203 14:08:33.633124 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c"
Dec 03 14:08:33 crc kubenswrapper[4986]: I1203 14:08:33.733269 4986 generic.go:334] "Generic (PLEG): container finished" podID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerID="9908c5bc4e1f9c54ddf6781265a9653bd441c26c6cebabaf9a35e940adc79994" exitCode=0
Dec 03 14:08:33 crc kubenswrapper[4986]: I1203 14:08:33.733364 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" event={"ID":"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c","Type":"ContainerDied","Data":"9908c5bc4e1f9c54ddf6781265a9653bd441c26c6cebabaf9a35e940adc79994"}
Dec 03 14:08:33 crc kubenswrapper[4986]: I1203 14:08:33.733412 4986 scope.go:117] "RemoveContainer" containerID="94098f17055dc83680391bf29240e7586ef8a8e160e65ff114312434843cab43"
Dec 03 14:08:33 crc kubenswrapper[4986]: I1203 14:08:33.734420 4986 scope.go:117] "RemoveContainer" containerID="9908c5bc4e1f9c54ddf6781265a9653bd441c26c6cebabaf9a35e940adc79994"
Dec 03 14:08:33 crc kubenswrapper[4986]: E1203 14:08:33.734875 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c"
Dec 03 14:08:37 crc kubenswrapper[4986]: I1203 14:08:37.201807 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-z5m8x/must-gather-sxmxm"]
Dec 03 14:08:37 crc kubenswrapper[4986]: E1203 14:08:37.202993 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d90ad1cf-edbe-42b5-84c6-0c0568fafd43" containerName="extract-utilities"
Dec 03 14:08:37 crc kubenswrapper[4986]: I1203 14:08:37.203008 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="d90ad1cf-edbe-42b5-84c6-0c0568fafd43" containerName="extract-utilities"
Dec 03 14:08:37 crc kubenswrapper[4986]: E1203 14:08:37.203040 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7246177a-e565-470a-b053-5d9c4b901645" containerName="extract-utilities"
Dec 03 14:08:37 crc kubenswrapper[4986]: I1203 14:08:37.203046 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="7246177a-e565-470a-b053-5d9c4b901645" containerName="extract-utilities"
Dec 03 14:08:37 crc kubenswrapper[4986]: E1203 14:08:37.203081 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7246177a-e565-470a-b053-5d9c4b901645" containerName="registry-server"
Dec 03 14:08:37 crc kubenswrapper[4986]: I1203 14:08:37.203088 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="7246177a-e565-470a-b053-5d9c4b901645" containerName="registry-server"
Dec 03 14:08:37 crc kubenswrapper[4986]: E1203 14:08:37.209073 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d90ad1cf-edbe-42b5-84c6-0c0568fafd43" containerName="registry-server"
Dec 03 14:08:37 crc kubenswrapper[4986]: I1203 14:08:37.209100 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="d90ad1cf-edbe-42b5-84c6-0c0568fafd43" containerName="registry-server"
Dec 03 14:08:37 crc kubenswrapper[4986]: E1203 14:08:37.209170 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d90ad1cf-edbe-42b5-84c6-0c0568fafd43" containerName="extract-content"
Dec 03 14:08:37 crc kubenswrapper[4986]: I1203 14:08:37.209180 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="d90ad1cf-edbe-42b5-84c6-0c0568fafd43" containerName="extract-content"
Dec 03 14:08:37 crc kubenswrapper[4986]: E1203 14:08:37.209210 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7246177a-e565-470a-b053-5d9c4b901645" containerName="extract-content"
Dec 03 14:08:37 crc kubenswrapper[4986]: I1203 14:08:37.209227 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="7246177a-e565-470a-b053-5d9c4b901645" containerName="extract-content"
Dec 03 14:08:37 crc kubenswrapper[4986]: I1203 14:08:37.210512 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="d90ad1cf-edbe-42b5-84c6-0c0568fafd43" containerName="registry-server"
Dec 03 14:08:37 crc kubenswrapper[4986]: I1203 14:08:37.210565 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="7246177a-e565-470a-b053-5d9c4b901645" containerName="registry-server"
Dec 03 14:08:37 crc kubenswrapper[4986]: I1203 14:08:37.223350 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z5m8x/must-gather-sxmxm"
Dec 03 14:08:37 crc kubenswrapper[4986]: I1203 14:08:37.227082 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-z5m8x"/"default-dockercfg-8xzr5"
Dec 03 14:08:37 crc kubenswrapper[4986]: I1203 14:08:37.238975 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-z5m8x/must-gather-sxmxm"]
Dec 03 14:08:37 crc kubenswrapper[4986]: I1203 14:08:37.250467 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fr2p\" (UniqueName: \"kubernetes.io/projected/5f810777-8f08-434e-ad09-0a4e18694dd0-kube-api-access-2fr2p\") pod \"must-gather-sxmxm\" (UID: \"5f810777-8f08-434e-ad09-0a4e18694dd0\") " pod="openshift-must-gather-z5m8x/must-gather-sxmxm"
Dec 03 14:08:37 crc kubenswrapper[4986]: I1203 14:08:37.250620 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5f810777-8f08-434e-ad09-0a4e18694dd0-must-gather-output\") pod \"must-gather-sxmxm\" (UID: \"5f810777-8f08-434e-ad09-0a4e18694dd0\") " pod="openshift-must-gather-z5m8x/must-gather-sxmxm"
Dec 03 14:08:37 crc kubenswrapper[4986]: I1203 14:08:37.252114 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-z5m8x"/"openshift-service-ca.crt"
Dec 03 14:08:37 crc kubenswrapper[4986]: I1203 14:08:37.252374 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-z5m8x"/"kube-root-ca.crt"
Dec 03 14:08:37 crc kubenswrapper[4986]: I1203 14:08:37.352189 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5f810777-8f08-434e-ad09-0a4e18694dd0-must-gather-output\") pod \"must-gather-sxmxm\" (UID: \"5f810777-8f08-434e-ad09-0a4e18694dd0\") " pod="openshift-must-gather-z5m8x/must-gather-sxmxm"
Dec 03 14:08:37 crc kubenswrapper[4986]: I1203 14:08:37.352387 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fr2p\" (UniqueName: \"kubernetes.io/projected/5f810777-8f08-434e-ad09-0a4e18694dd0-kube-api-access-2fr2p\") pod \"must-gather-sxmxm\" (UID: \"5f810777-8f08-434e-ad09-0a4e18694dd0\") " pod="openshift-must-gather-z5m8x/must-gather-sxmxm"
Dec 03 14:08:37 crc kubenswrapper[4986]: I1203 14:08:37.353176 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5f810777-8f08-434e-ad09-0a4e18694dd0-must-gather-output\") pod \"must-gather-sxmxm\" (UID: \"5f810777-8f08-434e-ad09-0a4e18694dd0\") " pod="openshift-must-gather-z5m8x/must-gather-sxmxm"
Dec 03 14:08:37 crc kubenswrapper[4986]: I1203 14:08:37.377887 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fr2p\" (UniqueName: \"kubernetes.io/projected/5f810777-8f08-434e-ad09-0a4e18694dd0-kube-api-access-2fr2p\") pod \"must-gather-sxmxm\" (UID: \"5f810777-8f08-434e-ad09-0a4e18694dd0\") " pod="openshift-must-gather-z5m8x/must-gather-sxmxm"
Dec 03 14:08:37 crc kubenswrapper[4986]: I1203 14:08:37.552335 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z5m8x/must-gather-sxmxm"
Dec 03 14:08:37 crc kubenswrapper[4986]: I1203 14:08:37.987742 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-z5m8x/must-gather-sxmxm"]
Dec 03 14:08:38 crc kubenswrapper[4986]: I1203 14:08:38.780979 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z5m8x/must-gather-sxmxm" event={"ID":"5f810777-8f08-434e-ad09-0a4e18694dd0","Type":"ContainerStarted","Data":"b90a08fe18e948af22be71a0e9977a45093d62e78faf7e0da4f5e8927a283c4e"}
Dec 03 14:08:38 crc kubenswrapper[4986]: I1203 14:08:38.782487 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z5m8x/must-gather-sxmxm" event={"ID":"5f810777-8f08-434e-ad09-0a4e18694dd0","Type":"ContainerStarted","Data":"ae5dce02cecac31dba54a82509965958a2c10c78fbd1119e71ab949183dfae6c"}
Dec 03 14:08:38 crc kubenswrapper[4986]: I1203 14:08:38.782568 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z5m8x/must-gather-sxmxm" event={"ID":"5f810777-8f08-434e-ad09-0a4e18694dd0","Type":"ContainerStarted","Data":"7e253950c064e7647ce8659f18f611e3488a8d0508b193def3f8defcc28d9301"}
Dec 03 14:08:38 crc kubenswrapper[4986]: I1203 14:08:38.815262 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-z5m8x/must-gather-sxmxm" podStartSLOduration=1.815242976 podStartE2EDuration="1.815242976s" podCreationTimestamp="2025-12-03 14:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 14:08:38.810202732 +0000 UTC m=+4378.276633923" watchObservedRunningTime="2025-12-03 14:08:38.815242976 +0000 UTC m=+4378.281674167"
Dec 03 14:08:42 crc kubenswrapper[4986]: I1203 14:08:42.301096 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-z5m8x/crc-debug-5v6nl"]
Dec 03 14:08:42 crc kubenswrapper[4986]: I1203 14:08:42.303615 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z5m8x/crc-debug-5v6nl"
Dec 03 14:08:42 crc kubenswrapper[4986]: I1203 14:08:42.360273 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/535c1aa1-d2dc-46d9-a6dd-50126c151a18-host\") pod \"crc-debug-5v6nl\" (UID: \"535c1aa1-d2dc-46d9-a6dd-50126c151a18\") " pod="openshift-must-gather-z5m8x/crc-debug-5v6nl"
Dec 03 14:08:42 crc kubenswrapper[4986]: I1203 14:08:42.360703 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql26j\" (UniqueName: \"kubernetes.io/projected/535c1aa1-d2dc-46d9-a6dd-50126c151a18-kube-api-access-ql26j\") pod \"crc-debug-5v6nl\" (UID: \"535c1aa1-d2dc-46d9-a6dd-50126c151a18\") " pod="openshift-must-gather-z5m8x/crc-debug-5v6nl"
Dec 03 14:08:42 crc kubenswrapper[4986]: I1203 14:08:42.462427 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/535c1aa1-d2dc-46d9-a6dd-50126c151a18-host\") pod \"crc-debug-5v6nl\" (UID: \"535c1aa1-d2dc-46d9-a6dd-50126c151a18\") " pod="openshift-must-gather-z5m8x/crc-debug-5v6nl"
Dec 03 14:08:42 crc kubenswrapper[4986]: I1203 14:08:42.462545 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ql26j\" (UniqueName: \"kubernetes.io/projected/535c1aa1-d2dc-46d9-a6dd-50126c151a18-kube-api-access-ql26j\") pod \"crc-debug-5v6nl\" (UID: \"535c1aa1-d2dc-46d9-a6dd-50126c151a18\") " pod="openshift-must-gather-z5m8x/crc-debug-5v6nl"
Dec 03 14:08:42 crc kubenswrapper[4986]: I1203 14:08:42.462917 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/535c1aa1-d2dc-46d9-a6dd-50126c151a18-host\") pod \"crc-debug-5v6nl\" (UID: \"535c1aa1-d2dc-46d9-a6dd-50126c151a18\") " pod="openshift-must-gather-z5m8x/crc-debug-5v6nl"
Dec 03 14:08:42 crc kubenswrapper[4986]: I1203 14:08:42.737460 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ql26j\" (UniqueName: \"kubernetes.io/projected/535c1aa1-d2dc-46d9-a6dd-50126c151a18-kube-api-access-ql26j\") pod \"crc-debug-5v6nl\" (UID: \"535c1aa1-d2dc-46d9-a6dd-50126c151a18\") " pod="openshift-must-gather-z5m8x/crc-debug-5v6nl"
Dec 03 14:08:42 crc kubenswrapper[4986]: I1203 14:08:42.922004 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z5m8x/crc-debug-5v6nl"
Dec 03 14:08:43 crc kubenswrapper[4986]: I1203 14:08:43.829618 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z5m8x/crc-debug-5v6nl" event={"ID":"535c1aa1-d2dc-46d9-a6dd-50126c151a18","Type":"ContainerStarted","Data":"ba111ec619a971d8d73e30d4e6e5373c7f2d8c5f37e9cd5ed1961a50385556c0"}
Dec 03 14:08:43 crc kubenswrapper[4986]: I1203 14:08:43.830040 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z5m8x/crc-debug-5v6nl" event={"ID":"535c1aa1-d2dc-46d9-a6dd-50126c151a18","Type":"ContainerStarted","Data":"f2aaf8d044afad1fc109c388ab0957dc4f1cbbde25729fb4dbf184b5347ca9ee"}
Dec 03 14:08:43 crc kubenswrapper[4986]: I1203 14:08:43.848610 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-z5m8x/crc-debug-5v6nl" podStartSLOduration=1.848590237 podStartE2EDuration="1.848590237s" podCreationTimestamp="2025-12-03 14:08:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 14:08:43.842515585 +0000 UTC m=+4383.308946796" watchObservedRunningTime="2025-12-03 14:08:43.848590237 +0000 UTC m=+4383.315021418"
Dec 03 14:08:48 crc kubenswrapper[4986]: I1203 14:08:48.944149 4986 scope.go:117] "RemoveContainer" containerID="9908c5bc4e1f9c54ddf6781265a9653bd441c26c6cebabaf9a35e940adc79994"
Dec 03 14:08:48 crc kubenswrapper[4986]: E1203 14:08:48.944989 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c"
Dec 03 14:08:59 crc kubenswrapper[4986]: I1203 14:08:59.946454 4986 scope.go:117] "RemoveContainer" containerID="9908c5bc4e1f9c54ddf6781265a9653bd441c26c6cebabaf9a35e940adc79994"
Dec 03 14:08:59 crc kubenswrapper[4986]: E1203 14:08:59.947248 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c"
Dec 03 14:09:13 crc kubenswrapper[4986]: I1203 14:09:13.943760 4986 scope.go:117] "RemoveContainer" containerID="9908c5bc4e1f9c54ddf6781265a9653bd441c26c6cebabaf9a35e940adc79994"
Dec 03 14:09:13 crc kubenswrapper[4986]: E1203 14:09:13.944621 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c"
Dec 03 14:09:17 crc kubenswrapper[4986]: I1203 14:09:17.143786 4986 generic.go:334] "Generic (PLEG): container finished" podID="535c1aa1-d2dc-46d9-a6dd-50126c151a18" containerID="ba111ec619a971d8d73e30d4e6e5373c7f2d8c5f37e9cd5ed1961a50385556c0" exitCode=0
Dec 03 14:09:17 crc kubenswrapper[4986]: I1203 14:09:17.143879 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z5m8x/crc-debug-5v6nl" event={"ID":"535c1aa1-d2dc-46d9-a6dd-50126c151a18","Type":"ContainerDied","Data":"ba111ec619a971d8d73e30d4e6e5373c7f2d8c5f37e9cd5ed1961a50385556c0"}
Dec 03 14:09:18 crc kubenswrapper[4986]: I1203 14:09:18.255461 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z5m8x/crc-debug-5v6nl"
Dec 03 14:09:18 crc kubenswrapper[4986]: I1203 14:09:18.288381 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-z5m8x/crc-debug-5v6nl"]
Dec 03 14:09:18 crc kubenswrapper[4986]: I1203 14:09:18.296445 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-z5m8x/crc-debug-5v6nl"]
Dec 03 14:09:18 crc kubenswrapper[4986]: I1203 14:09:18.365526 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ql26j\" (UniqueName: \"kubernetes.io/projected/535c1aa1-d2dc-46d9-a6dd-50126c151a18-kube-api-access-ql26j\") pod \"535c1aa1-d2dc-46d9-a6dd-50126c151a18\" (UID: \"535c1aa1-d2dc-46d9-a6dd-50126c151a18\") "
Dec 03 14:09:18 crc kubenswrapper[4986]: I1203 14:09:18.365585 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/535c1aa1-d2dc-46d9-a6dd-50126c151a18-host\") pod \"535c1aa1-d2dc-46d9-a6dd-50126c151a18\" (UID: \"535c1aa1-d2dc-46d9-a6dd-50126c151a18\") "
Dec 03 14:09:18 crc kubenswrapper[4986]: I1203 14:09:18.365764 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/535c1aa1-d2dc-46d9-a6dd-50126c151a18-host" (OuterVolumeSpecName: "host") pod "535c1aa1-d2dc-46d9-a6dd-50126c151a18" (UID: "535c1aa1-d2dc-46d9-a6dd-50126c151a18"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 14:09:18 crc kubenswrapper[4986]: I1203 14:09:18.366059 4986 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/535c1aa1-d2dc-46d9-a6dd-50126c151a18-host\") on node \"crc\" DevicePath \"\""
Dec 03 14:09:18 crc kubenswrapper[4986]: I1203 14:09:18.372640 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/535c1aa1-d2dc-46d9-a6dd-50126c151a18-kube-api-access-ql26j" (OuterVolumeSpecName: "kube-api-access-ql26j") pod "535c1aa1-d2dc-46d9-a6dd-50126c151a18" (UID: "535c1aa1-d2dc-46d9-a6dd-50126c151a18"). InnerVolumeSpecName "kube-api-access-ql26j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 14:09:18 crc kubenswrapper[4986]: I1203 14:09:18.467589 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ql26j\" (UniqueName: \"kubernetes.io/projected/535c1aa1-d2dc-46d9-a6dd-50126c151a18-kube-api-access-ql26j\") on node \"crc\" DevicePath \"\""
Dec 03 14:09:18 crc kubenswrapper[4986]: I1203 14:09:18.960044 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="535c1aa1-d2dc-46d9-a6dd-50126c151a18" path="/var/lib/kubelet/pods/535c1aa1-d2dc-46d9-a6dd-50126c151a18/volumes"
Dec 03 14:09:19 crc kubenswrapper[4986]: I1203 14:09:19.163332 4986 scope.go:117] "RemoveContainer" containerID="ba111ec619a971d8d73e30d4e6e5373c7f2d8c5f37e9cd5ed1961a50385556c0"
Dec 03 14:09:19 crc kubenswrapper[4986]: I1203 14:09:19.163453 4986 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-must-gather-z5m8x/crc-debug-5v6nl" Dec 03 14:09:19 crc kubenswrapper[4986]: I1203 14:09:19.482078 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-z5m8x/crc-debug-97l2k"] Dec 03 14:09:19 crc kubenswrapper[4986]: E1203 14:09:19.483583 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="535c1aa1-d2dc-46d9-a6dd-50126c151a18" containerName="container-00" Dec 03 14:09:19 crc kubenswrapper[4986]: I1203 14:09:19.483686 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="535c1aa1-d2dc-46d9-a6dd-50126c151a18" containerName="container-00" Dec 03 14:09:19 crc kubenswrapper[4986]: I1203 14:09:19.484017 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="535c1aa1-d2dc-46d9-a6dd-50126c151a18" containerName="container-00" Dec 03 14:09:19 crc kubenswrapper[4986]: I1203 14:09:19.484888 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z5m8x/crc-debug-97l2k" Dec 03 14:09:19 crc kubenswrapper[4986]: I1203 14:09:19.594395 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxfq2\" (UniqueName: \"kubernetes.io/projected/bad52e52-1c63-4d31-bd71-e8e60b95e42e-kube-api-access-cxfq2\") pod \"crc-debug-97l2k\" (UID: \"bad52e52-1c63-4d31-bd71-e8e60b95e42e\") " pod="openshift-must-gather-z5m8x/crc-debug-97l2k" Dec 03 14:09:19 crc kubenswrapper[4986]: I1203 14:09:19.594475 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bad52e52-1c63-4d31-bd71-e8e60b95e42e-host\") pod \"crc-debug-97l2k\" (UID: \"bad52e52-1c63-4d31-bd71-e8e60b95e42e\") " pod="openshift-must-gather-z5m8x/crc-debug-97l2k" Dec 03 14:09:19 crc kubenswrapper[4986]: I1203 14:09:19.696009 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxfq2\" (UniqueName: 
\"kubernetes.io/projected/bad52e52-1c63-4d31-bd71-e8e60b95e42e-kube-api-access-cxfq2\") pod \"crc-debug-97l2k\" (UID: \"bad52e52-1c63-4d31-bd71-e8e60b95e42e\") " pod="openshift-must-gather-z5m8x/crc-debug-97l2k" Dec 03 14:09:19 crc kubenswrapper[4986]: I1203 14:09:19.696488 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bad52e52-1c63-4d31-bd71-e8e60b95e42e-host\") pod \"crc-debug-97l2k\" (UID: \"bad52e52-1c63-4d31-bd71-e8e60b95e42e\") " pod="openshift-must-gather-z5m8x/crc-debug-97l2k" Dec 03 14:09:19 crc kubenswrapper[4986]: I1203 14:09:19.696598 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bad52e52-1c63-4d31-bd71-e8e60b95e42e-host\") pod \"crc-debug-97l2k\" (UID: \"bad52e52-1c63-4d31-bd71-e8e60b95e42e\") " pod="openshift-must-gather-z5m8x/crc-debug-97l2k" Dec 03 14:09:19 crc kubenswrapper[4986]: I1203 14:09:19.712713 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxfq2\" (UniqueName: \"kubernetes.io/projected/bad52e52-1c63-4d31-bd71-e8e60b95e42e-kube-api-access-cxfq2\") pod \"crc-debug-97l2k\" (UID: \"bad52e52-1c63-4d31-bd71-e8e60b95e42e\") " pod="openshift-must-gather-z5m8x/crc-debug-97l2k" Dec 03 14:09:19 crc kubenswrapper[4986]: I1203 14:09:19.800779 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-z5m8x/crc-debug-97l2k" Dec 03 14:09:20 crc kubenswrapper[4986]: I1203 14:09:20.172926 4986 generic.go:334] "Generic (PLEG): container finished" podID="bad52e52-1c63-4d31-bd71-e8e60b95e42e" containerID="904b175dd6c34b8744408159c23411d5414574699de3d7bfd116f076fcecf0e3" exitCode=0 Dec 03 14:09:20 crc kubenswrapper[4986]: I1203 14:09:20.173104 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z5m8x/crc-debug-97l2k" event={"ID":"bad52e52-1c63-4d31-bd71-e8e60b95e42e","Type":"ContainerDied","Data":"904b175dd6c34b8744408159c23411d5414574699de3d7bfd116f076fcecf0e3"} Dec 03 14:09:20 crc kubenswrapper[4986]: I1203 14:09:20.173366 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z5m8x/crc-debug-97l2k" event={"ID":"bad52e52-1c63-4d31-bd71-e8e60b95e42e","Type":"ContainerStarted","Data":"a3de7b3a90a4003e6834481ce5afb1b2b94d654c45392316be32c00935dce560"} Dec 03 14:09:20 crc kubenswrapper[4986]: I1203 14:09:20.646502 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-z5m8x/crc-debug-97l2k"] Dec 03 14:09:20 crc kubenswrapper[4986]: I1203 14:09:20.654380 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-z5m8x/crc-debug-97l2k"] Dec 03 14:09:21 crc kubenswrapper[4986]: I1203 14:09:21.550823 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-z5m8x/crc-debug-97l2k" Dec 03 14:09:21 crc kubenswrapper[4986]: I1203 14:09:21.627489 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxfq2\" (UniqueName: \"kubernetes.io/projected/bad52e52-1c63-4d31-bd71-e8e60b95e42e-kube-api-access-cxfq2\") pod \"bad52e52-1c63-4d31-bd71-e8e60b95e42e\" (UID: \"bad52e52-1c63-4d31-bd71-e8e60b95e42e\") " Dec 03 14:09:21 crc kubenswrapper[4986]: I1203 14:09:21.627594 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bad52e52-1c63-4d31-bd71-e8e60b95e42e-host\") pod \"bad52e52-1c63-4d31-bd71-e8e60b95e42e\" (UID: \"bad52e52-1c63-4d31-bd71-e8e60b95e42e\") " Dec 03 14:09:21 crc kubenswrapper[4986]: I1203 14:09:21.627683 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bad52e52-1c63-4d31-bd71-e8e60b95e42e-host" (OuterVolumeSpecName: "host") pod "bad52e52-1c63-4d31-bd71-e8e60b95e42e" (UID: "bad52e52-1c63-4d31-bd71-e8e60b95e42e"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:09:21 crc kubenswrapper[4986]: I1203 14:09:21.628165 4986 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bad52e52-1c63-4d31-bd71-e8e60b95e42e-host\") on node \"crc\" DevicePath \"\"" Dec 03 14:09:21 crc kubenswrapper[4986]: I1203 14:09:21.634788 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bad52e52-1c63-4d31-bd71-e8e60b95e42e-kube-api-access-cxfq2" (OuterVolumeSpecName: "kube-api-access-cxfq2") pod "bad52e52-1c63-4d31-bd71-e8e60b95e42e" (UID: "bad52e52-1c63-4d31-bd71-e8e60b95e42e"). InnerVolumeSpecName "kube-api-access-cxfq2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 14:09:21 crc kubenswrapper[4986]: I1203 14:09:21.729346 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxfq2\" (UniqueName: \"kubernetes.io/projected/bad52e52-1c63-4d31-bd71-e8e60b95e42e-kube-api-access-cxfq2\") on node \"crc\" DevicePath \"\"" Dec 03 14:09:21 crc kubenswrapper[4986]: I1203 14:09:21.800021 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-z5m8x/crc-debug-t2tf2"] Dec 03 14:09:21 crc kubenswrapper[4986]: E1203 14:09:21.800431 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bad52e52-1c63-4d31-bd71-e8e60b95e42e" containerName="container-00" Dec 03 14:09:21 crc kubenswrapper[4986]: I1203 14:09:21.800448 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="bad52e52-1c63-4d31-bd71-e8e60b95e42e" containerName="container-00" Dec 03 14:09:21 crc kubenswrapper[4986]: I1203 14:09:21.800621 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="bad52e52-1c63-4d31-bd71-e8e60b95e42e" containerName="container-00" Dec 03 14:09:21 crc kubenswrapper[4986]: I1203 14:09:21.801230 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-z5m8x/crc-debug-t2tf2" Dec 03 14:09:21 crc kubenswrapper[4986]: I1203 14:09:21.932530 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl8mq\" (UniqueName: \"kubernetes.io/projected/aefcd0f9-63f2-4ad6-b6ab-1cdd83f41438-kube-api-access-tl8mq\") pod \"crc-debug-t2tf2\" (UID: \"aefcd0f9-63f2-4ad6-b6ab-1cdd83f41438\") " pod="openshift-must-gather-z5m8x/crc-debug-t2tf2" Dec 03 14:09:21 crc kubenswrapper[4986]: I1203 14:09:21.932625 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aefcd0f9-63f2-4ad6-b6ab-1cdd83f41438-host\") pod \"crc-debug-t2tf2\" (UID: \"aefcd0f9-63f2-4ad6-b6ab-1cdd83f41438\") " pod="openshift-must-gather-z5m8x/crc-debug-t2tf2" Dec 03 14:09:22 crc kubenswrapper[4986]: I1203 14:09:22.034658 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aefcd0f9-63f2-4ad6-b6ab-1cdd83f41438-host\") pod \"crc-debug-t2tf2\" (UID: \"aefcd0f9-63f2-4ad6-b6ab-1cdd83f41438\") " pod="openshift-must-gather-z5m8x/crc-debug-t2tf2" Dec 03 14:09:22 crc kubenswrapper[4986]: I1203 14:09:22.034827 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tl8mq\" (UniqueName: \"kubernetes.io/projected/aefcd0f9-63f2-4ad6-b6ab-1cdd83f41438-kube-api-access-tl8mq\") pod \"crc-debug-t2tf2\" (UID: \"aefcd0f9-63f2-4ad6-b6ab-1cdd83f41438\") " pod="openshift-must-gather-z5m8x/crc-debug-t2tf2" Dec 03 14:09:22 crc kubenswrapper[4986]: I1203 14:09:22.035369 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aefcd0f9-63f2-4ad6-b6ab-1cdd83f41438-host\") pod \"crc-debug-t2tf2\" (UID: \"aefcd0f9-63f2-4ad6-b6ab-1cdd83f41438\") " pod="openshift-must-gather-z5m8x/crc-debug-t2tf2" Dec 03 14:09:22 crc 
kubenswrapper[4986]: I1203 14:09:22.058360 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tl8mq\" (UniqueName: \"kubernetes.io/projected/aefcd0f9-63f2-4ad6-b6ab-1cdd83f41438-kube-api-access-tl8mq\") pod \"crc-debug-t2tf2\" (UID: \"aefcd0f9-63f2-4ad6-b6ab-1cdd83f41438\") " pod="openshift-must-gather-z5m8x/crc-debug-t2tf2" Dec 03 14:09:22 crc kubenswrapper[4986]: I1203 14:09:22.116938 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z5m8x/crc-debug-t2tf2" Dec 03 14:09:22 crc kubenswrapper[4986]: W1203 14:09:22.151204 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaefcd0f9_63f2_4ad6_b6ab_1cdd83f41438.slice/crio-d4978a1525617cc868f348c853d01c4066a266d18c47154d2a80e8a359b34902 WatchSource:0}: Error finding container d4978a1525617cc868f348c853d01c4066a266d18c47154d2a80e8a359b34902: Status 404 returned error can't find the container with id d4978a1525617cc868f348c853d01c4066a266d18c47154d2a80e8a359b34902 Dec 03 14:09:22 crc kubenswrapper[4986]: I1203 14:09:22.201374 4986 scope.go:117] "RemoveContainer" containerID="904b175dd6c34b8744408159c23411d5414574699de3d7bfd116f076fcecf0e3" Dec 03 14:09:22 crc kubenswrapper[4986]: I1203 14:09:22.201412 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-z5m8x/crc-debug-97l2k" Dec 03 14:09:22 crc kubenswrapper[4986]: I1203 14:09:22.202933 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z5m8x/crc-debug-t2tf2" event={"ID":"aefcd0f9-63f2-4ad6-b6ab-1cdd83f41438","Type":"ContainerStarted","Data":"d4978a1525617cc868f348c853d01c4066a266d18c47154d2a80e8a359b34902"} Dec 03 14:09:22 crc kubenswrapper[4986]: I1203 14:09:22.962103 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bad52e52-1c63-4d31-bd71-e8e60b95e42e" path="/var/lib/kubelet/pods/bad52e52-1c63-4d31-bd71-e8e60b95e42e/volumes" Dec 03 14:09:23 crc kubenswrapper[4986]: I1203 14:09:23.217262 4986 generic.go:334] "Generic (PLEG): container finished" podID="aefcd0f9-63f2-4ad6-b6ab-1cdd83f41438" containerID="eba210574886512a2ecde35f372ca5455f6630c0829f14e80be2d458fa3ad431" exitCode=0 Dec 03 14:09:23 crc kubenswrapper[4986]: I1203 14:09:23.217430 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z5m8x/crc-debug-t2tf2" event={"ID":"aefcd0f9-63f2-4ad6-b6ab-1cdd83f41438","Type":"ContainerDied","Data":"eba210574886512a2ecde35f372ca5455f6630c0829f14e80be2d458fa3ad431"} Dec 03 14:09:23 crc kubenswrapper[4986]: I1203 14:09:23.279016 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-z5m8x/crc-debug-t2tf2"] Dec 03 14:09:23 crc kubenswrapper[4986]: I1203 14:09:23.290600 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-z5m8x/crc-debug-t2tf2"] Dec 03 14:09:24 crc kubenswrapper[4986]: I1203 14:09:24.364396 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-z5m8x/crc-debug-t2tf2" Dec 03 14:09:24 crc kubenswrapper[4986]: I1203 14:09:24.483406 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aefcd0f9-63f2-4ad6-b6ab-1cdd83f41438-host\") pod \"aefcd0f9-63f2-4ad6-b6ab-1cdd83f41438\" (UID: \"aefcd0f9-63f2-4ad6-b6ab-1cdd83f41438\") " Dec 03 14:09:24 crc kubenswrapper[4986]: I1203 14:09:24.483684 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tl8mq\" (UniqueName: \"kubernetes.io/projected/aefcd0f9-63f2-4ad6-b6ab-1cdd83f41438-kube-api-access-tl8mq\") pod \"aefcd0f9-63f2-4ad6-b6ab-1cdd83f41438\" (UID: \"aefcd0f9-63f2-4ad6-b6ab-1cdd83f41438\") " Dec 03 14:09:24 crc kubenswrapper[4986]: I1203 14:09:24.483783 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aefcd0f9-63f2-4ad6-b6ab-1cdd83f41438-host" (OuterVolumeSpecName: "host") pod "aefcd0f9-63f2-4ad6-b6ab-1cdd83f41438" (UID: "aefcd0f9-63f2-4ad6-b6ab-1cdd83f41438"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:09:24 crc kubenswrapper[4986]: I1203 14:09:24.485151 4986 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aefcd0f9-63f2-4ad6-b6ab-1cdd83f41438-host\") on node \"crc\" DevicePath \"\"" Dec 03 14:09:24 crc kubenswrapper[4986]: I1203 14:09:24.499751 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aefcd0f9-63f2-4ad6-b6ab-1cdd83f41438-kube-api-access-tl8mq" (OuterVolumeSpecName: "kube-api-access-tl8mq") pod "aefcd0f9-63f2-4ad6-b6ab-1cdd83f41438" (UID: "aefcd0f9-63f2-4ad6-b6ab-1cdd83f41438"). InnerVolumeSpecName "kube-api-access-tl8mq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 14:09:24 crc kubenswrapper[4986]: I1203 14:09:24.587214 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tl8mq\" (UniqueName: \"kubernetes.io/projected/aefcd0f9-63f2-4ad6-b6ab-1cdd83f41438-kube-api-access-tl8mq\") on node \"crc\" DevicePath \"\"" Dec 03 14:09:24 crc kubenswrapper[4986]: I1203 14:09:24.956077 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aefcd0f9-63f2-4ad6-b6ab-1cdd83f41438" path="/var/lib/kubelet/pods/aefcd0f9-63f2-4ad6-b6ab-1cdd83f41438/volumes" Dec 03 14:09:25 crc kubenswrapper[4986]: I1203 14:09:25.238720 4986 scope.go:117] "RemoveContainer" containerID="eba210574886512a2ecde35f372ca5455f6630c0829f14e80be2d458fa3ad431" Dec 03 14:09:25 crc kubenswrapper[4986]: I1203 14:09:25.238792 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z5m8x/crc-debug-t2tf2" Dec 03 14:09:28 crc kubenswrapper[4986]: I1203 14:09:28.943877 4986 scope.go:117] "RemoveContainer" containerID="9908c5bc4e1f9c54ddf6781265a9653bd441c26c6cebabaf9a35e940adc79994" Dec 03 14:09:28 crc kubenswrapper[4986]: E1203 14:09:28.944809 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 14:09:43 crc kubenswrapper[4986]: I1203 14:09:43.943512 4986 scope.go:117] "RemoveContainer" containerID="9908c5bc4e1f9c54ddf6781265a9653bd441c26c6cebabaf9a35e940adc79994" Dec 03 14:09:43 crc kubenswrapper[4986]: E1203 14:09:43.944814 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 14:09:48 crc kubenswrapper[4986]: I1203 14:09:48.549670 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5744ccfcbb-rcmx5_37ccb095-f90f-4383-88e9-05d2d82cab28/barbican-api/0.log" Dec 03 14:09:48 crc kubenswrapper[4986]: I1203 14:09:48.740210 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5744ccfcbb-rcmx5_37ccb095-f90f-4383-88e9-05d2d82cab28/barbican-api-log/0.log" Dec 03 14:09:48 crc kubenswrapper[4986]: I1203 14:09:48.743735 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-747896b766-b8kzr_ebfa5ced-0a56-44db-ba24-d5f663d65920/barbican-keystone-listener/0.log" Dec 03 14:09:48 crc kubenswrapper[4986]: I1203 14:09:48.878442 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-747896b766-b8kzr_ebfa5ced-0a56-44db-ba24-d5f663d65920/barbican-keystone-listener-log/0.log" Dec 03 14:09:48 crc kubenswrapper[4986]: I1203 14:09:48.936994 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-787dc78df5-jtcv6_17f8ed93-1afd-41c1-a52b-addefeb38ab0/barbican-worker/0.log" Dec 03 14:09:49 crc kubenswrapper[4986]: I1203 14:09:49.007818 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-787dc78df5-jtcv6_17f8ed93-1afd-41c1-a52b-addefeb38ab0/barbican-worker-log/0.log" Dec 03 14:09:49 crc kubenswrapper[4986]: I1203 14:09:49.150915 4986 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-f4dgg_b4057123-895b-4436-93ac-02902a78df76/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 14:09:49 crc kubenswrapper[4986]: I1203 14:09:49.213864 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_bc0fd902-4092-4036-b7ba-1b6ede68bf04/ceilometer-central-agent/0.log" Dec 03 14:09:49 crc kubenswrapper[4986]: I1203 14:09:49.329460 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_bc0fd902-4092-4036-b7ba-1b6ede68bf04/ceilometer-notification-agent/0.log" Dec 03 14:09:49 crc kubenswrapper[4986]: I1203 14:09:49.340390 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_bc0fd902-4092-4036-b7ba-1b6ede68bf04/proxy-httpd/0.log" Dec 03 14:09:49 crc kubenswrapper[4986]: I1203 14:09:49.413660 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_bc0fd902-4092-4036-b7ba-1b6ede68bf04/sg-core/0.log" Dec 03 14:09:49 crc kubenswrapper[4986]: I1203 14:09:49.562078 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_297881ac-9b99-4d7d-9e59-4fb75c103648/cinder-api-log/0.log" Dec 03 14:09:49 crc kubenswrapper[4986]: I1203 14:09:49.573738 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_297881ac-9b99-4d7d-9e59-4fb75c103648/cinder-api/0.log" Dec 03 14:09:49 crc kubenswrapper[4986]: I1203 14:09:49.696265 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_cf917ae1-694f-4e1e-8d85-6452ac6c4e0e/cinder-scheduler/0.log" Dec 03 14:09:49 crc kubenswrapper[4986]: I1203 14:09:49.774774 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_cf917ae1-694f-4e1e-8d85-6452ac6c4e0e/probe/0.log" Dec 03 14:09:49 crc kubenswrapper[4986]: I1203 14:09:49.887978 4986 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-8smpl_8fec002b-a660-4a80-8a57-51a2ce32cf29/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 14:09:50 crc kubenswrapper[4986]: I1203 14:09:50.001021 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-plm9h_1d933f9e-e44c-4d6b-ab8b-0186020b5b28/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 14:09:50 crc kubenswrapper[4986]: I1203 14:09:50.063669 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-pklsb_33180d69-fad9-4b8f-877a-f68644b85da8/init/0.log" Dec 03 14:09:50 crc kubenswrapper[4986]: I1203 14:09:50.279738 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-pklsb_33180d69-fad9-4b8f-877a-f68644b85da8/init/0.log" Dec 03 14:09:50 crc kubenswrapper[4986]: I1203 14:09:50.330393 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-2696c_4dc44651-c2df-4ca1-abf4-f9093fa3f70d/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 14:09:50 crc kubenswrapper[4986]: I1203 14:09:50.360134 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-pklsb_33180d69-fad9-4b8f-877a-f68644b85da8/dnsmasq-dns/0.log" Dec 03 14:09:50 crc kubenswrapper[4986]: I1203 14:09:50.506888 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_515375fd-69ab-4f66-9fa3-ac72e0eeb97b/glance-httpd/0.log" Dec 03 14:09:50 crc kubenswrapper[4986]: I1203 14:09:50.550605 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_515375fd-69ab-4f66-9fa3-ac72e0eeb97b/glance-log/0.log" Dec 03 14:09:50 crc kubenswrapper[4986]: I1203 14:09:50.693207 4986 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-internal-api-0_91b841cf-0b35-4065-8268-3d018757b029/glance-httpd/0.log" Dec 03 14:09:50 crc kubenswrapper[4986]: I1203 14:09:50.708247 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_91b841cf-0b35-4065-8268-3d018757b029/glance-log/0.log" Dec 03 14:09:50 crc kubenswrapper[4986]: I1203 14:09:50.872229 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7797f969d4-6c2wn_4d67e23b-bda4-42d5-81b6-be58c643861d/horizon/0.log" Dec 03 14:09:50 crc kubenswrapper[4986]: I1203 14:09:50.995527 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-l29sb_7e34b649-2740-4d76-9aff-598b66d301b7/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 14:09:51 crc kubenswrapper[4986]: I1203 14:09:51.187563 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-gpv76_5436a47f-49ff-42be-b125-e3d98fbce1e9/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 14:09:51 crc kubenswrapper[4986]: I1203 14:09:51.481086 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7797f969d4-6c2wn_4d67e23b-bda4-42d5-81b6-be58c643861d/horizon-log/0.log" Dec 03 14:09:51 crc kubenswrapper[4986]: I1203 14:09:51.964831 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29412841-m7k2b_c12d1f52-dc0d-4deb-9f05-089d1a21267c/keystone-cron/0.log" Dec 03 14:09:52 crc kubenswrapper[4986]: I1203 14:09:52.128707 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-868bb78845-npjxs_652fbb43-2a64-471b-9123-cd6734de8993/keystone-api/0.log" Dec 03 14:09:52 crc kubenswrapper[4986]: I1203 14:09:52.140299 4986 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_kube-state-metrics-0_33238cb5-2bde-4244-aa0d-cc11080f57fb/kube-state-metrics/0.log" Dec 03 14:09:52 crc kubenswrapper[4986]: I1203 14:09:52.220135 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-fdb9l_38870d92-6fb1-40ac-8763-a8c8bfbbdd77/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 14:09:52 crc kubenswrapper[4986]: I1203 14:09:52.583639 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-59f49d79c7-qt4rk_5a350421-5f01-4d60-92b8-edc85e4ef3c5/neutron-httpd/0.log" Dec 03 14:09:52 crc kubenswrapper[4986]: I1203 14:09:52.614943 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-59f49d79c7-qt4rk_5a350421-5f01-4d60-92b8-edc85e4ef3c5/neutron-api/0.log" Dec 03 14:09:52 crc kubenswrapper[4986]: I1203 14:09:52.650826 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-s5hc5_2a01e184-500d-44fe-9561-d971fb030c77/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 14:09:53 crc kubenswrapper[4986]: I1203 14:09:53.169607 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_39eb69a6-11cd-41da-956d-a9697ef88d67/nova-api-log/0.log" Dec 03 14:09:53 crc kubenswrapper[4986]: I1203 14:09:53.268578 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_d0e9f871-dd3e-4b2b-813a-01ef0428cb44/nova-cell0-conductor-conductor/0.log" Dec 03 14:09:53 crc kubenswrapper[4986]: I1203 14:09:53.533989 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_39eb69a6-11cd-41da-956d-a9697ef88d67/nova-api-api/0.log" Dec 03 14:09:54 crc kubenswrapper[4986]: I1203 14:09:54.035900 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_70518715-2cd8-4268-ae57-aaa98fa28843/nova-cell1-conductor-conductor/0.log" 
Dec 03 14:09:54 crc kubenswrapper[4986]: I1203 14:09:54.084419 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_4b4f56d6-d999-4f05-ace4-61b79327feec/nova-cell1-novncproxy-novncproxy/0.log" Dec 03 14:09:54 crc kubenswrapper[4986]: I1203 14:09:54.196103 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-g6sdc_2d43935a-d3d4-4e5e-b92a-dacb88b12f26/nova-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 14:09:54 crc kubenswrapper[4986]: I1203 14:09:54.393883 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_3bb61071-904d-46d6-8594-2312383a8a06/nova-metadata-log/0.log" Dec 03 14:09:54 crc kubenswrapper[4986]: I1203 14:09:54.669487 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a0496538-1ab2-45a2-94ab-fc3474533ec3/mysql-bootstrap/0.log" Dec 03 14:09:54 crc kubenswrapper[4986]: I1203 14:09:54.672714 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_74dfd7ae-e1e8-4fe0-9563-0b6a8aaa60f4/nova-scheduler-scheduler/0.log" Dec 03 14:09:54 crc kubenswrapper[4986]: I1203 14:09:54.854832 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a0496538-1ab2-45a2-94ab-fc3474533ec3/galera/0.log" Dec 03 14:09:54 crc kubenswrapper[4986]: I1203 14:09:54.884813 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a0496538-1ab2-45a2-94ab-fc3474533ec3/mysql-bootstrap/0.log" Dec 03 14:09:55 crc kubenswrapper[4986]: I1203 14:09:55.049911 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_50659f56-763b-4cac-9ab4-d660c7d777af/mysql-bootstrap/0.log" Dec 03 14:09:55 crc kubenswrapper[4986]: I1203 14:09:55.248558 4986 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-galera-0_50659f56-763b-4cac-9ab4-d660c7d777af/galera/0.log" Dec 03 14:09:55 crc kubenswrapper[4986]: I1203 14:09:55.362742 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_50659f56-763b-4cac-9ab4-d660c7d777af/mysql-bootstrap/0.log" Dec 03 14:09:55 crc kubenswrapper[4986]: I1203 14:09:55.454948 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_a2644ca5-1db7-491d-949e-e8810934a296/openstackclient/0.log" Dec 03 14:09:55 crc kubenswrapper[4986]: I1203 14:09:55.574584 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-kqn7t_d06f8249-00a2-4e59-a055-82ab737c7b92/ovn-controller/0.log" Dec 03 14:09:55 crc kubenswrapper[4986]: I1203 14:09:55.758750 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-nm5xm_0798ce7a-8eef-4450-900d-d89e2ab41858/openstack-network-exporter/0.log" Dec 03 14:09:55 crc kubenswrapper[4986]: I1203 14:09:55.817722 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_3bb61071-904d-46d6-8594-2312383a8a06/nova-metadata-metadata/0.log" Dec 03 14:09:55 crc kubenswrapper[4986]: I1203 14:09:55.931943 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-45czf_5510fce4-e81b-4089-a5c3-4c4b6c72d9e0/ovsdb-server-init/0.log" Dec 03 14:09:56 crc kubenswrapper[4986]: I1203 14:09:56.153235 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-45czf_5510fce4-e81b-4089-a5c3-4c4b6c72d9e0/ovsdb-server-init/0.log" Dec 03 14:09:56 crc kubenswrapper[4986]: I1203 14:09:56.156489 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-45czf_5510fce4-e81b-4089-a5c3-4c4b6c72d9e0/ovs-vswitchd/0.log" Dec 03 14:09:56 crc kubenswrapper[4986]: I1203 14:09:56.181434 4986 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-45czf_5510fce4-e81b-4089-a5c3-4c4b6c72d9e0/ovsdb-server/0.log" Dec 03 14:09:56 crc kubenswrapper[4986]: I1203 14:09:56.336375 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-c9nd5_ff750414-499f-4652-9627-3e45a82b6cf3/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 14:09:56 crc kubenswrapper[4986]: I1203 14:09:56.382414 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_2c06d0ad-4862-4c0f-9cad-5f29aa8af72a/openstack-network-exporter/0.log" Dec 03 14:09:56 crc kubenswrapper[4986]: I1203 14:09:56.441807 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_2c06d0ad-4862-4c0f-9cad-5f29aa8af72a/ovn-northd/0.log" Dec 03 14:09:56 crc kubenswrapper[4986]: I1203 14:09:56.631198 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_905d78e0-0235-400d-8004-1f612a11b60a/openstack-network-exporter/0.log" Dec 03 14:09:56 crc kubenswrapper[4986]: I1203 14:09:56.683308 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_905d78e0-0235-400d-8004-1f612a11b60a/ovsdbserver-nb/0.log" Dec 03 14:09:56 crc kubenswrapper[4986]: I1203 14:09:56.843015 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_40aba5da-7d4c-49e1-a054-6e6789aca293/openstack-network-exporter/0.log" Dec 03 14:09:56 crc kubenswrapper[4986]: I1203 14:09:56.865200 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_40aba5da-7d4c-49e1-a054-6e6789aca293/ovsdbserver-sb/0.log" Dec 03 14:09:56 crc kubenswrapper[4986]: I1203 14:09:56.943236 4986 scope.go:117] "RemoveContainer" containerID="9908c5bc4e1f9c54ddf6781265a9653bd441c26c6cebabaf9a35e940adc79994" Dec 03 14:09:56 crc kubenswrapper[4986]: E1203 14:09:56.943545 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 14:09:57 crc kubenswrapper[4986]: I1203 14:09:57.096826 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-db8cd8b46-ffl2g_c622f06b-5b3c-45e4-890e-9f7ba2283ab3/placement-api/0.log" Dec 03 14:09:57 crc kubenswrapper[4986]: I1203 14:09:57.117005 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-db8cd8b46-ffl2g_c622f06b-5b3c-45e4-890e-9f7ba2283ab3/placement-log/0.log" Dec 03 14:09:57 crc kubenswrapper[4986]: I1203 14:09:57.223121 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_e06b4596-d4ac-4524-a521-ae6edfc239be/setup-container/0.log" Dec 03 14:09:57 crc kubenswrapper[4986]: I1203 14:09:57.394241 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_e06b4596-d4ac-4524-a521-ae6edfc239be/setup-container/0.log" Dec 03 14:09:57 crc kubenswrapper[4986]: I1203 14:09:57.441175 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_e06b4596-d4ac-4524-a521-ae6edfc239be/rabbitmq/0.log" Dec 03 14:09:57 crc kubenswrapper[4986]: I1203 14:09:57.514949 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_67da7713-f27f-48cb-a2f1-4ebea4d2f939/setup-container/0.log" Dec 03 14:09:57 crc kubenswrapper[4986]: I1203 14:09:57.715548 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_67da7713-f27f-48cb-a2f1-4ebea4d2f939/rabbitmq/0.log" Dec 03 14:09:57 crc kubenswrapper[4986]: I1203 14:09:57.730350 4986 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-server-0_67da7713-f27f-48cb-a2f1-4ebea4d2f939/setup-container/0.log" Dec 03 14:09:57 crc kubenswrapper[4986]: I1203 14:09:57.780688 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-qjdgt_859dd2e9-8a4b-4b51-8718-9d8b5837d098/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 14:09:58 crc kubenswrapper[4986]: I1203 14:09:58.076652 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-c9qzr_931d4925-ed6c-4a1f-8b14-4e726641d115/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 14:09:58 crc kubenswrapper[4986]: I1203 14:09:58.113424 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-qpl5d_d9886573-7ee1-4a0b-a6d2-f8621fdabf83/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 14:09:58 crc kubenswrapper[4986]: I1203 14:09:58.358851 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-n9vwn_8c21eccd-73c5-4d10-9bfe-ff9530e7627b/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 14:09:58 crc kubenswrapper[4986]: I1203 14:09:58.391788 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-5x8bl_614b1cb6-38ce-43ac-a5f3-abc66d1dd088/ssh-known-hosts-edpm-deployment/0.log" Dec 03 14:09:58 crc kubenswrapper[4986]: I1203 14:09:58.633243 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6d976bf467-mjgvz_d70793fc-c91d-4ddc-8a21-bcd243434f73/proxy-server/0.log" Dec 03 14:09:58 crc kubenswrapper[4986]: I1203 14:09:58.740339 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6d976bf467-mjgvz_d70793fc-c91d-4ddc-8a21-bcd243434f73/proxy-httpd/0.log" Dec 03 14:09:58 crc kubenswrapper[4986]: I1203 14:09:58.786985 4986 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-sxrg6_95163eb6-a8f2-45d5-b816-84dd6ffbdab2/swift-ring-rebalance/0.log" Dec 03 14:09:58 crc kubenswrapper[4986]: I1203 14:09:58.906331 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe/account-auditor/0.log" Dec 03 14:09:58 crc kubenswrapper[4986]: I1203 14:09:58.987648 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe/account-reaper/0.log" Dec 03 14:09:59 crc kubenswrapper[4986]: I1203 14:09:59.110013 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe/account-replicator/0.log" Dec 03 14:09:59 crc kubenswrapper[4986]: I1203 14:09:59.140822 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe/container-auditor/0.log" Dec 03 14:09:59 crc kubenswrapper[4986]: I1203 14:09:59.196156 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe/account-server/0.log" Dec 03 14:09:59 crc kubenswrapper[4986]: I1203 14:09:59.226520 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe/container-replicator/0.log" Dec 03 14:09:59 crc kubenswrapper[4986]: I1203 14:09:59.348297 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe/container-server/0.log" Dec 03 14:09:59 crc kubenswrapper[4986]: I1203 14:09:59.356848 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe/container-updater/0.log" Dec 03 14:09:59 crc kubenswrapper[4986]: I1203 14:09:59.445053 4986 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe/object-auditor/0.log" Dec 03 14:09:59 crc kubenswrapper[4986]: I1203 14:09:59.484915 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe/object-expirer/0.log" Dec 03 14:09:59 crc kubenswrapper[4986]: I1203 14:09:59.582027 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe/object-server/0.log" Dec 03 14:09:59 crc kubenswrapper[4986]: I1203 14:09:59.606538 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe/object-replicator/0.log" Dec 03 14:09:59 crc kubenswrapper[4986]: I1203 14:09:59.672222 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe/object-updater/0.log" Dec 03 14:09:59 crc kubenswrapper[4986]: I1203 14:09:59.758192 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe/rsync/0.log" Dec 03 14:09:59 crc kubenswrapper[4986]: I1203 14:09:59.847160 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cd2a3561-dcdc-4f29-b2a0-2fed2f810bbe/swift-recon-cron/0.log" Dec 03 14:09:59 crc kubenswrapper[4986]: I1203 14:09:59.956135 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-2vnss_963319ab-2780-4d81-bf46-9b6dee690eeb/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 14:10:00 crc kubenswrapper[4986]: I1203 14:10:00.090120 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_4159adcb-0a7a-4765-ac54-186effebee8e/tempest-tests-tempest-tests-runner/0.log" Dec 03 14:10:00 crc kubenswrapper[4986]: I1203 14:10:00.178398 4986 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_e8d1d8d5-6041-4e8a-bf89-d4811ef1a2bb/test-operator-logs-container/0.log" Dec 03 14:10:00 crc kubenswrapper[4986]: I1203 14:10:00.371220 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-mq9d2_6d8473a8-d750-4ff5-84be-96088a3eea45/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 14:10:09 crc kubenswrapper[4986]: I1203 14:10:09.890083 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_cb2b35c1-7c5f-4ffc-a178-4f7ade5cc313/memcached/0.log" Dec 03 14:10:10 crc kubenswrapper[4986]: I1203 14:10:10.953149 4986 scope.go:117] "RemoveContainer" containerID="9908c5bc4e1f9c54ddf6781265a9653bd441c26c6cebabaf9a35e940adc79994" Dec 03 14:10:10 crc kubenswrapper[4986]: E1203 14:10:10.954028 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 14:10:22 crc kubenswrapper[4986]: I1203 14:10:22.944278 4986 scope.go:117] "RemoveContainer" containerID="9908c5bc4e1f9c54ddf6781265a9653bd441c26c6cebabaf9a35e940adc79994" Dec 03 14:10:22 crc kubenswrapper[4986]: E1203 14:10:22.945162 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 
03 14:10:27 crc kubenswrapper[4986]: I1203 14:10:27.055757 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_824d2a61b6100fd280a9a09bd63ce89af5b90b0d18a4f901e4852c2491zpwrf_b7926b50-9c30-44b4-ac3f-058edec517b9/util/0.log" Dec 03 14:10:27 crc kubenswrapper[4986]: I1203 14:10:27.248072 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_824d2a61b6100fd280a9a09bd63ce89af5b90b0d18a4f901e4852c2491zpwrf_b7926b50-9c30-44b4-ac3f-058edec517b9/pull/0.log" Dec 03 14:10:27 crc kubenswrapper[4986]: I1203 14:10:27.268324 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_824d2a61b6100fd280a9a09bd63ce89af5b90b0d18a4f901e4852c2491zpwrf_b7926b50-9c30-44b4-ac3f-058edec517b9/util/0.log" Dec 03 14:10:27 crc kubenswrapper[4986]: I1203 14:10:27.279736 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_824d2a61b6100fd280a9a09bd63ce89af5b90b0d18a4f901e4852c2491zpwrf_b7926b50-9c30-44b4-ac3f-058edec517b9/pull/0.log" Dec 03 14:10:27 crc kubenswrapper[4986]: I1203 14:10:27.445418 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_824d2a61b6100fd280a9a09bd63ce89af5b90b0d18a4f901e4852c2491zpwrf_b7926b50-9c30-44b4-ac3f-058edec517b9/util/0.log" Dec 03 14:10:27 crc kubenswrapper[4986]: I1203 14:10:27.484000 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_824d2a61b6100fd280a9a09bd63ce89af5b90b0d18a4f901e4852c2491zpwrf_b7926b50-9c30-44b4-ac3f-058edec517b9/extract/0.log" Dec 03 14:10:27 crc kubenswrapper[4986]: I1203 14:10:27.521606 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_824d2a61b6100fd280a9a09bd63ce89af5b90b0d18a4f901e4852c2491zpwrf_b7926b50-9c30-44b4-ac3f-058edec517b9/pull/0.log" Dec 03 14:10:27 crc kubenswrapper[4986]: I1203 14:10:27.644118 4986 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-8z7gr_69fed752-e65d-4007-a731-3faee6335366/kube-rbac-proxy/0.log" Dec 03 14:10:27 crc kubenswrapper[4986]: I1203 14:10:27.723254 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-8z7gr_69fed752-e65d-4007-a731-3faee6335366/manager/0.log" Dec 03 14:10:27 crc kubenswrapper[4986]: I1203 14:10:27.734074 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-l48wq_f3328b2b-d4e4-4b39-a949-bfd1463596f0/kube-rbac-proxy/0.log" Dec 03 14:10:27 crc kubenswrapper[4986]: I1203 14:10:27.872898 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-l48wq_f3328b2b-d4e4-4b39-a949-bfd1463596f0/manager/0.log" Dec 03 14:10:27 crc kubenswrapper[4986]: I1203 14:10:27.927481 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-fw7nw_924573df-b6fe-4d17-add4-376f76084fab/kube-rbac-proxy/0.log" Dec 03 14:10:27 crc kubenswrapper[4986]: I1203 14:10:27.937495 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-fw7nw_924573df-b6fe-4d17-add4-376f76084fab/manager/0.log" Dec 03 14:10:28 crc kubenswrapper[4986]: I1203 14:10:28.116534 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987cd8cd-5vc5p_8af63121-8727-4c23-b872-554fe679fc2f/kube-rbac-proxy/0.log" Dec 03 14:10:28 crc kubenswrapper[4986]: I1203 14:10:28.162041 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987cd8cd-5vc5p_8af63121-8727-4c23-b872-554fe679fc2f/manager/0.log" Dec 03 14:10:28 crc kubenswrapper[4986]: I1203 
14:10:28.261900 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-nnhsz_f6268841-12af-4fa7-a9ab-54927e3256cf/kube-rbac-proxy/0.log" Dec 03 14:10:28 crc kubenswrapper[4986]: I1203 14:10:28.333088 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-nnhsz_f6268841-12af-4fa7-a9ab-54927e3256cf/manager/0.log" Dec 03 14:10:28 crc kubenswrapper[4986]: I1203 14:10:28.403598 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-h49tf_f2be2f63-f6d7-425a-8ce1-d2bc205e24f0/kube-rbac-proxy/0.log" Dec 03 14:10:28 crc kubenswrapper[4986]: I1203 14:10:28.475590 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-h49tf_f2be2f63-f6d7-425a-8ce1-d2bc205e24f0/manager/0.log" Dec 03 14:10:28 crc kubenswrapper[4986]: I1203 14:10:28.548390 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-cgjf2_b47ead63-1562-466f-887b-54c155983ebf/kube-rbac-proxy/0.log" Dec 03 14:10:28 crc kubenswrapper[4986]: I1203 14:10:28.760625 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-cgjf2_b47ead63-1562-466f-887b-54c155983ebf/manager/0.log" Dec 03 14:10:28 crc kubenswrapper[4986]: I1203 14:10:28.786121 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-jgxqz_ac6af48b-36c2-427c-93ad-090cc34434f7/kube-rbac-proxy/0.log" Dec 03 14:10:28 crc kubenswrapper[4986]: I1203 14:10:28.826210 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-jgxqz_ac6af48b-36c2-427c-93ad-090cc34434f7/manager/0.log" Dec 03 
14:10:28 crc kubenswrapper[4986]: I1203 14:10:28.967607 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-bm5lk_29ac6999-88ff-472f-a03e-0b95f1042d38/kube-rbac-proxy/0.log" Dec 03 14:10:29 crc kubenswrapper[4986]: I1203 14:10:29.041979 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7c79b5df47-h5sqr_f01db271-4787-4af7-b37b-5ba6e4e2e5b7/kube-rbac-proxy/0.log" Dec 03 14:10:29 crc kubenswrapper[4986]: I1203 14:10:29.057943 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-bm5lk_29ac6999-88ff-472f-a03e-0b95f1042d38/manager/0.log" Dec 03 14:10:29 crc kubenswrapper[4986]: I1203 14:10:29.157109 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7c79b5df47-h5sqr_f01db271-4787-4af7-b37b-5ba6e4e2e5b7/manager/0.log" Dec 03 14:10:29 crc kubenswrapper[4986]: I1203 14:10:29.232181 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-bkf5w_3d88c3a6-1643-4fff-acbe-2327b9878103/kube-rbac-proxy/0.log" Dec 03 14:10:29 crc kubenswrapper[4986]: I1203 14:10:29.277113 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-bkf5w_3d88c3a6-1643-4fff-acbe-2327b9878103/manager/0.log" Dec 03 14:10:29 crc kubenswrapper[4986]: I1203 14:10:29.420117 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-blk7z_1ecc0034-a740-410d-a135-6b65d34ce64d/kube-rbac-proxy/0.log" Dec 03 14:10:29 crc kubenswrapper[4986]: I1203 14:10:29.464341 4986 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-blk7z_1ecc0034-a740-410d-a135-6b65d34ce64d/manager/0.log" Dec 03 14:10:29 crc kubenswrapper[4986]: I1203 14:10:29.634590 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-7t7xl_d601bb24-2bd9-478a-96d1-ed2001bd53b6/kube-rbac-proxy/0.log" Dec 03 14:10:29 crc kubenswrapper[4986]: I1203 14:10:29.674394 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-7t7xl_d601bb24-2bd9-478a-96d1-ed2001bd53b6/manager/0.log" Dec 03 14:10:29 crc kubenswrapper[4986]: I1203 14:10:29.702580 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-xcs9s_be530205-b10b-4d4b-9fa3-4d9d0548054c/kube-rbac-proxy/0.log" Dec 03 14:10:29 crc kubenswrapper[4986]: I1203 14:10:29.851436 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-xcs9s_be530205-b10b-4d4b-9fa3-4d9d0548054c/manager/0.log" Dec 03 14:10:29 crc kubenswrapper[4986]: I1203 14:10:29.908053 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4fw9mm_400b4a35-c3f1-409e-83fe-019ff145c65a/manager/0.log" Dec 03 14:10:29 crc kubenswrapper[4986]: I1203 14:10:29.920428 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4fw9mm_400b4a35-c3f1-409e-83fe-019ff145c65a/kube-rbac-proxy/0.log" Dec 03 14:10:30 crc kubenswrapper[4986]: I1203 14:10:30.385895 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-64576b8bc7-w9775_bbe2108e-e5e6-4482-91b9-148932254640/operator/0.log" Dec 03 14:10:30 crc kubenswrapper[4986]: I1203 
14:10:30.897820 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-rxgnt_dbe624e6-2210-4ead-ac45-77704177e0a4/registry-server/0.log" Dec 03 14:10:31 crc kubenswrapper[4986]: I1203 14:10:31.066955 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-lvpbh_42fe5051-78e1-45ab-9766-dbd119c4e060/kube-rbac-proxy/0.log" Dec 03 14:10:31 crc kubenswrapper[4986]: I1203 14:10:31.194070 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-lvpbh_42fe5051-78e1-45ab-9766-dbd119c4e060/manager/0.log" Dec 03 14:10:31 crc kubenswrapper[4986]: I1203 14:10:31.287345 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-j4bg2_c6bc4b54-a3be-4a2c-813e-fa19eea0dbe8/kube-rbac-proxy/0.log" Dec 03 14:10:31 crc kubenswrapper[4986]: I1203 14:10:31.310551 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-j4bg2_c6bc4b54-a3be-4a2c-813e-fa19eea0dbe8/manager/0.log" Dec 03 14:10:31 crc kubenswrapper[4986]: I1203 14:10:31.383774 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5cd9cc65cb-tjrw7_cd923320-06e2-4933-bc26-a4c947ab732b/manager/0.log" Dec 03 14:10:31 crc kubenswrapper[4986]: I1203 14:10:31.460975 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-9ghmd_b9df4fcc-97aa-4a32-acbf-25f42addf8cc/operator/0.log" Dec 03 14:10:31 crc kubenswrapper[4986]: I1203 14:10:31.562478 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-c6wk7_a5090dbe-8e6f-4865-92a3-28720422db9f/kube-rbac-proxy/0.log" Dec 03 14:10:31 crc 
kubenswrapper[4986]: I1203 14:10:31.628064 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-c6wk7_a5090dbe-8e6f-4865-92a3-28720422db9f/manager/0.log" Dec 03 14:10:31 crc kubenswrapper[4986]: I1203 14:10:31.731041 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-726wj_a7413edd-cf2a-4756-b6b7-afe4e4e42fe6/kube-rbac-proxy/0.log" Dec 03 14:10:31 crc kubenswrapper[4986]: I1203 14:10:31.771843 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-726wj_a7413edd-cf2a-4756-b6b7-afe4e4e42fe6/manager/0.log" Dec 03 14:10:31 crc kubenswrapper[4986]: I1203 14:10:31.807547 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-bg4vv_276259ca-95c1-41c2-803f-b82904067552/kube-rbac-proxy/0.log" Dec 03 14:10:31 crc kubenswrapper[4986]: I1203 14:10:31.880638 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-bg4vv_276259ca-95c1-41c2-803f-b82904067552/manager/0.log" Dec 03 14:10:31 crc kubenswrapper[4986]: I1203 14:10:31.966801 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-ph2qf_2d441496-72d9-462b-aea6-e2588499fbf0/kube-rbac-proxy/0.log" Dec 03 14:10:32 crc kubenswrapper[4986]: I1203 14:10:32.002553 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-ph2qf_2d441496-72d9-462b-aea6-e2588499fbf0/manager/0.log" Dec 03 14:10:33 crc kubenswrapper[4986]: I1203 14:10:33.943708 4986 scope.go:117] "RemoveContainer" containerID="9908c5bc4e1f9c54ddf6781265a9653bd441c26c6cebabaf9a35e940adc79994" Dec 03 14:10:33 crc kubenswrapper[4986]: E1203 
14:10:33.944214 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 14:10:47 crc kubenswrapper[4986]: I1203 14:10:47.943372 4986 scope.go:117] "RemoveContainer" containerID="9908c5bc4e1f9c54ddf6781265a9653bd441c26c6cebabaf9a35e940adc79994" Dec 03 14:10:47 crc kubenswrapper[4986]: E1203 14:10:47.944459 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 14:10:51 crc kubenswrapper[4986]: I1203 14:10:51.597053 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-s6wq8_05a3b920-eb04-4864-81ac-924ba7c63d4e/control-plane-machine-set-operator/0.log" Dec 03 14:10:51 crc kubenswrapper[4986]: I1203 14:10:51.742529 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-xn84j_2588cb3b-8139-4529-a6e1-c57532afdfa7/kube-rbac-proxy/0.log" Dec 03 14:10:51 crc kubenswrapper[4986]: I1203 14:10:51.814275 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-xn84j_2588cb3b-8139-4529-a6e1-c57532afdfa7/machine-api-operator/0.log" Dec 03 14:11:02 crc kubenswrapper[4986]: I1203 14:11:02.943101 4986 scope.go:117] "RemoveContainer" 
containerID="9908c5bc4e1f9c54ddf6781265a9653bd441c26c6cebabaf9a35e940adc79994" Dec 03 14:11:02 crc kubenswrapper[4986]: E1203 14:11:02.943796 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 14:11:04 crc kubenswrapper[4986]: I1203 14:11:04.958081 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-ftncc_c0dbd0c1-fbde-463c-917f-d7d101f6c6e8/cert-manager-controller/0.log" Dec 03 14:11:05 crc kubenswrapper[4986]: I1203 14:11:05.158693 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-dzxc8_f4e3f3b7-bc75-4d36-9278-773bdf1109df/cert-manager-cainjector/0.log" Dec 03 14:11:05 crc kubenswrapper[4986]: I1203 14:11:05.220003 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-2wgk2_e54cc3fa-08d8-433f-9db5-bcff8c8e43fe/cert-manager-webhook/0.log" Dec 03 14:11:14 crc kubenswrapper[4986]: I1203 14:11:14.943721 4986 scope.go:117] "RemoveContainer" containerID="9908c5bc4e1f9c54ddf6781265a9653bd441c26c6cebabaf9a35e940adc79994" Dec 03 14:11:14 crc kubenswrapper[4986]: E1203 14:11:14.944487 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 14:11:18 crc 
kubenswrapper[4986]: I1203 14:11:18.936371 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-8lb6z_c4ff9001-bea2-41b9-8820-0c46e15b2fbb/nmstate-console-plugin/0.log" Dec 03 14:11:19 crc kubenswrapper[4986]: I1203 14:11:19.072548 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-rgwgv_bd89b324-ae85-4e4e-b40b-a76a7ae8e498/nmstate-handler/0.log" Dec 03 14:11:19 crc kubenswrapper[4986]: I1203 14:11:19.093307 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-ll678_cb8d169c-9c96-403d-9f6b-357dd8ccc78a/kube-rbac-proxy/0.log" Dec 03 14:11:19 crc kubenswrapper[4986]: I1203 14:11:19.137716 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-ll678_cb8d169c-9c96-403d-9f6b-357dd8ccc78a/nmstate-metrics/0.log" Dec 03 14:11:19 crc kubenswrapper[4986]: I1203 14:11:19.254881 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-zznsq_eed2e928-3f77-4b48-8f9d-9cd923d4f708/nmstate-operator/0.log" Dec 03 14:11:19 crc kubenswrapper[4986]: I1203 14:11:19.329580 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-zztq9_c233dd25-afb2-4ee7-b907-c79d08e02af6/nmstate-webhook/0.log" Dec 03 14:11:25 crc kubenswrapper[4986]: I1203 14:11:25.943959 4986 scope.go:117] "RemoveContainer" containerID="9908c5bc4e1f9c54ddf6781265a9653bd441c26c6cebabaf9a35e940adc79994" Dec 03 14:11:25 crc kubenswrapper[4986]: E1203 14:11:25.945826 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 14:11:34 crc kubenswrapper[4986]: I1203 14:11:34.582952 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-qshgg_0e60a343-90aa-4c8b-a745-020b111c0b76/kube-rbac-proxy/0.log" Dec 03 14:11:34 crc kubenswrapper[4986]: I1203 14:11:34.588318 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-qshgg_0e60a343-90aa-4c8b-a745-020b111c0b76/controller/0.log" Dec 03 14:11:34 crc kubenswrapper[4986]: I1203 14:11:34.810866 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-bvs5x_56c900c4-e165-4ada-a70f-3ab4f267441d/frr-k8s-webhook-server/0.log" Dec 03 14:11:34 crc kubenswrapper[4986]: I1203 14:11:34.830190 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zv59f_2db2fcc5-1b0c-48df-a5e9-321d28b4efb3/cp-frr-files/0.log" Dec 03 14:11:35 crc kubenswrapper[4986]: I1203 14:11:35.033998 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zv59f_2db2fcc5-1b0c-48df-a5e9-321d28b4efb3/cp-reloader/0.log" Dec 03 14:11:35 crc kubenswrapper[4986]: I1203 14:11:35.062078 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zv59f_2db2fcc5-1b0c-48df-a5e9-321d28b4efb3/cp-reloader/0.log" Dec 03 14:11:35 crc kubenswrapper[4986]: I1203 14:11:35.083436 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zv59f_2db2fcc5-1b0c-48df-a5e9-321d28b4efb3/cp-frr-files/0.log" Dec 03 14:11:35 crc kubenswrapper[4986]: I1203 14:11:35.096317 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zv59f_2db2fcc5-1b0c-48df-a5e9-321d28b4efb3/cp-metrics/0.log" Dec 03 14:11:35 crc kubenswrapper[4986]: I1203 14:11:35.282019 4986 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-zv59f_2db2fcc5-1b0c-48df-a5e9-321d28b4efb3/cp-frr-files/0.log" Dec 03 14:11:35 crc kubenswrapper[4986]: I1203 14:11:35.297603 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zv59f_2db2fcc5-1b0c-48df-a5e9-321d28b4efb3/cp-reloader/0.log" Dec 03 14:11:35 crc kubenswrapper[4986]: I1203 14:11:35.307214 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zv59f_2db2fcc5-1b0c-48df-a5e9-321d28b4efb3/cp-metrics/0.log" Dec 03 14:11:35 crc kubenswrapper[4986]: I1203 14:11:35.315076 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zv59f_2db2fcc5-1b0c-48df-a5e9-321d28b4efb3/cp-metrics/0.log" Dec 03 14:11:35 crc kubenswrapper[4986]: I1203 14:11:35.497703 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zv59f_2db2fcc5-1b0c-48df-a5e9-321d28b4efb3/cp-frr-files/0.log" Dec 03 14:11:35 crc kubenswrapper[4986]: I1203 14:11:35.512596 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zv59f_2db2fcc5-1b0c-48df-a5e9-321d28b4efb3/cp-metrics/0.log" Dec 03 14:11:35 crc kubenswrapper[4986]: I1203 14:11:35.513252 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zv59f_2db2fcc5-1b0c-48df-a5e9-321d28b4efb3/cp-reloader/0.log" Dec 03 14:11:35 crc kubenswrapper[4986]: I1203 14:11:35.547269 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zv59f_2db2fcc5-1b0c-48df-a5e9-321d28b4efb3/controller/0.log" Dec 03 14:11:35 crc kubenswrapper[4986]: I1203 14:11:35.709973 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zv59f_2db2fcc5-1b0c-48df-a5e9-321d28b4efb3/frr-metrics/0.log" Dec 03 14:11:35 crc kubenswrapper[4986]: I1203 14:11:35.778719 4986 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-zv59f_2db2fcc5-1b0c-48df-a5e9-321d28b4efb3/kube-rbac-proxy-frr/0.log" Dec 03 14:11:35 crc kubenswrapper[4986]: I1203 14:11:35.780380 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zv59f_2db2fcc5-1b0c-48df-a5e9-321d28b4efb3/kube-rbac-proxy/0.log" Dec 03 14:11:35 crc kubenswrapper[4986]: I1203 14:11:35.930011 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zv59f_2db2fcc5-1b0c-48df-a5e9-321d28b4efb3/reloader/0.log" Dec 03 14:11:35 crc kubenswrapper[4986]: I1203 14:11:35.993463 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5b7fcdf964-xx85j_c6eca6fc-6cf0-4862-b2a8-caa0c4ec2c8a/manager/0.log" Dec 03 14:11:36 crc kubenswrapper[4986]: I1203 14:11:36.219688 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-ddbdbd445-x6ccv_c786d2ef-19b1-4e12-a803-3cf1c459f6a7/webhook-server/0.log" Dec 03 14:11:36 crc kubenswrapper[4986]: I1203 14:11:36.571063 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-d7x7x_559a743d-b60c-4a89-b256-4842e829043c/kube-rbac-proxy/0.log" Dec 03 14:11:37 crc kubenswrapper[4986]: I1203 14:11:37.068058 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-d7x7x_559a743d-b60c-4a89-b256-4842e829043c/speaker/0.log" Dec 03 14:11:37 crc kubenswrapper[4986]: I1203 14:11:37.183947 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zv59f_2db2fcc5-1b0c-48df-a5e9-321d28b4efb3/frr/0.log" Dec 03 14:11:37 crc kubenswrapper[4986]: I1203 14:11:37.944189 4986 scope.go:117] "RemoveContainer" containerID="9908c5bc4e1f9c54ddf6781265a9653bd441c26c6cebabaf9a35e940adc79994" Dec 03 14:11:37 crc kubenswrapper[4986]: E1203 14:11:37.944753 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 14:11:51 crc kubenswrapper[4986]: I1203 14:11:51.378313 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flxqdt_3e31ff03-19d4-45b9-a2a2-c80add3e095a/util/0.log" Dec 03 14:11:51 crc kubenswrapper[4986]: I1203 14:11:51.608725 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flxqdt_3e31ff03-19d4-45b9-a2a2-c80add3e095a/util/0.log" Dec 03 14:11:51 crc kubenswrapper[4986]: I1203 14:11:51.615955 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flxqdt_3e31ff03-19d4-45b9-a2a2-c80add3e095a/pull/0.log" Dec 03 14:11:51 crc kubenswrapper[4986]: I1203 14:11:51.662880 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flxqdt_3e31ff03-19d4-45b9-a2a2-c80add3e095a/pull/0.log" Dec 03 14:11:51 crc kubenswrapper[4986]: I1203 14:11:51.943538 4986 scope.go:117] "RemoveContainer" containerID="9908c5bc4e1f9c54ddf6781265a9653bd441c26c6cebabaf9a35e940adc79994" Dec 03 14:11:51 crc kubenswrapper[4986]: E1203 14:11:51.943927 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 14:11:52 crc kubenswrapper[4986]: I1203 14:11:52.842901 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flxqdt_3e31ff03-19d4-45b9-a2a2-c80add3e095a/extract/0.log" Dec 03 14:11:52 crc kubenswrapper[4986]: I1203 14:11:52.849225 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flxqdt_3e31ff03-19d4-45b9-a2a2-c80add3e095a/pull/0.log" Dec 03 14:11:52 crc kubenswrapper[4986]: I1203 14:11:52.863234 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flxqdt_3e31ff03-19d4-45b9-a2a2-c80add3e095a/util/0.log" Dec 03 14:11:53 crc kubenswrapper[4986]: I1203 14:11:53.020663 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lg6jv_8ec4e0a9-2294-4df2-b849-9c32dba275f9/util/0.log" Dec 03 14:11:53 crc kubenswrapper[4986]: I1203 14:11:53.231196 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lg6jv_8ec4e0a9-2294-4df2-b849-9c32dba275f9/pull/0.log" Dec 03 14:11:53 crc kubenswrapper[4986]: I1203 14:11:53.246005 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lg6jv_8ec4e0a9-2294-4df2-b849-9c32dba275f9/pull/0.log" Dec 03 14:11:53 crc kubenswrapper[4986]: I1203 14:11:53.334749 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lg6jv_8ec4e0a9-2294-4df2-b849-9c32dba275f9/util/0.log" Dec 03 14:11:53 crc kubenswrapper[4986]: I1203 
14:11:53.367485 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lg6jv_8ec4e0a9-2294-4df2-b849-9c32dba275f9/util/0.log" Dec 03 14:11:53 crc kubenswrapper[4986]: I1203 14:11:53.463155 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lg6jv_8ec4e0a9-2294-4df2-b849-9c32dba275f9/pull/0.log" Dec 03 14:11:53 crc kubenswrapper[4986]: I1203 14:11:53.464671 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lg6jv_8ec4e0a9-2294-4df2-b849-9c32dba275f9/extract/0.log" Dec 03 14:11:53 crc kubenswrapper[4986]: I1203 14:11:53.536594 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x6bw7_c13ffdcc-9a64-45ba-8fec-96d700c3a387/extract-utilities/0.log" Dec 03 14:11:53 crc kubenswrapper[4986]: I1203 14:11:53.874540 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x6bw7_c13ffdcc-9a64-45ba-8fec-96d700c3a387/extract-content/0.log" Dec 03 14:11:53 crc kubenswrapper[4986]: I1203 14:11:53.893752 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x6bw7_c13ffdcc-9a64-45ba-8fec-96d700c3a387/extract-content/0.log" Dec 03 14:11:53 crc kubenswrapper[4986]: I1203 14:11:53.905733 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x6bw7_c13ffdcc-9a64-45ba-8fec-96d700c3a387/extract-utilities/0.log" Dec 03 14:11:54 crc kubenswrapper[4986]: I1203 14:11:54.069575 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x6bw7_c13ffdcc-9a64-45ba-8fec-96d700c3a387/extract-utilities/0.log" Dec 03 14:11:54 crc kubenswrapper[4986]: I1203 14:11:54.115361 4986 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x6bw7_c13ffdcc-9a64-45ba-8fec-96d700c3a387/extract-content/0.log" Dec 03 14:11:54 crc kubenswrapper[4986]: I1203 14:11:54.608454 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cwp4r_d5ee5a5d-5ed3-46fa-acee-0fa1d5f5048b/extract-utilities/0.log" Dec 03 14:11:54 crc kubenswrapper[4986]: I1203 14:11:54.728909 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x6bw7_c13ffdcc-9a64-45ba-8fec-96d700c3a387/registry-server/0.log" Dec 03 14:11:54 crc kubenswrapper[4986]: I1203 14:11:54.792846 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cwp4r_d5ee5a5d-5ed3-46fa-acee-0fa1d5f5048b/extract-content/0.log" Dec 03 14:11:54 crc kubenswrapper[4986]: I1203 14:11:54.837257 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cwp4r_d5ee5a5d-5ed3-46fa-acee-0fa1d5f5048b/extract-content/0.log" Dec 03 14:11:54 crc kubenswrapper[4986]: I1203 14:11:54.840199 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cwp4r_d5ee5a5d-5ed3-46fa-acee-0fa1d5f5048b/extract-utilities/0.log" Dec 03 14:11:55 crc kubenswrapper[4986]: I1203 14:11:55.051106 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cwp4r_d5ee5a5d-5ed3-46fa-acee-0fa1d5f5048b/extract-content/0.log" Dec 03 14:11:55 crc kubenswrapper[4986]: I1203 14:11:55.058765 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cwp4r_d5ee5a5d-5ed3-46fa-acee-0fa1d5f5048b/extract-utilities/0.log" Dec 03 14:11:55 crc kubenswrapper[4986]: I1203 14:11:55.258678 4986 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-7lnmw_4083ec9d-ae1e-4b92-955d-7b2c3ee874c7/marketplace-operator/0.log" Dec 03 14:11:55 crc kubenswrapper[4986]: I1203 14:11:55.466645 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-84w5m_57e3297d-ec2e-4cc1-8939-4d2dd78c6a8e/extract-utilities/0.log" Dec 03 14:11:55 crc kubenswrapper[4986]: I1203 14:11:55.587514 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cwp4r_d5ee5a5d-5ed3-46fa-acee-0fa1d5f5048b/registry-server/0.log" Dec 03 14:11:55 crc kubenswrapper[4986]: I1203 14:11:55.659522 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-84w5m_57e3297d-ec2e-4cc1-8939-4d2dd78c6a8e/extract-utilities/0.log" Dec 03 14:11:55 crc kubenswrapper[4986]: I1203 14:11:55.680782 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-84w5m_57e3297d-ec2e-4cc1-8939-4d2dd78c6a8e/extract-content/0.log" Dec 03 14:11:55 crc kubenswrapper[4986]: I1203 14:11:55.701471 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-84w5m_57e3297d-ec2e-4cc1-8939-4d2dd78c6a8e/extract-content/0.log" Dec 03 14:11:55 crc kubenswrapper[4986]: I1203 14:11:55.890124 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9sqlw_fd3d88fc-f2ed-4b4c-b941-0b6d270c65e5/extract-utilities/0.log" Dec 03 14:11:55 crc kubenswrapper[4986]: I1203 14:11:55.894867 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-84w5m_57e3297d-ec2e-4cc1-8939-4d2dd78c6a8e/extract-utilities/0.log" Dec 03 14:11:55 crc kubenswrapper[4986]: I1203 14:11:55.907191 4986 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-84w5m_57e3297d-ec2e-4cc1-8939-4d2dd78c6a8e/extract-content/0.log" Dec 03 14:11:56 crc kubenswrapper[4986]: I1203 14:11:56.019462 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-84w5m_57e3297d-ec2e-4cc1-8939-4d2dd78c6a8e/registry-server/0.log" Dec 03 14:11:56 crc kubenswrapper[4986]: I1203 14:11:56.131362 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9sqlw_fd3d88fc-f2ed-4b4c-b941-0b6d270c65e5/extract-utilities/0.log" Dec 03 14:11:56 crc kubenswrapper[4986]: I1203 14:11:56.140966 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9sqlw_fd3d88fc-f2ed-4b4c-b941-0b6d270c65e5/extract-content/0.log" Dec 03 14:11:56 crc kubenswrapper[4986]: I1203 14:11:56.143346 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9sqlw_fd3d88fc-f2ed-4b4c-b941-0b6d270c65e5/extract-content/0.log" Dec 03 14:11:56 crc kubenswrapper[4986]: I1203 14:11:56.314322 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9sqlw_fd3d88fc-f2ed-4b4c-b941-0b6d270c65e5/extract-utilities/0.log" Dec 03 14:11:56 crc kubenswrapper[4986]: I1203 14:11:56.338555 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9sqlw_fd3d88fc-f2ed-4b4c-b941-0b6d270c65e5/extract-content/0.log" Dec 03 14:11:56 crc kubenswrapper[4986]: I1203 14:11:56.457679 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9sqlw_fd3d88fc-f2ed-4b4c-b941-0b6d270c65e5/registry-server/0.log" Dec 03 14:12:03 crc kubenswrapper[4986]: I1203 14:12:03.944129 4986 scope.go:117] "RemoveContainer" containerID="9908c5bc4e1f9c54ddf6781265a9653bd441c26c6cebabaf9a35e940adc79994" Dec 03 14:12:03 crc kubenswrapper[4986]: E1203 14:12:03.945230 
4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 14:12:18 crc kubenswrapper[4986]: I1203 14:12:18.944014 4986 scope.go:117] "RemoveContainer" containerID="9908c5bc4e1f9c54ddf6781265a9653bd441c26c6cebabaf9a35e940adc79994" Dec 03 14:12:18 crc kubenswrapper[4986]: E1203 14:12:18.944785 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 14:12:30 crc kubenswrapper[4986]: I1203 14:12:30.943030 4986 scope.go:117] "RemoveContainer" containerID="9908c5bc4e1f9c54ddf6781265a9653bd441c26c6cebabaf9a35e940adc79994" Dec 03 14:12:30 crc kubenswrapper[4986]: E1203 14:12:30.943951 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 14:12:44 crc kubenswrapper[4986]: I1203 14:12:44.944174 4986 scope.go:117] "RemoveContainer" containerID="9908c5bc4e1f9c54ddf6781265a9653bd441c26c6cebabaf9a35e940adc79994" Dec 03 14:12:44 crc kubenswrapper[4986]: E1203 
14:12:44.945103 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 14:12:55 crc kubenswrapper[4986]: I1203 14:12:55.944051 4986 scope.go:117] "RemoveContainer" containerID="9908c5bc4e1f9c54ddf6781265a9653bd441c26c6cebabaf9a35e940adc79994" Dec 03 14:12:55 crc kubenswrapper[4986]: E1203 14:12:55.944955 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 14:13:10 crc kubenswrapper[4986]: I1203 14:13:10.956774 4986 scope.go:117] "RemoveContainer" containerID="9908c5bc4e1f9c54ddf6781265a9653bd441c26c6cebabaf9a35e940adc79994" Dec 03 14:13:10 crc kubenswrapper[4986]: E1203 14:13:10.957975 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 14:13:12 crc kubenswrapper[4986]: I1203 14:13:12.114036 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-h5wqb"] Dec 03 14:13:12 crc 
kubenswrapper[4986]: E1203 14:13:12.114860 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aefcd0f9-63f2-4ad6-b6ab-1cdd83f41438" containerName="container-00" Dec 03 14:13:12 crc kubenswrapper[4986]: I1203 14:13:12.114877 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="aefcd0f9-63f2-4ad6-b6ab-1cdd83f41438" containerName="container-00" Dec 03 14:13:12 crc kubenswrapper[4986]: I1203 14:13:12.115155 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="aefcd0f9-63f2-4ad6-b6ab-1cdd83f41438" containerName="container-00" Dec 03 14:13:12 crc kubenswrapper[4986]: I1203 14:13:12.116931 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h5wqb" Dec 03 14:13:12 crc kubenswrapper[4986]: I1203 14:13:12.136313 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h5wqb"] Dec 03 14:13:12 crc kubenswrapper[4986]: I1203 14:13:12.291495 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82qjs\" (UniqueName: \"kubernetes.io/projected/07d220e7-9211-46fa-82c1-bf90bc829446-kube-api-access-82qjs\") pod \"community-operators-h5wqb\" (UID: \"07d220e7-9211-46fa-82c1-bf90bc829446\") " pod="openshift-marketplace/community-operators-h5wqb" Dec 03 14:13:12 crc kubenswrapper[4986]: I1203 14:13:12.291551 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07d220e7-9211-46fa-82c1-bf90bc829446-utilities\") pod \"community-operators-h5wqb\" (UID: \"07d220e7-9211-46fa-82c1-bf90bc829446\") " pod="openshift-marketplace/community-operators-h5wqb" Dec 03 14:13:12 crc kubenswrapper[4986]: I1203 14:13:12.291577 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/07d220e7-9211-46fa-82c1-bf90bc829446-catalog-content\") pod \"community-operators-h5wqb\" (UID: \"07d220e7-9211-46fa-82c1-bf90bc829446\") " pod="openshift-marketplace/community-operators-h5wqb" Dec 03 14:13:12 crc kubenswrapper[4986]: I1203 14:13:12.393067 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82qjs\" (UniqueName: \"kubernetes.io/projected/07d220e7-9211-46fa-82c1-bf90bc829446-kube-api-access-82qjs\") pod \"community-operators-h5wqb\" (UID: \"07d220e7-9211-46fa-82c1-bf90bc829446\") " pod="openshift-marketplace/community-operators-h5wqb" Dec 03 14:13:12 crc kubenswrapper[4986]: I1203 14:13:12.393129 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07d220e7-9211-46fa-82c1-bf90bc829446-utilities\") pod \"community-operators-h5wqb\" (UID: \"07d220e7-9211-46fa-82c1-bf90bc829446\") " pod="openshift-marketplace/community-operators-h5wqb" Dec 03 14:13:12 crc kubenswrapper[4986]: I1203 14:13:12.393152 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07d220e7-9211-46fa-82c1-bf90bc829446-catalog-content\") pod \"community-operators-h5wqb\" (UID: \"07d220e7-9211-46fa-82c1-bf90bc829446\") " pod="openshift-marketplace/community-operators-h5wqb" Dec 03 14:13:12 crc kubenswrapper[4986]: I1203 14:13:12.393685 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07d220e7-9211-46fa-82c1-bf90bc829446-utilities\") pod \"community-operators-h5wqb\" (UID: \"07d220e7-9211-46fa-82c1-bf90bc829446\") " pod="openshift-marketplace/community-operators-h5wqb" Dec 03 14:13:12 crc kubenswrapper[4986]: I1203 14:13:12.393777 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/07d220e7-9211-46fa-82c1-bf90bc829446-catalog-content\") pod \"community-operators-h5wqb\" (UID: \"07d220e7-9211-46fa-82c1-bf90bc829446\") " pod="openshift-marketplace/community-operators-h5wqb" Dec 03 14:13:12 crc kubenswrapper[4986]: I1203 14:13:12.417403 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82qjs\" (UniqueName: \"kubernetes.io/projected/07d220e7-9211-46fa-82c1-bf90bc829446-kube-api-access-82qjs\") pod \"community-operators-h5wqb\" (UID: \"07d220e7-9211-46fa-82c1-bf90bc829446\") " pod="openshift-marketplace/community-operators-h5wqb" Dec 03 14:13:12 crc kubenswrapper[4986]: I1203 14:13:12.444486 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h5wqb" Dec 03 14:13:13 crc kubenswrapper[4986]: I1203 14:13:13.056051 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h5wqb"] Dec 03 14:13:13 crc kubenswrapper[4986]: I1203 14:13:13.515949 4986 generic.go:334] "Generic (PLEG): container finished" podID="07d220e7-9211-46fa-82c1-bf90bc829446" containerID="3ac7ac0471e718d162d459466290537129010facd61b6c816002d9ad9e619358" exitCode=0 Dec 03 14:13:13 crc kubenswrapper[4986]: I1203 14:13:13.516043 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h5wqb" event={"ID":"07d220e7-9211-46fa-82c1-bf90bc829446","Type":"ContainerDied","Data":"3ac7ac0471e718d162d459466290537129010facd61b6c816002d9ad9e619358"} Dec 03 14:13:13 crc kubenswrapper[4986]: I1203 14:13:13.516079 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h5wqb" event={"ID":"07d220e7-9211-46fa-82c1-bf90bc829446","Type":"ContainerStarted","Data":"4932d4e7bf41da12b9cf46c6ecd0ab763a6d37805219422fa36ac47c9e70db21"} Dec 03 14:13:13 crc kubenswrapper[4986]: I1203 14:13:13.519442 4986 provider.go:102] Refreshing cache for 
provider: *credentialprovider.defaultDockerConfigProvider Dec 03 14:13:15 crc kubenswrapper[4986]: I1203 14:13:15.539404 4986 generic.go:334] "Generic (PLEG): container finished" podID="07d220e7-9211-46fa-82c1-bf90bc829446" containerID="5f3d7e14eaf180bb32bd093f0e30ac4df20ee7729398024b4bafe1d756a760c4" exitCode=0 Dec 03 14:13:15 crc kubenswrapper[4986]: I1203 14:13:15.539526 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h5wqb" event={"ID":"07d220e7-9211-46fa-82c1-bf90bc829446","Type":"ContainerDied","Data":"5f3d7e14eaf180bb32bd093f0e30ac4df20ee7729398024b4bafe1d756a760c4"} Dec 03 14:13:16 crc kubenswrapper[4986]: I1203 14:13:16.551075 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h5wqb" event={"ID":"07d220e7-9211-46fa-82c1-bf90bc829446","Type":"ContainerStarted","Data":"502c986b7f5378b26c754a4828d8aaf0d2f7dff98db8b7049de03a367c316672"} Dec 03 14:13:16 crc kubenswrapper[4986]: I1203 14:13:16.580577 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-h5wqb" podStartSLOduration=2.122176488 podStartE2EDuration="4.580551557s" podCreationTimestamp="2025-12-03 14:13:12 +0000 UTC" firstStartedPulling="2025-12-03 14:13:13.518863454 +0000 UTC m=+4652.985294685" lastFinishedPulling="2025-12-03 14:13:15.977238523 +0000 UTC m=+4655.443669754" observedRunningTime="2025-12-03 14:13:16.572473501 +0000 UTC m=+4656.038904702" watchObservedRunningTime="2025-12-03 14:13:16.580551557 +0000 UTC m=+4656.046982758" Dec 03 14:13:21 crc kubenswrapper[4986]: I1203 14:13:21.944551 4986 scope.go:117] "RemoveContainer" containerID="9908c5bc4e1f9c54ddf6781265a9653bd441c26c6cebabaf9a35e940adc79994" Dec 03 14:13:21 crc kubenswrapper[4986]: E1203 14:13:21.945606 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 14:13:22 crc kubenswrapper[4986]: I1203 14:13:22.447836 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-h5wqb" Dec 03 14:13:22 crc kubenswrapper[4986]: I1203 14:13:22.448035 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-h5wqb" Dec 03 14:13:22 crc kubenswrapper[4986]: I1203 14:13:22.505028 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-h5wqb" Dec 03 14:13:22 crc kubenswrapper[4986]: I1203 14:13:22.684952 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-h5wqb" Dec 03 14:13:23 crc kubenswrapper[4986]: I1203 14:13:23.907793 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h5wqb"] Dec 03 14:13:25 crc kubenswrapper[4986]: I1203 14:13:25.645418 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-h5wqb" podUID="07d220e7-9211-46fa-82c1-bf90bc829446" containerName="registry-server" containerID="cri-o://502c986b7f5378b26c754a4828d8aaf0d2f7dff98db8b7049de03a367c316672" gracePeriod=2 Dec 03 14:13:26 crc kubenswrapper[4986]: I1203 14:13:26.123711 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h5wqb" Dec 03 14:13:26 crc kubenswrapper[4986]: I1203 14:13:26.295364 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07d220e7-9211-46fa-82c1-bf90bc829446-utilities\") pod \"07d220e7-9211-46fa-82c1-bf90bc829446\" (UID: \"07d220e7-9211-46fa-82c1-bf90bc829446\") " Dec 03 14:13:26 crc kubenswrapper[4986]: I1203 14:13:26.295439 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07d220e7-9211-46fa-82c1-bf90bc829446-catalog-content\") pod \"07d220e7-9211-46fa-82c1-bf90bc829446\" (UID: \"07d220e7-9211-46fa-82c1-bf90bc829446\") " Dec 03 14:13:26 crc kubenswrapper[4986]: I1203 14:13:26.295514 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82qjs\" (UniqueName: \"kubernetes.io/projected/07d220e7-9211-46fa-82c1-bf90bc829446-kube-api-access-82qjs\") pod \"07d220e7-9211-46fa-82c1-bf90bc829446\" (UID: \"07d220e7-9211-46fa-82c1-bf90bc829446\") " Dec 03 14:13:26 crc kubenswrapper[4986]: I1203 14:13:26.297300 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07d220e7-9211-46fa-82c1-bf90bc829446-utilities" (OuterVolumeSpecName: "utilities") pod "07d220e7-9211-46fa-82c1-bf90bc829446" (UID: "07d220e7-9211-46fa-82c1-bf90bc829446"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 14:13:26 crc kubenswrapper[4986]: I1203 14:13:26.301503 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07d220e7-9211-46fa-82c1-bf90bc829446-kube-api-access-82qjs" (OuterVolumeSpecName: "kube-api-access-82qjs") pod "07d220e7-9211-46fa-82c1-bf90bc829446" (UID: "07d220e7-9211-46fa-82c1-bf90bc829446"). InnerVolumeSpecName "kube-api-access-82qjs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 14:13:26 crc kubenswrapper[4986]: I1203 14:13:26.367134 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07d220e7-9211-46fa-82c1-bf90bc829446-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "07d220e7-9211-46fa-82c1-bf90bc829446" (UID: "07d220e7-9211-46fa-82c1-bf90bc829446"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 14:13:26 crc kubenswrapper[4986]: I1203 14:13:26.397371 4986 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07d220e7-9211-46fa-82c1-bf90bc829446-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 14:13:26 crc kubenswrapper[4986]: I1203 14:13:26.397402 4986 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07d220e7-9211-46fa-82c1-bf90bc829446-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 14:13:26 crc kubenswrapper[4986]: I1203 14:13:26.398268 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82qjs\" (UniqueName: \"kubernetes.io/projected/07d220e7-9211-46fa-82c1-bf90bc829446-kube-api-access-82qjs\") on node \"crc\" DevicePath \"\"" Dec 03 14:13:26 crc kubenswrapper[4986]: I1203 14:13:26.653904 4986 generic.go:334] "Generic (PLEG): container finished" podID="07d220e7-9211-46fa-82c1-bf90bc829446" containerID="502c986b7f5378b26c754a4828d8aaf0d2f7dff98db8b7049de03a367c316672" exitCode=0 Dec 03 14:13:26 crc kubenswrapper[4986]: I1203 14:13:26.653951 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h5wqb" Dec 03 14:13:26 crc kubenswrapper[4986]: I1203 14:13:26.653964 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h5wqb" event={"ID":"07d220e7-9211-46fa-82c1-bf90bc829446","Type":"ContainerDied","Data":"502c986b7f5378b26c754a4828d8aaf0d2f7dff98db8b7049de03a367c316672"} Dec 03 14:13:26 crc kubenswrapper[4986]: I1203 14:13:26.654002 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h5wqb" event={"ID":"07d220e7-9211-46fa-82c1-bf90bc829446","Type":"ContainerDied","Data":"4932d4e7bf41da12b9cf46c6ecd0ab763a6d37805219422fa36ac47c9e70db21"} Dec 03 14:13:26 crc kubenswrapper[4986]: I1203 14:13:26.654029 4986 scope.go:117] "RemoveContainer" containerID="502c986b7f5378b26c754a4828d8aaf0d2f7dff98db8b7049de03a367c316672" Dec 03 14:13:26 crc kubenswrapper[4986]: I1203 14:13:26.673015 4986 scope.go:117] "RemoveContainer" containerID="5f3d7e14eaf180bb32bd093f0e30ac4df20ee7729398024b4bafe1d756a760c4" Dec 03 14:13:26 crc kubenswrapper[4986]: I1203 14:13:26.703082 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h5wqb"] Dec 03 14:13:26 crc kubenswrapper[4986]: I1203 14:13:26.713093 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-h5wqb"] Dec 03 14:13:26 crc kubenswrapper[4986]: I1203 14:13:26.733547 4986 scope.go:117] "RemoveContainer" containerID="3ac7ac0471e718d162d459466290537129010facd61b6c816002d9ad9e619358" Dec 03 14:13:26 crc kubenswrapper[4986]: I1203 14:13:26.759358 4986 scope.go:117] "RemoveContainer" containerID="502c986b7f5378b26c754a4828d8aaf0d2f7dff98db8b7049de03a367c316672" Dec 03 14:13:26 crc kubenswrapper[4986]: E1203 14:13:26.759972 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"502c986b7f5378b26c754a4828d8aaf0d2f7dff98db8b7049de03a367c316672\": container with ID starting with 502c986b7f5378b26c754a4828d8aaf0d2f7dff98db8b7049de03a367c316672 not found: ID does not exist" containerID="502c986b7f5378b26c754a4828d8aaf0d2f7dff98db8b7049de03a367c316672" Dec 03 14:13:26 crc kubenswrapper[4986]: I1203 14:13:26.760016 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"502c986b7f5378b26c754a4828d8aaf0d2f7dff98db8b7049de03a367c316672"} err="failed to get container status \"502c986b7f5378b26c754a4828d8aaf0d2f7dff98db8b7049de03a367c316672\": rpc error: code = NotFound desc = could not find container \"502c986b7f5378b26c754a4828d8aaf0d2f7dff98db8b7049de03a367c316672\": container with ID starting with 502c986b7f5378b26c754a4828d8aaf0d2f7dff98db8b7049de03a367c316672 not found: ID does not exist" Dec 03 14:13:26 crc kubenswrapper[4986]: I1203 14:13:26.760042 4986 scope.go:117] "RemoveContainer" containerID="5f3d7e14eaf180bb32bd093f0e30ac4df20ee7729398024b4bafe1d756a760c4" Dec 03 14:13:26 crc kubenswrapper[4986]: E1203 14:13:26.760512 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f3d7e14eaf180bb32bd093f0e30ac4df20ee7729398024b4bafe1d756a760c4\": container with ID starting with 5f3d7e14eaf180bb32bd093f0e30ac4df20ee7729398024b4bafe1d756a760c4 not found: ID does not exist" containerID="5f3d7e14eaf180bb32bd093f0e30ac4df20ee7729398024b4bafe1d756a760c4" Dec 03 14:13:26 crc kubenswrapper[4986]: I1203 14:13:26.760541 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f3d7e14eaf180bb32bd093f0e30ac4df20ee7729398024b4bafe1d756a760c4"} err="failed to get container status \"5f3d7e14eaf180bb32bd093f0e30ac4df20ee7729398024b4bafe1d756a760c4\": rpc error: code = NotFound desc = could not find container \"5f3d7e14eaf180bb32bd093f0e30ac4df20ee7729398024b4bafe1d756a760c4\": container with ID 
starting with 5f3d7e14eaf180bb32bd093f0e30ac4df20ee7729398024b4bafe1d756a760c4 not found: ID does not exist" Dec 03 14:13:26 crc kubenswrapper[4986]: I1203 14:13:26.760559 4986 scope.go:117] "RemoveContainer" containerID="3ac7ac0471e718d162d459466290537129010facd61b6c816002d9ad9e619358" Dec 03 14:13:26 crc kubenswrapper[4986]: E1203 14:13:26.761036 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ac7ac0471e718d162d459466290537129010facd61b6c816002d9ad9e619358\": container with ID starting with 3ac7ac0471e718d162d459466290537129010facd61b6c816002d9ad9e619358 not found: ID does not exist" containerID="3ac7ac0471e718d162d459466290537129010facd61b6c816002d9ad9e619358" Dec 03 14:13:26 crc kubenswrapper[4986]: I1203 14:13:26.761121 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ac7ac0471e718d162d459466290537129010facd61b6c816002d9ad9e619358"} err="failed to get container status \"3ac7ac0471e718d162d459466290537129010facd61b6c816002d9ad9e619358\": rpc error: code = NotFound desc = could not find container \"3ac7ac0471e718d162d459466290537129010facd61b6c816002d9ad9e619358\": container with ID starting with 3ac7ac0471e718d162d459466290537129010facd61b6c816002d9ad9e619358 not found: ID does not exist" Dec 03 14:13:26 crc kubenswrapper[4986]: I1203 14:13:26.956329 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07d220e7-9211-46fa-82c1-bf90bc829446" path="/var/lib/kubelet/pods/07d220e7-9211-46fa-82c1-bf90bc829446/volumes" Dec 03 14:13:32 crc kubenswrapper[4986]: I1203 14:13:32.943983 4986 scope.go:117] "RemoveContainer" containerID="9908c5bc4e1f9c54ddf6781265a9653bd441c26c6cebabaf9a35e940adc79994" Dec 03 14:13:32 crc kubenswrapper[4986]: E1203 14:13:32.944748 4986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-xggpj_openshift-machine-config-operator(f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" Dec 03 14:13:37 crc kubenswrapper[4986]: I1203 14:13:37.115983 4986 generic.go:334] "Generic (PLEG): container finished" podID="5f810777-8f08-434e-ad09-0a4e18694dd0" containerID="ae5dce02cecac31dba54a82509965958a2c10c78fbd1119e71ab949183dfae6c" exitCode=0 Dec 03 14:13:37 crc kubenswrapper[4986]: I1203 14:13:37.116051 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z5m8x/must-gather-sxmxm" event={"ID":"5f810777-8f08-434e-ad09-0a4e18694dd0","Type":"ContainerDied","Data":"ae5dce02cecac31dba54a82509965958a2c10c78fbd1119e71ab949183dfae6c"} Dec 03 14:13:37 crc kubenswrapper[4986]: I1203 14:13:37.120004 4986 scope.go:117] "RemoveContainer" containerID="ae5dce02cecac31dba54a82509965958a2c10c78fbd1119e71ab949183dfae6c" Dec 03 14:13:37 crc kubenswrapper[4986]: I1203 14:13:37.330870 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-z5m8x_must-gather-sxmxm_5f810777-8f08-434e-ad09-0a4e18694dd0/gather/0.log" Dec 03 14:13:43 crc kubenswrapper[4986]: I1203 14:13:43.944099 4986 scope.go:117] "RemoveContainer" containerID="9908c5bc4e1f9c54ddf6781265a9653bd441c26c6cebabaf9a35e940adc79994" Dec 03 14:13:45 crc kubenswrapper[4986]: I1203 14:13:45.200390 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" event={"ID":"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c","Type":"ContainerStarted","Data":"a1927d3957f997e0b9f0a26a316582ff4f5de725b2545467ef2eb90d71e4bf06"} Dec 03 14:13:47 crc kubenswrapper[4986]: I1203 14:13:47.526029 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-z5m8x/must-gather-sxmxm"] Dec 03 14:13:47 crc kubenswrapper[4986]: I1203 14:13:47.526923 
4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-z5m8x/must-gather-sxmxm" podUID="5f810777-8f08-434e-ad09-0a4e18694dd0" containerName="copy" containerID="cri-o://b90a08fe18e948af22be71a0e9977a45093d62e78faf7e0da4f5e8927a283c4e" gracePeriod=2 Dec 03 14:13:47 crc kubenswrapper[4986]: I1203 14:13:47.542894 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-z5m8x/must-gather-sxmxm"] Dec 03 14:13:48 crc kubenswrapper[4986]: I1203 14:13:48.233137 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-z5m8x_must-gather-sxmxm_5f810777-8f08-434e-ad09-0a4e18694dd0/copy/0.log" Dec 03 14:13:48 crc kubenswrapper[4986]: I1203 14:13:48.233777 4986 generic.go:334] "Generic (PLEG): container finished" podID="5f810777-8f08-434e-ad09-0a4e18694dd0" containerID="b90a08fe18e948af22be71a0e9977a45093d62e78faf7e0da4f5e8927a283c4e" exitCode=143 Dec 03 14:13:48 crc kubenswrapper[4986]: I1203 14:13:48.233819 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e253950c064e7647ce8659f18f611e3488a8d0508b193def3f8defcc28d9301" Dec 03 14:13:48 crc kubenswrapper[4986]: I1203 14:13:48.307734 4986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-z5m8x_must-gather-sxmxm_5f810777-8f08-434e-ad09-0a4e18694dd0/copy/0.log" Dec 03 14:13:48 crc kubenswrapper[4986]: I1203 14:13:48.308210 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-z5m8x/must-gather-sxmxm" Dec 03 14:13:48 crc kubenswrapper[4986]: I1203 14:13:48.441663 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5f810777-8f08-434e-ad09-0a4e18694dd0-must-gather-output\") pod \"5f810777-8f08-434e-ad09-0a4e18694dd0\" (UID: \"5f810777-8f08-434e-ad09-0a4e18694dd0\") " Dec 03 14:13:48 crc kubenswrapper[4986]: I1203 14:13:48.441923 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fr2p\" (UniqueName: \"kubernetes.io/projected/5f810777-8f08-434e-ad09-0a4e18694dd0-kube-api-access-2fr2p\") pod \"5f810777-8f08-434e-ad09-0a4e18694dd0\" (UID: \"5f810777-8f08-434e-ad09-0a4e18694dd0\") " Dec 03 14:13:48 crc kubenswrapper[4986]: I1203 14:13:48.540611 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f810777-8f08-434e-ad09-0a4e18694dd0-kube-api-access-2fr2p" (OuterVolumeSpecName: "kube-api-access-2fr2p") pod "5f810777-8f08-434e-ad09-0a4e18694dd0" (UID: "5f810777-8f08-434e-ad09-0a4e18694dd0"). InnerVolumeSpecName "kube-api-access-2fr2p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 14:13:48 crc kubenswrapper[4986]: I1203 14:13:48.550506 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fr2p\" (UniqueName: \"kubernetes.io/projected/5f810777-8f08-434e-ad09-0a4e18694dd0-kube-api-access-2fr2p\") on node \"crc\" DevicePath \"\"" Dec 03 14:13:48 crc kubenswrapper[4986]: I1203 14:13:48.616267 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f810777-8f08-434e-ad09-0a4e18694dd0-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "5f810777-8f08-434e-ad09-0a4e18694dd0" (UID: "5f810777-8f08-434e-ad09-0a4e18694dd0"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 14:13:48 crc kubenswrapper[4986]: I1203 14:13:48.651970 4986 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5f810777-8f08-434e-ad09-0a4e18694dd0-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 03 14:13:48 crc kubenswrapper[4986]: I1203 14:13:48.954618 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f810777-8f08-434e-ad09-0a4e18694dd0" path="/var/lib/kubelet/pods/5f810777-8f08-434e-ad09-0a4e18694dd0/volumes" Dec 03 14:13:49 crc kubenswrapper[4986]: I1203 14:13:49.242182 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z5m8x/must-gather-sxmxm" Dec 03 14:14:27 crc kubenswrapper[4986]: I1203 14:14:27.343143 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jvnfk"] Dec 03 14:14:27 crc kubenswrapper[4986]: E1203 14:14:27.344195 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07d220e7-9211-46fa-82c1-bf90bc829446" containerName="extract-utilities" Dec 03 14:14:27 crc kubenswrapper[4986]: I1203 14:14:27.344213 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="07d220e7-9211-46fa-82c1-bf90bc829446" containerName="extract-utilities" Dec 03 14:14:27 crc kubenswrapper[4986]: E1203 14:14:27.344232 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07d220e7-9211-46fa-82c1-bf90bc829446" containerName="registry-server" Dec 03 14:14:27 crc kubenswrapper[4986]: I1203 14:14:27.344240 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="07d220e7-9211-46fa-82c1-bf90bc829446" containerName="registry-server" Dec 03 14:14:27 crc kubenswrapper[4986]: E1203 14:14:27.344263 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07d220e7-9211-46fa-82c1-bf90bc829446" containerName="extract-content" Dec 03 14:14:27 crc kubenswrapper[4986]: I1203 
14:14:27.344270 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="07d220e7-9211-46fa-82c1-bf90bc829446" containerName="extract-content" Dec 03 14:14:27 crc kubenswrapper[4986]: E1203 14:14:27.344307 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f810777-8f08-434e-ad09-0a4e18694dd0" containerName="gather" Dec 03 14:14:27 crc kubenswrapper[4986]: I1203 14:14:27.344316 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f810777-8f08-434e-ad09-0a4e18694dd0" containerName="gather" Dec 03 14:14:27 crc kubenswrapper[4986]: E1203 14:14:27.344330 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f810777-8f08-434e-ad09-0a4e18694dd0" containerName="copy" Dec 03 14:14:27 crc kubenswrapper[4986]: I1203 14:14:27.344338 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f810777-8f08-434e-ad09-0a4e18694dd0" containerName="copy" Dec 03 14:14:27 crc kubenswrapper[4986]: I1203 14:14:27.344595 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="07d220e7-9211-46fa-82c1-bf90bc829446" containerName="registry-server" Dec 03 14:14:27 crc kubenswrapper[4986]: I1203 14:14:27.344622 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f810777-8f08-434e-ad09-0a4e18694dd0" containerName="copy" Dec 03 14:14:27 crc kubenswrapper[4986]: I1203 14:14:27.344645 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f810777-8f08-434e-ad09-0a4e18694dd0" containerName="gather" Dec 03 14:14:27 crc kubenswrapper[4986]: I1203 14:14:27.346415 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jvnfk" Dec 03 14:14:27 crc kubenswrapper[4986]: I1203 14:14:27.362715 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jvnfk"] Dec 03 14:14:27 crc kubenswrapper[4986]: I1203 14:14:27.428648 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/930b386c-b439-49c0-9426-e8b41695de6d-catalog-content\") pod \"redhat-marketplace-jvnfk\" (UID: \"930b386c-b439-49c0-9426-e8b41695de6d\") " pod="openshift-marketplace/redhat-marketplace-jvnfk" Dec 03 14:14:27 crc kubenswrapper[4986]: I1203 14:14:27.428897 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/930b386c-b439-49c0-9426-e8b41695de6d-utilities\") pod \"redhat-marketplace-jvnfk\" (UID: \"930b386c-b439-49c0-9426-e8b41695de6d\") " pod="openshift-marketplace/redhat-marketplace-jvnfk" Dec 03 14:14:27 crc kubenswrapper[4986]: I1203 14:14:27.429159 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vwhr\" (UniqueName: \"kubernetes.io/projected/930b386c-b439-49c0-9426-e8b41695de6d-kube-api-access-4vwhr\") pod \"redhat-marketplace-jvnfk\" (UID: \"930b386c-b439-49c0-9426-e8b41695de6d\") " pod="openshift-marketplace/redhat-marketplace-jvnfk" Dec 03 14:14:27 crc kubenswrapper[4986]: I1203 14:14:27.531814 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/930b386c-b439-49c0-9426-e8b41695de6d-utilities\") pod \"redhat-marketplace-jvnfk\" (UID: \"930b386c-b439-49c0-9426-e8b41695de6d\") " pod="openshift-marketplace/redhat-marketplace-jvnfk" Dec 03 14:14:27 crc kubenswrapper[4986]: I1203 14:14:27.531984 4986 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-4vwhr\" (UniqueName: \"kubernetes.io/projected/930b386c-b439-49c0-9426-e8b41695de6d-kube-api-access-4vwhr\") pod \"redhat-marketplace-jvnfk\" (UID: \"930b386c-b439-49c0-9426-e8b41695de6d\") " pod="openshift-marketplace/redhat-marketplace-jvnfk" Dec 03 14:14:27 crc kubenswrapper[4986]: I1203 14:14:27.532245 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/930b386c-b439-49c0-9426-e8b41695de6d-catalog-content\") pod \"redhat-marketplace-jvnfk\" (UID: \"930b386c-b439-49c0-9426-e8b41695de6d\") " pod="openshift-marketplace/redhat-marketplace-jvnfk" Dec 03 14:14:27 crc kubenswrapper[4986]: I1203 14:14:27.532494 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/930b386c-b439-49c0-9426-e8b41695de6d-utilities\") pod \"redhat-marketplace-jvnfk\" (UID: \"930b386c-b439-49c0-9426-e8b41695de6d\") " pod="openshift-marketplace/redhat-marketplace-jvnfk" Dec 03 14:14:27 crc kubenswrapper[4986]: I1203 14:14:27.532785 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/930b386c-b439-49c0-9426-e8b41695de6d-catalog-content\") pod \"redhat-marketplace-jvnfk\" (UID: \"930b386c-b439-49c0-9426-e8b41695de6d\") " pod="openshift-marketplace/redhat-marketplace-jvnfk" Dec 03 14:14:27 crc kubenswrapper[4986]: I1203 14:14:27.562066 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vwhr\" (UniqueName: \"kubernetes.io/projected/930b386c-b439-49c0-9426-e8b41695de6d-kube-api-access-4vwhr\") pod \"redhat-marketplace-jvnfk\" (UID: \"930b386c-b439-49c0-9426-e8b41695de6d\") " pod="openshift-marketplace/redhat-marketplace-jvnfk" Dec 03 14:14:27 crc kubenswrapper[4986]: I1203 14:14:27.675962 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jvnfk" Dec 03 14:14:28 crc kubenswrapper[4986]: I1203 14:14:28.149851 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jvnfk"] Dec 03 14:14:28 crc kubenswrapper[4986]: I1203 14:14:28.612298 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jvnfk" event={"ID":"930b386c-b439-49c0-9426-e8b41695de6d","Type":"ContainerStarted","Data":"1c19e5f3156b5f51247685ea0314b5288f7257646c53b4fa5d037ca5bfc8eef3"} Dec 03 14:14:28 crc kubenswrapper[4986]: I1203 14:14:28.612693 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jvnfk" event={"ID":"930b386c-b439-49c0-9426-e8b41695de6d","Type":"ContainerStarted","Data":"b9ecf4a8440c1afb8ab4d7d2ca2cf74fc4458d1e61cc5eb56f186ab2f5461915"} Dec 03 14:14:29 crc kubenswrapper[4986]: I1203 14:14:29.621822 4986 generic.go:334] "Generic (PLEG): container finished" podID="930b386c-b439-49c0-9426-e8b41695de6d" containerID="1c19e5f3156b5f51247685ea0314b5288f7257646c53b4fa5d037ca5bfc8eef3" exitCode=0 Dec 03 14:14:29 crc kubenswrapper[4986]: I1203 14:14:29.621891 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jvnfk" event={"ID":"930b386c-b439-49c0-9426-e8b41695de6d","Type":"ContainerDied","Data":"1c19e5f3156b5f51247685ea0314b5288f7257646c53b4fa5d037ca5bfc8eef3"} Dec 03 14:14:30 crc kubenswrapper[4986]: I1203 14:14:30.630982 4986 generic.go:334] "Generic (PLEG): container finished" podID="930b386c-b439-49c0-9426-e8b41695de6d" containerID="acd763bd874a7467e3b0490eb7f0d9b06262e531585680c66bb068461a5505bf" exitCode=0 Dec 03 14:14:30 crc kubenswrapper[4986]: I1203 14:14:30.631156 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jvnfk" 
event={"ID":"930b386c-b439-49c0-9426-e8b41695de6d","Type":"ContainerDied","Data":"acd763bd874a7467e3b0490eb7f0d9b06262e531585680c66bb068461a5505bf"} Dec 03 14:14:31 crc kubenswrapper[4986]: I1203 14:14:31.643818 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jvnfk" event={"ID":"930b386c-b439-49c0-9426-e8b41695de6d","Type":"ContainerStarted","Data":"2755a1138a842965a9e681dcbe2d5973024db239f87510aa1a43bd57a33d5da1"} Dec 03 14:14:31 crc kubenswrapper[4986]: I1203 14:14:31.667672 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jvnfk" podStartSLOduration=3.031491455 podStartE2EDuration="4.667650188s" podCreationTimestamp="2025-12-03 14:14:27 +0000 UTC" firstStartedPulling="2025-12-03 14:14:29.623921837 +0000 UTC m=+4729.090353028" lastFinishedPulling="2025-12-03 14:14:31.26008053 +0000 UTC m=+4730.726511761" observedRunningTime="2025-12-03 14:14:31.661047552 +0000 UTC m=+4731.127478753" watchObservedRunningTime="2025-12-03 14:14:31.667650188 +0000 UTC m=+4731.134081389" Dec 03 14:14:37 crc kubenswrapper[4986]: I1203 14:14:37.676708 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jvnfk" Dec 03 14:14:37 crc kubenswrapper[4986]: I1203 14:14:37.677834 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jvnfk" Dec 03 14:14:37 crc kubenswrapper[4986]: I1203 14:14:37.741170 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jvnfk" Dec 03 14:14:38 crc kubenswrapper[4986]: I1203 14:14:38.472655 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jvnfk" Dec 03 14:14:38 crc kubenswrapper[4986]: I1203 14:14:38.518656 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-jvnfk"] Dec 03 14:14:39 crc kubenswrapper[4986]: I1203 14:14:39.905016 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jvnfk" podUID="930b386c-b439-49c0-9426-e8b41695de6d" containerName="registry-server" containerID="cri-o://2755a1138a842965a9e681dcbe2d5973024db239f87510aa1a43bd57a33d5da1" gracePeriod=2 Dec 03 14:14:40 crc kubenswrapper[4986]: I1203 14:14:40.860631 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jvnfk" Dec 03 14:14:40 crc kubenswrapper[4986]: I1203 14:14:40.915092 4986 generic.go:334] "Generic (PLEG): container finished" podID="930b386c-b439-49c0-9426-e8b41695de6d" containerID="2755a1138a842965a9e681dcbe2d5973024db239f87510aa1a43bd57a33d5da1" exitCode=0 Dec 03 14:14:40 crc kubenswrapper[4986]: I1203 14:14:40.915160 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jvnfk" Dec 03 14:14:40 crc kubenswrapper[4986]: I1203 14:14:40.915175 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jvnfk" event={"ID":"930b386c-b439-49c0-9426-e8b41695de6d","Type":"ContainerDied","Data":"2755a1138a842965a9e681dcbe2d5973024db239f87510aa1a43bd57a33d5da1"} Dec 03 14:14:40 crc kubenswrapper[4986]: I1203 14:14:40.916636 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jvnfk" event={"ID":"930b386c-b439-49c0-9426-e8b41695de6d","Type":"ContainerDied","Data":"b9ecf4a8440c1afb8ab4d7d2ca2cf74fc4458d1e61cc5eb56f186ab2f5461915"} Dec 03 14:14:40 crc kubenswrapper[4986]: I1203 14:14:40.916688 4986 scope.go:117] "RemoveContainer" containerID="2755a1138a842965a9e681dcbe2d5973024db239f87510aa1a43bd57a33d5da1" Dec 03 14:14:40 crc kubenswrapper[4986]: I1203 14:14:40.938025 4986 scope.go:117] "RemoveContainer" 
containerID="acd763bd874a7467e3b0490eb7f0d9b06262e531585680c66bb068461a5505bf" Dec 03 14:14:40 crc kubenswrapper[4986]: I1203 14:14:40.963065 4986 scope.go:117] "RemoveContainer" containerID="1c19e5f3156b5f51247685ea0314b5288f7257646c53b4fa5d037ca5bfc8eef3" Dec 03 14:14:41 crc kubenswrapper[4986]: I1203 14:14:41.021791 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/930b386c-b439-49c0-9426-e8b41695de6d-catalog-content\") pod \"930b386c-b439-49c0-9426-e8b41695de6d\" (UID: \"930b386c-b439-49c0-9426-e8b41695de6d\") " Dec 03 14:14:41 crc kubenswrapper[4986]: I1203 14:14:41.021957 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/930b386c-b439-49c0-9426-e8b41695de6d-utilities\") pod \"930b386c-b439-49c0-9426-e8b41695de6d\" (UID: \"930b386c-b439-49c0-9426-e8b41695de6d\") " Dec 03 14:14:41 crc kubenswrapper[4986]: I1203 14:14:41.022113 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vwhr\" (UniqueName: \"kubernetes.io/projected/930b386c-b439-49c0-9426-e8b41695de6d-kube-api-access-4vwhr\") pod \"930b386c-b439-49c0-9426-e8b41695de6d\" (UID: \"930b386c-b439-49c0-9426-e8b41695de6d\") " Dec 03 14:14:41 crc kubenswrapper[4986]: I1203 14:14:41.022864 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/930b386c-b439-49c0-9426-e8b41695de6d-utilities" (OuterVolumeSpecName: "utilities") pod "930b386c-b439-49c0-9426-e8b41695de6d" (UID: "930b386c-b439-49c0-9426-e8b41695de6d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 14:14:41 crc kubenswrapper[4986]: I1203 14:14:41.026065 4986 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/930b386c-b439-49c0-9426-e8b41695de6d-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 14:14:41 crc kubenswrapper[4986]: I1203 14:14:41.027042 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/930b386c-b439-49c0-9426-e8b41695de6d-kube-api-access-4vwhr" (OuterVolumeSpecName: "kube-api-access-4vwhr") pod "930b386c-b439-49c0-9426-e8b41695de6d" (UID: "930b386c-b439-49c0-9426-e8b41695de6d"). InnerVolumeSpecName "kube-api-access-4vwhr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 14:14:41 crc kubenswrapper[4986]: I1203 14:14:41.029163 4986 scope.go:117] "RemoveContainer" containerID="2755a1138a842965a9e681dcbe2d5973024db239f87510aa1a43bd57a33d5da1" Dec 03 14:14:41 crc kubenswrapper[4986]: E1203 14:14:41.029606 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2755a1138a842965a9e681dcbe2d5973024db239f87510aa1a43bd57a33d5da1\": container with ID starting with 2755a1138a842965a9e681dcbe2d5973024db239f87510aa1a43bd57a33d5da1 not found: ID does not exist" containerID="2755a1138a842965a9e681dcbe2d5973024db239f87510aa1a43bd57a33d5da1" Dec 03 14:14:41 crc kubenswrapper[4986]: I1203 14:14:41.029642 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2755a1138a842965a9e681dcbe2d5973024db239f87510aa1a43bd57a33d5da1"} err="failed to get container status \"2755a1138a842965a9e681dcbe2d5973024db239f87510aa1a43bd57a33d5da1\": rpc error: code = NotFound desc = could not find container \"2755a1138a842965a9e681dcbe2d5973024db239f87510aa1a43bd57a33d5da1\": container with ID starting with 2755a1138a842965a9e681dcbe2d5973024db239f87510aa1a43bd57a33d5da1 not found: ID 
does not exist" Dec 03 14:14:41 crc kubenswrapper[4986]: I1203 14:14:41.029670 4986 scope.go:117] "RemoveContainer" containerID="acd763bd874a7467e3b0490eb7f0d9b06262e531585680c66bb068461a5505bf" Dec 03 14:14:41 crc kubenswrapper[4986]: E1203 14:14:41.029977 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acd763bd874a7467e3b0490eb7f0d9b06262e531585680c66bb068461a5505bf\": container with ID starting with acd763bd874a7467e3b0490eb7f0d9b06262e531585680c66bb068461a5505bf not found: ID does not exist" containerID="acd763bd874a7467e3b0490eb7f0d9b06262e531585680c66bb068461a5505bf" Dec 03 14:14:41 crc kubenswrapper[4986]: I1203 14:14:41.030004 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acd763bd874a7467e3b0490eb7f0d9b06262e531585680c66bb068461a5505bf"} err="failed to get container status \"acd763bd874a7467e3b0490eb7f0d9b06262e531585680c66bb068461a5505bf\": rpc error: code = NotFound desc = could not find container \"acd763bd874a7467e3b0490eb7f0d9b06262e531585680c66bb068461a5505bf\": container with ID starting with acd763bd874a7467e3b0490eb7f0d9b06262e531585680c66bb068461a5505bf not found: ID does not exist" Dec 03 14:14:41 crc kubenswrapper[4986]: I1203 14:14:41.030021 4986 scope.go:117] "RemoveContainer" containerID="1c19e5f3156b5f51247685ea0314b5288f7257646c53b4fa5d037ca5bfc8eef3" Dec 03 14:14:41 crc kubenswrapper[4986]: E1203 14:14:41.030316 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c19e5f3156b5f51247685ea0314b5288f7257646c53b4fa5d037ca5bfc8eef3\": container with ID starting with 1c19e5f3156b5f51247685ea0314b5288f7257646c53b4fa5d037ca5bfc8eef3 not found: ID does not exist" containerID="1c19e5f3156b5f51247685ea0314b5288f7257646c53b4fa5d037ca5bfc8eef3" Dec 03 14:14:41 crc kubenswrapper[4986]: I1203 14:14:41.030359 4986 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c19e5f3156b5f51247685ea0314b5288f7257646c53b4fa5d037ca5bfc8eef3"} err="failed to get container status \"1c19e5f3156b5f51247685ea0314b5288f7257646c53b4fa5d037ca5bfc8eef3\": rpc error: code = NotFound desc = could not find container \"1c19e5f3156b5f51247685ea0314b5288f7257646c53b4fa5d037ca5bfc8eef3\": container with ID starting with 1c19e5f3156b5f51247685ea0314b5288f7257646c53b4fa5d037ca5bfc8eef3 not found: ID does not exist" Dec 03 14:14:41 crc kubenswrapper[4986]: I1203 14:14:41.043370 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/930b386c-b439-49c0-9426-e8b41695de6d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "930b386c-b439-49c0-9426-e8b41695de6d" (UID: "930b386c-b439-49c0-9426-e8b41695de6d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 14:14:41 crc kubenswrapper[4986]: I1203 14:14:41.127953 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vwhr\" (UniqueName: \"kubernetes.io/projected/930b386c-b439-49c0-9426-e8b41695de6d-kube-api-access-4vwhr\") on node \"crc\" DevicePath \"\"" Dec 03 14:14:41 crc kubenswrapper[4986]: I1203 14:14:41.127994 4986 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/930b386c-b439-49c0-9426-e8b41695de6d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 14:14:41 crc kubenswrapper[4986]: I1203 14:14:41.266731 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jvnfk"] Dec 03 14:14:41 crc kubenswrapper[4986]: I1203 14:14:41.279345 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jvnfk"] Dec 03 14:14:42 crc kubenswrapper[4986]: I1203 14:14:42.957157 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="930b386c-b439-49c0-9426-e8b41695de6d" path="/var/lib/kubelet/pods/930b386c-b439-49c0-9426-e8b41695de6d/volumes" Dec 03 14:14:58 crc kubenswrapper[4986]: I1203 14:14:58.929809 4986 scope.go:117] "RemoveContainer" containerID="ae5dce02cecac31dba54a82509965958a2c10c78fbd1119e71ab949183dfae6c" Dec 03 14:14:59 crc kubenswrapper[4986]: I1203 14:14:59.007100 4986 scope.go:117] "RemoveContainer" containerID="b90a08fe18e948af22be71a0e9977a45093d62e78faf7e0da4f5e8927a283c4e" Dec 03 14:15:00 crc kubenswrapper[4986]: I1203 14:15:00.158633 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412855-vm44h"] Dec 03 14:15:00 crc kubenswrapper[4986]: E1203 14:15:00.159656 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="930b386c-b439-49c0-9426-e8b41695de6d" containerName="registry-server" Dec 03 14:15:00 crc kubenswrapper[4986]: I1203 14:15:00.159690 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="930b386c-b439-49c0-9426-e8b41695de6d" containerName="registry-server" Dec 03 14:15:00 crc kubenswrapper[4986]: E1203 14:15:00.159725 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="930b386c-b439-49c0-9426-e8b41695de6d" containerName="extract-utilities" Dec 03 14:15:00 crc kubenswrapper[4986]: I1203 14:15:00.159742 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="930b386c-b439-49c0-9426-e8b41695de6d" containerName="extract-utilities" Dec 03 14:15:00 crc kubenswrapper[4986]: E1203 14:15:00.159796 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="930b386c-b439-49c0-9426-e8b41695de6d" containerName="extract-content" Dec 03 14:15:00 crc kubenswrapper[4986]: I1203 14:15:00.159813 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="930b386c-b439-49c0-9426-e8b41695de6d" containerName="extract-content" Dec 03 14:15:00 crc kubenswrapper[4986]: I1203 14:15:00.160481 4986 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="930b386c-b439-49c0-9426-e8b41695de6d" containerName="registry-server" Dec 03 14:15:00 crc kubenswrapper[4986]: I1203 14:15:00.162703 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412855-vm44h" Dec 03 14:15:00 crc kubenswrapper[4986]: I1203 14:15:00.166098 4986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 03 14:15:00 crc kubenswrapper[4986]: I1203 14:15:00.167009 4986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 03 14:15:00 crc kubenswrapper[4986]: I1203 14:15:00.175720 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412855-vm44h"] Dec 03 14:15:00 crc kubenswrapper[4986]: I1203 14:15:00.197332 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4dac5f8-8d34-4858-aa83-0c5d344dd2da-config-volume\") pod \"collect-profiles-29412855-vm44h\" (UID: \"c4dac5f8-8d34-4858-aa83-0c5d344dd2da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412855-vm44h" Dec 03 14:15:00 crc kubenswrapper[4986]: I1203 14:15:00.197404 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c4dac5f8-8d34-4858-aa83-0c5d344dd2da-secret-volume\") pod \"collect-profiles-29412855-vm44h\" (UID: \"c4dac5f8-8d34-4858-aa83-0c5d344dd2da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412855-vm44h" Dec 03 14:15:00 crc kubenswrapper[4986]: I1203 14:15:00.197444 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx7kq\" (UniqueName: 
\"kubernetes.io/projected/c4dac5f8-8d34-4858-aa83-0c5d344dd2da-kube-api-access-kx7kq\") pod \"collect-profiles-29412855-vm44h\" (UID: \"c4dac5f8-8d34-4858-aa83-0c5d344dd2da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412855-vm44h" Dec 03 14:15:00 crc kubenswrapper[4986]: I1203 14:15:00.303021 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4dac5f8-8d34-4858-aa83-0c5d344dd2da-config-volume\") pod \"collect-profiles-29412855-vm44h\" (UID: \"c4dac5f8-8d34-4858-aa83-0c5d344dd2da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412855-vm44h" Dec 03 14:15:00 crc kubenswrapper[4986]: I1203 14:15:00.303115 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c4dac5f8-8d34-4858-aa83-0c5d344dd2da-secret-volume\") pod \"collect-profiles-29412855-vm44h\" (UID: \"c4dac5f8-8d34-4858-aa83-0c5d344dd2da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412855-vm44h" Dec 03 14:15:00 crc kubenswrapper[4986]: I1203 14:15:00.303184 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx7kq\" (UniqueName: \"kubernetes.io/projected/c4dac5f8-8d34-4858-aa83-0c5d344dd2da-kube-api-access-kx7kq\") pod \"collect-profiles-29412855-vm44h\" (UID: \"c4dac5f8-8d34-4858-aa83-0c5d344dd2da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412855-vm44h" Dec 03 14:15:00 crc kubenswrapper[4986]: I1203 14:15:00.304743 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4dac5f8-8d34-4858-aa83-0c5d344dd2da-config-volume\") pod \"collect-profiles-29412855-vm44h\" (UID: \"c4dac5f8-8d34-4858-aa83-0c5d344dd2da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412855-vm44h" Dec 03 14:15:00 crc kubenswrapper[4986]: I1203 
14:15:00.309923 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c4dac5f8-8d34-4858-aa83-0c5d344dd2da-secret-volume\") pod \"collect-profiles-29412855-vm44h\" (UID: \"c4dac5f8-8d34-4858-aa83-0c5d344dd2da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412855-vm44h" Dec 03 14:15:00 crc kubenswrapper[4986]: I1203 14:15:00.333507 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx7kq\" (UniqueName: \"kubernetes.io/projected/c4dac5f8-8d34-4858-aa83-0c5d344dd2da-kube-api-access-kx7kq\") pod \"collect-profiles-29412855-vm44h\" (UID: \"c4dac5f8-8d34-4858-aa83-0c5d344dd2da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412855-vm44h" Dec 03 14:15:00 crc kubenswrapper[4986]: I1203 14:15:00.501218 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412855-vm44h" Dec 03 14:15:00 crc kubenswrapper[4986]: I1203 14:15:00.919208 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412855-vm44h"] Dec 03 14:15:00 crc kubenswrapper[4986]: W1203 14:15:00.924489 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4dac5f8_8d34_4858_aa83_0c5d344dd2da.slice/crio-ebadd1f5dfc0b06c36afcca9986e9a419148411be66e4916ea7d0860d4399c3a WatchSource:0}: Error finding container ebadd1f5dfc0b06c36afcca9986e9a419148411be66e4916ea7d0860d4399c3a: Status 404 returned error can't find the container with id ebadd1f5dfc0b06c36afcca9986e9a419148411be66e4916ea7d0860d4399c3a Dec 03 14:15:01 crc kubenswrapper[4986]: I1203 14:15:01.157775 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412855-vm44h" 
event={"ID":"c4dac5f8-8d34-4858-aa83-0c5d344dd2da","Type":"ContainerStarted","Data":"e6fe51b8e7d441ae091dd5b7dd9f64e0ebbc03afd53ac63d04b34ade4efb831e"} Dec 03 14:15:01 crc kubenswrapper[4986]: I1203 14:15:01.158073 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412855-vm44h" event={"ID":"c4dac5f8-8d34-4858-aa83-0c5d344dd2da","Type":"ContainerStarted","Data":"ebadd1f5dfc0b06c36afcca9986e9a419148411be66e4916ea7d0860d4399c3a"} Dec 03 14:15:01 crc kubenswrapper[4986]: I1203 14:15:01.178070 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29412855-vm44h" podStartSLOduration=1.178042342 podStartE2EDuration="1.178042342s" podCreationTimestamp="2025-12-03 14:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 14:15:01.170098681 +0000 UTC m=+4760.636529882" watchObservedRunningTime="2025-12-03 14:15:01.178042342 +0000 UTC m=+4760.644473573" Dec 03 14:15:02 crc kubenswrapper[4986]: I1203 14:15:02.167486 4986 generic.go:334] "Generic (PLEG): container finished" podID="c4dac5f8-8d34-4858-aa83-0c5d344dd2da" containerID="e6fe51b8e7d441ae091dd5b7dd9f64e0ebbc03afd53ac63d04b34ade4efb831e" exitCode=0 Dec 03 14:15:02 crc kubenswrapper[4986]: I1203 14:15:02.167542 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412855-vm44h" event={"ID":"c4dac5f8-8d34-4858-aa83-0c5d344dd2da","Type":"ContainerDied","Data":"e6fe51b8e7d441ae091dd5b7dd9f64e0ebbc03afd53ac63d04b34ade4efb831e"} Dec 03 14:15:03 crc kubenswrapper[4986]: I1203 14:15:03.519005 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412855-vm44h" Dec 03 14:15:03 crc kubenswrapper[4986]: I1203 14:15:03.573866 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4dac5f8-8d34-4858-aa83-0c5d344dd2da-config-volume\") pod \"c4dac5f8-8d34-4858-aa83-0c5d344dd2da\" (UID: \"c4dac5f8-8d34-4858-aa83-0c5d344dd2da\") " Dec 03 14:15:03 crc kubenswrapper[4986]: I1203 14:15:03.574052 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kx7kq\" (UniqueName: \"kubernetes.io/projected/c4dac5f8-8d34-4858-aa83-0c5d344dd2da-kube-api-access-kx7kq\") pod \"c4dac5f8-8d34-4858-aa83-0c5d344dd2da\" (UID: \"c4dac5f8-8d34-4858-aa83-0c5d344dd2da\") " Dec 03 14:15:03 crc kubenswrapper[4986]: I1203 14:15:03.574094 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c4dac5f8-8d34-4858-aa83-0c5d344dd2da-secret-volume\") pod \"c4dac5f8-8d34-4858-aa83-0c5d344dd2da\" (UID: \"c4dac5f8-8d34-4858-aa83-0c5d344dd2da\") " Dec 03 14:15:03 crc kubenswrapper[4986]: I1203 14:15:03.575203 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4dac5f8-8d34-4858-aa83-0c5d344dd2da-config-volume" (OuterVolumeSpecName: "config-volume") pod "c4dac5f8-8d34-4858-aa83-0c5d344dd2da" (UID: "c4dac5f8-8d34-4858-aa83-0c5d344dd2da"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 14:15:03 crc kubenswrapper[4986]: I1203 14:15:03.575673 4986 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4dac5f8-8d34-4858-aa83-0c5d344dd2da-config-volume\") on node \"crc\" DevicePath \"\"" Dec 03 14:15:03 crc kubenswrapper[4986]: I1203 14:15:03.583193 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4dac5f8-8d34-4858-aa83-0c5d344dd2da-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c4dac5f8-8d34-4858-aa83-0c5d344dd2da" (UID: "c4dac5f8-8d34-4858-aa83-0c5d344dd2da"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 14:15:03 crc kubenswrapper[4986]: I1203 14:15:03.584188 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4dac5f8-8d34-4858-aa83-0c5d344dd2da-kube-api-access-kx7kq" (OuterVolumeSpecName: "kube-api-access-kx7kq") pod "c4dac5f8-8d34-4858-aa83-0c5d344dd2da" (UID: "c4dac5f8-8d34-4858-aa83-0c5d344dd2da"). InnerVolumeSpecName "kube-api-access-kx7kq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 14:15:03 crc kubenswrapper[4986]: I1203 14:15:03.677681 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kx7kq\" (UniqueName: \"kubernetes.io/projected/c4dac5f8-8d34-4858-aa83-0c5d344dd2da-kube-api-access-kx7kq\") on node \"crc\" DevicePath \"\"" Dec 03 14:15:03 crc kubenswrapper[4986]: I1203 14:15:03.677710 4986 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c4dac5f8-8d34-4858-aa83-0c5d344dd2da-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 03 14:15:04 crc kubenswrapper[4986]: I1203 14:15:04.186247 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412855-vm44h" event={"ID":"c4dac5f8-8d34-4858-aa83-0c5d344dd2da","Type":"ContainerDied","Data":"ebadd1f5dfc0b06c36afcca9986e9a419148411be66e4916ea7d0860d4399c3a"} Dec 03 14:15:04 crc kubenswrapper[4986]: I1203 14:15:04.186565 4986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ebadd1f5dfc0b06c36afcca9986e9a419148411be66e4916ea7d0860d4399c3a" Dec 03 14:15:04 crc kubenswrapper[4986]: I1203 14:15:04.186350 4986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412855-vm44h" Dec 03 14:15:04 crc kubenswrapper[4986]: I1203 14:15:04.257408 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412810-v7zdw"] Dec 03 14:15:04 crc kubenswrapper[4986]: I1203 14:15:04.265113 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412810-v7zdw"] Dec 03 14:15:04 crc kubenswrapper[4986]: I1203 14:15:04.963402 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aff8aecd-2002-457a-86e6-9fdb87097b4f" path="/var/lib/kubelet/pods/aff8aecd-2002-457a-86e6-9fdb87097b4f/volumes" Dec 03 14:15:59 crc kubenswrapper[4986]: I1203 14:15:59.072676 4986 scope.go:117] "RemoveContainer" containerID="8c5c7ead76201c9e43159330a03af4a3d133bbfaa7f01ccb646d34f475765a22" Dec 03 14:16:03 crc kubenswrapper[4986]: I1203 14:16:03.491725 4986 patch_prober.go:28] interesting pod/machine-config-daemon-xggpj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 14:16:03 crc kubenswrapper[4986]: I1203 14:16:03.492201 4986 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 14:16:33 crc kubenswrapper[4986]: I1203 14:16:33.491228 4986 patch_prober.go:28] interesting pod/machine-config-daemon-xggpj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Dec 03 14:16:33 crc kubenswrapper[4986]: I1203 14:16:33.492151 4986 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 14:17:03 crc kubenswrapper[4986]: I1203 14:17:03.490718 4986 patch_prober.go:28] interesting pod/machine-config-daemon-xggpj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 14:17:03 crc kubenswrapper[4986]: I1203 14:17:03.491450 4986 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 14:17:03 crc kubenswrapper[4986]: I1203 14:17:03.491522 4986 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" Dec 03 14:17:03 crc kubenswrapper[4986]: I1203 14:17:03.492525 4986 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a1927d3957f997e0b9f0a26a316582ff4f5de725b2545467ef2eb90d71e4bf06"} pod="openshift-machine-config-operator/machine-config-daemon-xggpj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 14:17:03 crc kubenswrapper[4986]: I1203 14:17:03.492618 4986 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-xggpj" podUID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerName="machine-config-daemon" containerID="cri-o://a1927d3957f997e0b9f0a26a316582ff4f5de725b2545467ef2eb90d71e4bf06" gracePeriod=600 Dec 03 14:17:03 crc kubenswrapper[4986]: I1203 14:17:03.766297 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mln4j"] Dec 03 14:17:03 crc kubenswrapper[4986]: E1203 14:17:03.767044 4986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4dac5f8-8d34-4858-aa83-0c5d344dd2da" containerName="collect-profiles" Dec 03 14:17:03 crc kubenswrapper[4986]: I1203 14:17:03.767065 4986 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4dac5f8-8d34-4858-aa83-0c5d344dd2da" containerName="collect-profiles" Dec 03 14:17:03 crc kubenswrapper[4986]: I1203 14:17:03.767269 4986 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4dac5f8-8d34-4858-aa83-0c5d344dd2da" containerName="collect-profiles" Dec 03 14:17:03 crc kubenswrapper[4986]: I1203 14:17:03.769174 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mln4j" Dec 03 14:17:03 crc kubenswrapper[4986]: I1203 14:17:03.786024 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mln4j"] Dec 03 14:17:03 crc kubenswrapper[4986]: I1203 14:17:03.871263 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bd04fa2-be2e-493f-a5d7-019917931c47-utilities\") pod \"certified-operators-mln4j\" (UID: \"1bd04fa2-be2e-493f-a5d7-019917931c47\") " pod="openshift-marketplace/certified-operators-mln4j" Dec 03 14:17:03 crc kubenswrapper[4986]: I1203 14:17:03.871346 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs982\" (UniqueName: \"kubernetes.io/projected/1bd04fa2-be2e-493f-a5d7-019917931c47-kube-api-access-qs982\") pod \"certified-operators-mln4j\" (UID: \"1bd04fa2-be2e-493f-a5d7-019917931c47\") " pod="openshift-marketplace/certified-operators-mln4j" Dec 03 14:17:03 crc kubenswrapper[4986]: I1203 14:17:03.871716 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bd04fa2-be2e-493f-a5d7-019917931c47-catalog-content\") pod \"certified-operators-mln4j\" (UID: \"1bd04fa2-be2e-493f-a5d7-019917931c47\") " pod="openshift-marketplace/certified-operators-mln4j" Dec 03 14:17:03 crc kubenswrapper[4986]: I1203 14:17:03.973238 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bd04fa2-be2e-493f-a5d7-019917931c47-catalog-content\") pod \"certified-operators-mln4j\" (UID: \"1bd04fa2-be2e-493f-a5d7-019917931c47\") " pod="openshift-marketplace/certified-operators-mln4j" Dec 03 14:17:03 crc kubenswrapper[4986]: I1203 14:17:03.973331 4986 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bd04fa2-be2e-493f-a5d7-019917931c47-utilities\") pod \"certified-operators-mln4j\" (UID: \"1bd04fa2-be2e-493f-a5d7-019917931c47\") " pod="openshift-marketplace/certified-operators-mln4j" Dec 03 14:17:03 crc kubenswrapper[4986]: I1203 14:17:03.973380 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qs982\" (UniqueName: \"kubernetes.io/projected/1bd04fa2-be2e-493f-a5d7-019917931c47-kube-api-access-qs982\") pod \"certified-operators-mln4j\" (UID: \"1bd04fa2-be2e-493f-a5d7-019917931c47\") " pod="openshift-marketplace/certified-operators-mln4j" Dec 03 14:17:03 crc kubenswrapper[4986]: I1203 14:17:03.973949 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bd04fa2-be2e-493f-a5d7-019917931c47-catalog-content\") pod \"certified-operators-mln4j\" (UID: \"1bd04fa2-be2e-493f-a5d7-019917931c47\") " pod="openshift-marketplace/certified-operators-mln4j" Dec 03 14:17:03 crc kubenswrapper[4986]: I1203 14:17:03.974400 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bd04fa2-be2e-493f-a5d7-019917931c47-utilities\") pod \"certified-operators-mln4j\" (UID: \"1bd04fa2-be2e-493f-a5d7-019917931c47\") " pod="openshift-marketplace/certified-operators-mln4j" Dec 03 14:17:03 crc kubenswrapper[4986]: I1203 14:17:03.997102 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qs982\" (UniqueName: \"kubernetes.io/projected/1bd04fa2-be2e-493f-a5d7-019917931c47-kube-api-access-qs982\") pod \"certified-operators-mln4j\" (UID: \"1bd04fa2-be2e-493f-a5d7-019917931c47\") " pod="openshift-marketplace/certified-operators-mln4j" Dec 03 14:17:04 crc kubenswrapper[4986]: I1203 14:17:04.086023 4986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mln4j" Dec 03 14:17:04 crc kubenswrapper[4986]: I1203 14:17:04.382694 4986 generic.go:334] "Generic (PLEG): container finished" podID="f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c" containerID="a1927d3957f997e0b9f0a26a316582ff4f5de725b2545467ef2eb90d71e4bf06" exitCode=0 Dec 03 14:17:04 crc kubenswrapper[4986]: I1203 14:17:04.383015 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" event={"ID":"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c","Type":"ContainerDied","Data":"a1927d3957f997e0b9f0a26a316582ff4f5de725b2545467ef2eb90d71e4bf06"} Dec 03 14:17:04 crc kubenswrapper[4986]: I1203 14:17:04.383047 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xggpj" event={"ID":"f0f15c1f-4bb1-4375-a07f-3ecd5818fa8c","Type":"ContainerStarted","Data":"c18a858264c27c98d40fad277ad9f7534ea23c009895e2cbd47fb03455e539bf"} Dec 03 14:17:04 crc kubenswrapper[4986]: I1203 14:17:04.383065 4986 scope.go:117] "RemoveContainer" containerID="9908c5bc4e1f9c54ddf6781265a9653bd441c26c6cebabaf9a35e940adc79994" Dec 03 14:17:04 crc kubenswrapper[4986]: I1203 14:17:04.638427 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mln4j"] Dec 03 14:17:05 crc kubenswrapper[4986]: I1203 14:17:05.393371 4986 generic.go:334] "Generic (PLEG): container finished" podID="1bd04fa2-be2e-493f-a5d7-019917931c47" containerID="a21ae30f374daaa50b3506ca205dfa49b5af77374452b03f082802e595392f5f" exitCode=0 Dec 03 14:17:05 crc kubenswrapper[4986]: I1203 14:17:05.393572 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mln4j" event={"ID":"1bd04fa2-be2e-493f-a5d7-019917931c47","Type":"ContainerDied","Data":"a21ae30f374daaa50b3506ca205dfa49b5af77374452b03f082802e595392f5f"} Dec 03 14:17:05 crc kubenswrapper[4986]: I1203 14:17:05.393914 
4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mln4j" event={"ID":"1bd04fa2-be2e-493f-a5d7-019917931c47","Type":"ContainerStarted","Data":"5776afe4d68113a2ac0020e1c9f301302d36bdaf8b99392bdd53b70f4af16399"} Dec 03 14:17:06 crc kubenswrapper[4986]: I1203 14:17:06.417116 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mln4j" event={"ID":"1bd04fa2-be2e-493f-a5d7-019917931c47","Type":"ContainerStarted","Data":"1f14e251db31114e810b9c2f6190be7b4496de267555fae05594164529d80abe"} Dec 03 14:17:06 crc kubenswrapper[4986]: I1203 14:17:06.768806 4986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kx2v2"] Dec 03 14:17:06 crc kubenswrapper[4986]: I1203 14:17:06.772044 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kx2v2" Dec 03 14:17:06 crc kubenswrapper[4986]: I1203 14:17:06.792314 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kx2v2"] Dec 03 14:17:06 crc kubenswrapper[4986]: I1203 14:17:06.827864 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbsk2\" (UniqueName: \"kubernetes.io/projected/f50966d5-a8bb-4623-928a-0b947a7f53cb-kube-api-access-tbsk2\") pod \"redhat-operators-kx2v2\" (UID: \"f50966d5-a8bb-4623-928a-0b947a7f53cb\") " pod="openshift-marketplace/redhat-operators-kx2v2" Dec 03 14:17:06 crc kubenswrapper[4986]: I1203 14:17:06.828343 4986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f50966d5-a8bb-4623-928a-0b947a7f53cb-utilities\") pod \"redhat-operators-kx2v2\" (UID: \"f50966d5-a8bb-4623-928a-0b947a7f53cb\") " pod="openshift-marketplace/redhat-operators-kx2v2" Dec 03 14:17:06 crc kubenswrapper[4986]: I1203 14:17:06.828578 4986 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f50966d5-a8bb-4623-928a-0b947a7f53cb-catalog-content\") pod \"redhat-operators-kx2v2\" (UID: \"f50966d5-a8bb-4623-928a-0b947a7f53cb\") " pod="openshift-marketplace/redhat-operators-kx2v2"
Dec 03 14:17:06 crc kubenswrapper[4986]: I1203 14:17:06.931766 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f50966d5-a8bb-4623-928a-0b947a7f53cb-catalog-content\") pod \"redhat-operators-kx2v2\" (UID: \"f50966d5-a8bb-4623-928a-0b947a7f53cb\") " pod="openshift-marketplace/redhat-operators-kx2v2"
Dec 03 14:17:06 crc kubenswrapper[4986]: I1203 14:17:06.931950 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbsk2\" (UniqueName: \"kubernetes.io/projected/f50966d5-a8bb-4623-928a-0b947a7f53cb-kube-api-access-tbsk2\") pod \"redhat-operators-kx2v2\" (UID: \"f50966d5-a8bb-4623-928a-0b947a7f53cb\") " pod="openshift-marketplace/redhat-operators-kx2v2"
Dec 03 14:17:06 crc kubenswrapper[4986]: I1203 14:17:06.932122 4986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f50966d5-a8bb-4623-928a-0b947a7f53cb-utilities\") pod \"redhat-operators-kx2v2\" (UID: \"f50966d5-a8bb-4623-928a-0b947a7f53cb\") " pod="openshift-marketplace/redhat-operators-kx2v2"
Dec 03 14:17:06 crc kubenswrapper[4986]: I1203 14:17:06.932428 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f50966d5-a8bb-4623-928a-0b947a7f53cb-catalog-content\") pod \"redhat-operators-kx2v2\" (UID: \"f50966d5-a8bb-4623-928a-0b947a7f53cb\") " pod="openshift-marketplace/redhat-operators-kx2v2"
Dec 03 14:17:06 crc kubenswrapper[4986]: I1203 14:17:06.932695 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f50966d5-a8bb-4623-928a-0b947a7f53cb-utilities\") pod \"redhat-operators-kx2v2\" (UID: \"f50966d5-a8bb-4623-928a-0b947a7f53cb\") " pod="openshift-marketplace/redhat-operators-kx2v2"
Dec 03 14:17:06 crc kubenswrapper[4986]: I1203 14:17:06.962133 4986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbsk2\" (UniqueName: \"kubernetes.io/projected/f50966d5-a8bb-4623-928a-0b947a7f53cb-kube-api-access-tbsk2\") pod \"redhat-operators-kx2v2\" (UID: \"f50966d5-a8bb-4623-928a-0b947a7f53cb\") " pod="openshift-marketplace/redhat-operators-kx2v2"
Dec 03 14:17:07 crc kubenswrapper[4986]: I1203 14:17:07.098486 4986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kx2v2"
Dec 03 14:17:07 crc kubenswrapper[4986]: I1203 14:17:07.427344 4986 generic.go:334] "Generic (PLEG): container finished" podID="1bd04fa2-be2e-493f-a5d7-019917931c47" containerID="1f14e251db31114e810b9c2f6190be7b4496de267555fae05594164529d80abe" exitCode=0
Dec 03 14:17:07 crc kubenswrapper[4986]: I1203 14:17:07.427381 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mln4j" event={"ID":"1bd04fa2-be2e-493f-a5d7-019917931c47","Type":"ContainerDied","Data":"1f14e251db31114e810b9c2f6190be7b4496de267555fae05594164529d80abe"}
Dec 03 14:17:07 crc kubenswrapper[4986]: I1203 14:17:07.598237 4986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kx2v2"]
Dec 03 14:17:07 crc kubenswrapper[4986]: W1203 14:17:07.616515 4986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf50966d5_a8bb_4623_928a_0b947a7f53cb.slice/crio-7fd1e5e44ba250e099a832e0d391f767a78e3770c84f037691235ffd625bd431 WatchSource:0}: Error finding container 7fd1e5e44ba250e099a832e0d391f767a78e3770c84f037691235ffd625bd431: Status 404 returned error can't find the container with id 7fd1e5e44ba250e099a832e0d391f767a78e3770c84f037691235ffd625bd431
Dec 03 14:17:08 crc kubenswrapper[4986]: I1203 14:17:08.446458 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mln4j" event={"ID":"1bd04fa2-be2e-493f-a5d7-019917931c47","Type":"ContainerStarted","Data":"8acee098c11a5ee93558f611ac9d1089fb3ccfe06887a633f2237a7cd2ba5e15"}
Dec 03 14:17:08 crc kubenswrapper[4986]: I1203 14:17:08.451623 4986 generic.go:334] "Generic (PLEG): container finished" podID="f50966d5-a8bb-4623-928a-0b947a7f53cb" containerID="e0a62524c493c996c6fffb33fb234a68e47e04baa3921ca2768e36a543ce7eeb" exitCode=0
Dec 03 14:17:08 crc kubenswrapper[4986]: I1203 14:17:08.451667 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kx2v2" event={"ID":"f50966d5-a8bb-4623-928a-0b947a7f53cb","Type":"ContainerDied","Data":"e0a62524c493c996c6fffb33fb234a68e47e04baa3921ca2768e36a543ce7eeb"}
Dec 03 14:17:08 crc kubenswrapper[4986]: I1203 14:17:08.451692 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kx2v2" event={"ID":"f50966d5-a8bb-4623-928a-0b947a7f53cb","Type":"ContainerStarted","Data":"7fd1e5e44ba250e099a832e0d391f767a78e3770c84f037691235ffd625bd431"}
Dec 03 14:17:08 crc kubenswrapper[4986]: I1203 14:17:08.487639 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mln4j" podStartSLOduration=3.075487969 podStartE2EDuration="5.487621274s" podCreationTimestamp="2025-12-03 14:17:03 +0000 UTC" firstStartedPulling="2025-12-03 14:17:05.396437583 +0000 UTC m=+4884.862868774" lastFinishedPulling="2025-12-03 14:17:07.808570878 +0000 UTC m=+4887.275002079" observedRunningTime="2025-12-03 14:17:08.47702421 +0000 UTC m=+4887.943455401" watchObservedRunningTime="2025-12-03 14:17:08.487621274 +0000 UTC m=+4887.954052465"
Dec 03 14:17:10 crc kubenswrapper[4986]: I1203 14:17:10.494049 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kx2v2" event={"ID":"f50966d5-a8bb-4623-928a-0b947a7f53cb","Type":"ContainerStarted","Data":"a648fa90bfeed49e38660b03fe0476a035f69baed9ed11de07795084021a60d5"}
Dec 03 14:17:11 crc kubenswrapper[4986]: I1203 14:17:11.519037 4986 generic.go:334] "Generic (PLEG): container finished" podID="f50966d5-a8bb-4623-928a-0b947a7f53cb" containerID="a648fa90bfeed49e38660b03fe0476a035f69baed9ed11de07795084021a60d5" exitCode=0
Dec 03 14:17:11 crc kubenswrapper[4986]: I1203 14:17:11.519094 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kx2v2" event={"ID":"f50966d5-a8bb-4623-928a-0b947a7f53cb","Type":"ContainerDied","Data":"a648fa90bfeed49e38660b03fe0476a035f69baed9ed11de07795084021a60d5"}
Dec 03 14:17:13 crc kubenswrapper[4986]: I1203 14:17:13.539607 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kx2v2" event={"ID":"f50966d5-a8bb-4623-928a-0b947a7f53cb","Type":"ContainerStarted","Data":"15b6d5c28a735a03c169e3ebb51e6d7389fbc6ae50ff360d2682ceaf79e43350"}
Dec 03 14:17:13 crc kubenswrapper[4986]: I1203 14:17:13.562773 4986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kx2v2" podStartSLOduration=3.153964299 podStartE2EDuration="7.562755759s" podCreationTimestamp="2025-12-03 14:17:06 +0000 UTC" firstStartedPulling="2025-12-03 14:17:08.454554991 +0000 UTC m=+4887.920986182" lastFinishedPulling="2025-12-03 14:17:12.863346451 +0000 UTC m=+4892.329777642" observedRunningTime="2025-12-03 14:17:13.556572624 +0000 UTC m=+4893.023003835" watchObservedRunningTime="2025-12-03 14:17:13.562755759 +0000 UTC m=+4893.029186950"
Dec 03 14:17:14 crc kubenswrapper[4986]: I1203 14:17:14.086666 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mln4j"
Dec 03 14:17:14 crc kubenswrapper[4986]: I1203 14:17:14.086980 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mln4j"
Dec 03 14:17:14 crc kubenswrapper[4986]: I1203 14:17:14.139153 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mln4j"
Dec 03 14:17:14 crc kubenswrapper[4986]: I1203 14:17:14.598342 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mln4j"
Dec 03 14:17:15 crc kubenswrapper[4986]: I1203 14:17:15.166066 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mln4j"]
Dec 03 14:17:16 crc kubenswrapper[4986]: I1203 14:17:16.566777 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mln4j" podUID="1bd04fa2-be2e-493f-a5d7-019917931c47" containerName="registry-server" containerID="cri-o://8acee098c11a5ee93558f611ac9d1089fb3ccfe06887a633f2237a7cd2ba5e15" gracePeriod=2
Dec 03 14:17:17 crc kubenswrapper[4986]: I1203 14:17:17.099308 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kx2v2"
Dec 03 14:17:17 crc kubenswrapper[4986]: I1203 14:17:17.099619 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kx2v2"
Dec 03 14:17:18 crc kubenswrapper[4986]: I1203 14:17:18.144860 4986 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kx2v2" podUID="f50966d5-a8bb-4623-928a-0b947a7f53cb" containerName="registry-server" probeResult="failure" output=<
Dec 03 14:17:18 crc kubenswrapper[4986]: timeout: failed to connect service ":50051" within 1s
Dec 03 14:17:18 crc kubenswrapper[4986]: >
Dec 03 14:17:18 crc kubenswrapper[4986]: I1203 14:17:18.588387 4986 generic.go:334] "Generic (PLEG): container finished" podID="1bd04fa2-be2e-493f-a5d7-019917931c47" containerID="8acee098c11a5ee93558f611ac9d1089fb3ccfe06887a633f2237a7cd2ba5e15" exitCode=0
Dec 03 14:17:18 crc kubenswrapper[4986]: I1203 14:17:18.588452 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mln4j" event={"ID":"1bd04fa2-be2e-493f-a5d7-019917931c47","Type":"ContainerDied","Data":"8acee098c11a5ee93558f611ac9d1089fb3ccfe06887a633f2237a7cd2ba5e15"}
Dec 03 14:17:18 crc kubenswrapper[4986]: I1203 14:17:18.837903 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mln4j"
Dec 03 14:17:18 crc kubenswrapper[4986]: I1203 14:17:18.908319 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs982\" (UniqueName: \"kubernetes.io/projected/1bd04fa2-be2e-493f-a5d7-019917931c47-kube-api-access-qs982\") pod \"1bd04fa2-be2e-493f-a5d7-019917931c47\" (UID: \"1bd04fa2-be2e-493f-a5d7-019917931c47\") "
Dec 03 14:17:18 crc kubenswrapper[4986]: I1203 14:17:18.908458 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bd04fa2-be2e-493f-a5d7-019917931c47-utilities\") pod \"1bd04fa2-be2e-493f-a5d7-019917931c47\" (UID: \"1bd04fa2-be2e-493f-a5d7-019917931c47\") "
Dec 03 14:17:18 crc kubenswrapper[4986]: I1203 14:17:18.908670 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bd04fa2-be2e-493f-a5d7-019917931c47-catalog-content\") pod \"1bd04fa2-be2e-493f-a5d7-019917931c47\" (UID: \"1bd04fa2-be2e-493f-a5d7-019917931c47\") "
Dec 03 14:17:18 crc kubenswrapper[4986]: I1203 14:17:18.909652 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bd04fa2-be2e-493f-a5d7-019917931c47-utilities" (OuterVolumeSpecName: "utilities") pod "1bd04fa2-be2e-493f-a5d7-019917931c47" (UID: "1bd04fa2-be2e-493f-a5d7-019917931c47"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 14:17:18 crc kubenswrapper[4986]: I1203 14:17:18.915122 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bd04fa2-be2e-493f-a5d7-019917931c47-kube-api-access-qs982" (OuterVolumeSpecName: "kube-api-access-qs982") pod "1bd04fa2-be2e-493f-a5d7-019917931c47" (UID: "1bd04fa2-be2e-493f-a5d7-019917931c47"). InnerVolumeSpecName "kube-api-access-qs982". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 14:17:18 crc kubenswrapper[4986]: I1203 14:17:18.966090 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bd04fa2-be2e-493f-a5d7-019917931c47-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1bd04fa2-be2e-493f-a5d7-019917931c47" (UID: "1bd04fa2-be2e-493f-a5d7-019917931c47"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 14:17:19 crc kubenswrapper[4986]: I1203 14:17:19.011406 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs982\" (UniqueName: \"kubernetes.io/projected/1bd04fa2-be2e-493f-a5d7-019917931c47-kube-api-access-qs982\") on node \"crc\" DevicePath \"\""
Dec 03 14:17:19 crc kubenswrapper[4986]: I1203 14:17:19.011441 4986 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bd04fa2-be2e-493f-a5d7-019917931c47-utilities\") on node \"crc\" DevicePath \"\""
Dec 03 14:17:19 crc kubenswrapper[4986]: I1203 14:17:19.011451 4986 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bd04fa2-be2e-493f-a5d7-019917931c47-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 03 14:17:19 crc kubenswrapper[4986]: I1203 14:17:19.600131 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mln4j" event={"ID":"1bd04fa2-be2e-493f-a5d7-019917931c47","Type":"ContainerDied","Data":"5776afe4d68113a2ac0020e1c9f301302d36bdaf8b99392bdd53b70f4af16399"}
Dec 03 14:17:19 crc kubenswrapper[4986]: I1203 14:17:19.600559 4986 scope.go:117] "RemoveContainer" containerID="8acee098c11a5ee93558f611ac9d1089fb3ccfe06887a633f2237a7cd2ba5e15"
Dec 03 14:17:19 crc kubenswrapper[4986]: I1203 14:17:19.600739 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mln4j"
Dec 03 14:17:19 crc kubenswrapper[4986]: I1203 14:17:19.624752 4986 scope.go:117] "RemoveContainer" containerID="1f14e251db31114e810b9c2f6190be7b4496de267555fae05594164529d80abe"
Dec 03 14:17:19 crc kubenswrapper[4986]: I1203 14:17:19.654813 4986 scope.go:117] "RemoveContainer" containerID="a21ae30f374daaa50b3506ca205dfa49b5af77374452b03f082802e595392f5f"
Dec 03 14:17:19 crc kubenswrapper[4986]: I1203 14:17:19.669597 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mln4j"]
Dec 03 14:17:19 crc kubenswrapper[4986]: I1203 14:17:19.686086 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mln4j"]
Dec 03 14:17:20 crc kubenswrapper[4986]: I1203 14:17:20.959412 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bd04fa2-be2e-493f-a5d7-019917931c47" path="/var/lib/kubelet/pods/1bd04fa2-be2e-493f-a5d7-019917931c47/volumes"
Dec 03 14:17:27 crc kubenswrapper[4986]: I1203 14:17:27.147993 4986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kx2v2"
Dec 03 14:17:27 crc kubenswrapper[4986]: I1203 14:17:27.218247 4986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kx2v2"
Dec 03 14:17:27 crc kubenswrapper[4986]: I1203 14:17:27.391975 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kx2v2"]
Dec 03 14:17:28 crc kubenswrapper[4986]: I1203 14:17:28.706656 4986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kx2v2" podUID="f50966d5-a8bb-4623-928a-0b947a7f53cb" containerName="registry-server" containerID="cri-o://15b6d5c28a735a03c169e3ebb51e6d7389fbc6ae50ff360d2682ceaf79e43350" gracePeriod=2
Dec 03 14:17:29 crc kubenswrapper[4986]: I1203 14:17:29.173056 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kx2v2"
Dec 03 14:17:29 crc kubenswrapper[4986]: I1203 14:17:29.200497 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f50966d5-a8bb-4623-928a-0b947a7f53cb-utilities\") pod \"f50966d5-a8bb-4623-928a-0b947a7f53cb\" (UID: \"f50966d5-a8bb-4623-928a-0b947a7f53cb\") "
Dec 03 14:17:29 crc kubenswrapper[4986]: I1203 14:17:29.200659 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbsk2\" (UniqueName: \"kubernetes.io/projected/f50966d5-a8bb-4623-928a-0b947a7f53cb-kube-api-access-tbsk2\") pod \"f50966d5-a8bb-4623-928a-0b947a7f53cb\" (UID: \"f50966d5-a8bb-4623-928a-0b947a7f53cb\") "
Dec 03 14:17:29 crc kubenswrapper[4986]: I1203 14:17:29.200689 4986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f50966d5-a8bb-4623-928a-0b947a7f53cb-catalog-content\") pod \"f50966d5-a8bb-4623-928a-0b947a7f53cb\" (UID: \"f50966d5-a8bb-4623-928a-0b947a7f53cb\") "
Dec 03 14:17:29 crc kubenswrapper[4986]: I1203 14:17:29.201640 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f50966d5-a8bb-4623-928a-0b947a7f53cb-utilities" (OuterVolumeSpecName: "utilities") pod "f50966d5-a8bb-4623-928a-0b947a7f53cb" (UID: "f50966d5-a8bb-4623-928a-0b947a7f53cb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 14:17:29 crc kubenswrapper[4986]: I1203 14:17:29.202479 4986 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f50966d5-a8bb-4623-928a-0b947a7f53cb-utilities\") on node \"crc\" DevicePath \"\""
Dec 03 14:17:29 crc kubenswrapper[4986]: I1203 14:17:29.207984 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f50966d5-a8bb-4623-928a-0b947a7f53cb-kube-api-access-tbsk2" (OuterVolumeSpecName: "kube-api-access-tbsk2") pod "f50966d5-a8bb-4623-928a-0b947a7f53cb" (UID: "f50966d5-a8bb-4623-928a-0b947a7f53cb"). InnerVolumeSpecName "kube-api-access-tbsk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 14:17:29 crc kubenswrapper[4986]: I1203 14:17:29.304500 4986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbsk2\" (UniqueName: \"kubernetes.io/projected/f50966d5-a8bb-4623-928a-0b947a7f53cb-kube-api-access-tbsk2\") on node \"crc\" DevicePath \"\""
Dec 03 14:17:29 crc kubenswrapper[4986]: I1203 14:17:29.311626 4986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f50966d5-a8bb-4623-928a-0b947a7f53cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f50966d5-a8bb-4623-928a-0b947a7f53cb" (UID: "f50966d5-a8bb-4623-928a-0b947a7f53cb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 14:17:29 crc kubenswrapper[4986]: I1203 14:17:29.406498 4986 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f50966d5-a8bb-4623-928a-0b947a7f53cb-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 03 14:17:29 crc kubenswrapper[4986]: I1203 14:17:29.721724 4986 generic.go:334] "Generic (PLEG): container finished" podID="f50966d5-a8bb-4623-928a-0b947a7f53cb" containerID="15b6d5c28a735a03c169e3ebb51e6d7389fbc6ae50ff360d2682ceaf79e43350" exitCode=0
Dec 03 14:17:29 crc kubenswrapper[4986]: I1203 14:17:29.721784 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kx2v2" event={"ID":"f50966d5-a8bb-4623-928a-0b947a7f53cb","Type":"ContainerDied","Data":"15b6d5c28a735a03c169e3ebb51e6d7389fbc6ae50ff360d2682ceaf79e43350"}
Dec 03 14:17:29 crc kubenswrapper[4986]: I1203 14:17:29.722177 4986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kx2v2" event={"ID":"f50966d5-a8bb-4623-928a-0b947a7f53cb","Type":"ContainerDied","Data":"7fd1e5e44ba250e099a832e0d391f767a78e3770c84f037691235ffd625bd431"}
Dec 03 14:17:29 crc kubenswrapper[4986]: I1203 14:17:29.722214 4986 scope.go:117] "RemoveContainer" containerID="15b6d5c28a735a03c169e3ebb51e6d7389fbc6ae50ff360d2682ceaf79e43350"
Dec 03 14:17:29 crc kubenswrapper[4986]: I1203 14:17:29.721811 4986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kx2v2"
Dec 03 14:17:29 crc kubenswrapper[4986]: I1203 14:17:29.778308 4986 scope.go:117] "RemoveContainer" containerID="a648fa90bfeed49e38660b03fe0476a035f69baed9ed11de07795084021a60d5"
Dec 03 14:17:29 crc kubenswrapper[4986]: I1203 14:17:29.781110 4986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kx2v2"]
Dec 03 14:17:29 crc kubenswrapper[4986]: I1203 14:17:29.793475 4986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kx2v2"]
Dec 03 14:17:29 crc kubenswrapper[4986]: I1203 14:17:29.802123 4986 scope.go:117] "RemoveContainer" containerID="e0a62524c493c996c6fffb33fb234a68e47e04baa3921ca2768e36a543ce7eeb"
Dec 03 14:17:29 crc kubenswrapper[4986]: I1203 14:17:29.846748 4986 scope.go:117] "RemoveContainer" containerID="15b6d5c28a735a03c169e3ebb51e6d7389fbc6ae50ff360d2682ceaf79e43350"
Dec 03 14:17:29 crc kubenswrapper[4986]: E1203 14:17:29.847246 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15b6d5c28a735a03c169e3ebb51e6d7389fbc6ae50ff360d2682ceaf79e43350\": container with ID starting with 15b6d5c28a735a03c169e3ebb51e6d7389fbc6ae50ff360d2682ceaf79e43350 not found: ID does not exist" containerID="15b6d5c28a735a03c169e3ebb51e6d7389fbc6ae50ff360d2682ceaf79e43350"
Dec 03 14:17:29 crc kubenswrapper[4986]: I1203 14:17:29.847326 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15b6d5c28a735a03c169e3ebb51e6d7389fbc6ae50ff360d2682ceaf79e43350"} err="failed to get container status \"15b6d5c28a735a03c169e3ebb51e6d7389fbc6ae50ff360d2682ceaf79e43350\": rpc error: code = NotFound desc = could not find container \"15b6d5c28a735a03c169e3ebb51e6d7389fbc6ae50ff360d2682ceaf79e43350\": container with ID starting with 15b6d5c28a735a03c169e3ebb51e6d7389fbc6ae50ff360d2682ceaf79e43350 not found: ID does not exist"
Dec 03 14:17:29 crc kubenswrapper[4986]: I1203 14:17:29.847369 4986 scope.go:117] "RemoveContainer" containerID="a648fa90bfeed49e38660b03fe0476a035f69baed9ed11de07795084021a60d5"
Dec 03 14:17:29 crc kubenswrapper[4986]: E1203 14:17:29.847941 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a648fa90bfeed49e38660b03fe0476a035f69baed9ed11de07795084021a60d5\": container with ID starting with a648fa90bfeed49e38660b03fe0476a035f69baed9ed11de07795084021a60d5 not found: ID does not exist" containerID="a648fa90bfeed49e38660b03fe0476a035f69baed9ed11de07795084021a60d5"
Dec 03 14:17:29 crc kubenswrapper[4986]: I1203 14:17:29.847990 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a648fa90bfeed49e38660b03fe0476a035f69baed9ed11de07795084021a60d5"} err="failed to get container status \"a648fa90bfeed49e38660b03fe0476a035f69baed9ed11de07795084021a60d5\": rpc error: code = NotFound desc = could not find container \"a648fa90bfeed49e38660b03fe0476a035f69baed9ed11de07795084021a60d5\": container with ID starting with a648fa90bfeed49e38660b03fe0476a035f69baed9ed11de07795084021a60d5 not found: ID does not exist"
Dec 03 14:17:29 crc kubenswrapper[4986]: I1203 14:17:29.848021 4986 scope.go:117] "RemoveContainer" containerID="e0a62524c493c996c6fffb33fb234a68e47e04baa3921ca2768e36a543ce7eeb"
Dec 03 14:17:29 crc kubenswrapper[4986]: E1203 14:17:29.848677 4986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0a62524c493c996c6fffb33fb234a68e47e04baa3921ca2768e36a543ce7eeb\": container with ID starting with e0a62524c493c996c6fffb33fb234a68e47e04baa3921ca2768e36a543ce7eeb not found: ID does not exist" containerID="e0a62524c493c996c6fffb33fb234a68e47e04baa3921ca2768e36a543ce7eeb"
Dec 03 14:17:29 crc kubenswrapper[4986]: I1203 14:17:29.848708 4986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0a62524c493c996c6fffb33fb234a68e47e04baa3921ca2768e36a543ce7eeb"} err="failed to get container status \"e0a62524c493c996c6fffb33fb234a68e47e04baa3921ca2768e36a543ce7eeb\": rpc error: code = NotFound desc = could not find container \"e0a62524c493c996c6fffb33fb234a68e47e04baa3921ca2768e36a543ce7eeb\": container with ID starting with e0a62524c493c996c6fffb33fb234a68e47e04baa3921ca2768e36a543ce7eeb not found: ID does not exist"
Dec 03 14:17:30 crc kubenswrapper[4986]: I1203 14:17:30.961409 4986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f50966d5-a8bb-4623-928a-0b947a7f53cb" path="/var/lib/kubelet/pods/f50966d5-a8bb-4623-928a-0b947a7f53cb/volumes"